# Cross validation of the potential fit
The functions required to run the cross-validation in a Jupyter notebook are imported from the source code, along with matplotlib and glob.
```
import popoff.fitting_output as fit_out
import popoff.cross_validation as cv
import matplotlib.pyplot as plt
import glob
```
## Setup of fitting parameters for your structure (Example: core-shell LiNiO$_2$)
`params` is a dictionary (of dictionaries) which contains the main information relating to the system and potentials. There are 5 sub-dictionaries: core_shell, charges, masses, potentials, and cs_springs.
**core_shell**: The keys are the atom types within the structure, and each value is a boolean stating whether that atom type is core-shell or not, i.e. True = core-shell, False = rigid ion.
**charges**: The keys are again the atom types within the structure. The value is either a float giving the atomic charge (for rigid-ion atoms) or a sub-dictionary with the sub-keys 'core' and 'shell', whose values are floats giving the charge on each. Note: if you are fitting the charge separation (dq), the formal charge should be on the core and 0.0 charge on the shell.
**masses**: Similar to charges, the keys are the atom types within the structure, with the values either a float giving the atomic mass, or a sub-dictionary with the sub-keys 'core' and 'shell' whose values give the mass on each (summing to the atomic mass). Mass cannot be fitted, and there is no definitive way of splitting the mass, but convention suggests placing 10% of the mass on the shell; for O (atomic mass 15.999) this gives 14.3991 on the core and 1.5999 on the shell, as in the example below.
**potentials**: The keys are atom label pairs separated by a dash (str), example: `Li-O`. The values are a list of the buckingham potential parameters, i.e. `[a, rho, c]`, where each parameter is a float.
**cs_springs**: The keys are again atom label pairs separated by a dash (str), example: `O-O`. Here the pair denotes the spring between the 'O' core and the 'O' shell. The values are a list of the spring constants, k1 and k2, as floats. Commonly k2 is set to 0.0.
**NOTE: `masses` AND `core_shell` SHOULD BE THE SAME AS THE PARAMETERS DURING THE FIT.**
```
params = {}
params['core_shell'] = { 'Li': False, 'Ni': False, 'O': True }
params['charges'] = {'Li': +1.0,
'Ni': +3.0,
'O': {'core': -2.0,
'shell': 0.0}}
params['masses'] = {'Li': 6.941,
'Ni': 58.6934,
'O': {'core': 14.3991,
'shell': 1.5999} }
params['potentials'] = {'Li-O': [663.111, 0.119, 0.0],
'Ni-O': [1393.540, 0.218, 0.000],
'O-O': [25804.807, 0.284, 0.0]}
params['cs_springs'] = {'O-O' : [20.0, 0.0]}
```
## Define directory paths and names/number of structures
`structures`, `structures_in_fit`, and `fits` are required inputs. These designate how many structures are in the training set (saved in the directory named `vaspruns`), how many of those structures were fitted to (so the cross-validation matches the fit), and how many cross-validations to conduct, respectively. **NB: Carefully consider the last point. You cannot cross-validate with structures used in the fit, so if you have fitted to 10/15 structures, you cannot validate against 10 structures as only 5 are available.**
The `head_directory_name` is set up to output the fit to a results directory, with a sub directory of the number of structures fitted, i.e. where your data is saved for each fit of that type. For example, if you have fit 5 structures, your data would likely be in 'results/5_fit'. This can be different if you have changed the default location. `cv_directory_name` is the name of the sub directory where you wish to save your cross-validation data. The combination of these directories makes the output directory `head_output_directory`.
```
structures = 15 #Total number of structures in the training set
structures_in_fit = 5 #Number of structures you wish to fit to
fits = 8 #Number of fits to run
# Create cross validation directory
head_directory_name = 'results/{}_fit'.format(structures_in_fit)
cv_directory_name = 'cross_validation'
head_output_directory = fit_out.create_directory(head_directory_name, cv_directory_name)
```
## Run cross-validation using the fitted parameters with other structures in the training set
The cross-validation itself is run within the function called `run_cross_validation` in `cross_validation.py`, which is executed here. Its inputs include `head_directory_name` and `params`, which are defined above (the call below also passes `fits`, `structures`, `structures_in_fit`, and `head_output_directory`). There is also an optional input, `supercell`, which creates a supercell of the structures before running the fit. This can either be a list with the multipliers for x, y, and z, i.e. [2,2,2], or a list of lists if you want to use different system sizes (not recommended) or have different starting sizes, i.e. [[2,2,2],[1,1,2]]. **Note: you want the cell to be the same size as that used in the fit for direct comparison.**
Output data is sent to the `head_directory_name` location, in a sub-directory named `cross_validation`. Each cross-validation set is saved in a sub-directory named with the structure numbers used, prefixed with `p` denoting that the potential was fitted to those structures. Inside the directory, the files are prefixed with `s` denoting the structures the potential was validated with, followed by the structure numbers in the validation set and a suffix stating what the file contains, i.e. `dft_forces`.
```
cv.run_cross_validation(fits, structures, structures_in_fit, head_directory_name, head_output_directory, params, supercell=[2,2,2], seed=False)
```
## Plotting the cross-validation $\chi^{2}$ errors
Firstly, for each cross-validation set, the errors are read in from the sub directories within the `head_output_directory` and stored in a dictionary, converting the error to a float and using the sub directory names as the structure numbers (x-axis labels) by removing the leading head directory path and error file extension. This won't work for a directory tree, only for a depth of 1.
The cross-validation errors are then plotted and saved in the output directory, i.e. the `cross_validation` directory. There are options to change the title and the degree of rotation of the x-axis labels. You can also choose whether to save the plot or not. Further editing and formatting can be done by changing the `plot_cross_validation` function in `plotting.py`.
```
for cv_directory in sorted(glob.glob('{}/*'.format(head_output_directory))):
if '.png' in cv_directory:
continue
error_dict = cv.setup_error_dict(cv_directory)
cv.plot_cross_validation(error_dict, cv_directory, head_output_directory, xlabel_rotation=50, title='default', save=True)
```
```
import pandas as pd
import utils
import seaborn as sns
import matplotlib.pyplot as plt
import random
import plotly.express as px
random.seed(9000)
plt.style.use("seaborn-ticks")
plt.rcParams["image.cmap"] = "Set1"
plt.rcParams['axes.prop_cycle'] = plt.cycler(color=plt.cm.Set1.colors)
%matplotlib inline
```
In this notebook we calculate `Percent Replicating` to measure the proportion of perturbations with a detectable signature. The steps are as follows:
1. Normalized, feature selected ORF, CRISPR and Compound profiles are read and the replicate plates are merged into a single dataframe, for each time point and cell line.
2. Negative control and empty wells are removed from the dataframe.
3. The signal distribution, which is the median pairwise replicate correlation, is computed for each replicate.
4. The null distribution, which is the median pairwise correlation of non-replicates, is computed for 1000 combinations of non-replicates.
5. Percent Replicating is computed as the percentage of the signal distribution that is greater than the 95th percentile of the null distribution (a minimal sketch of this step is given after this list).
6. The signal and noise distributions and the Percent Replicating values are plotted and the table of Percent Replicating is printed.
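Step 5 is performed by the project's own `utils.percent_score` helper; the snippet below is only a minimal sketch of that computation under the definition above, and the real `utils` implementation may differ (e.g. in NaN handling or in supporting left- and two-tailed scores).
```
# Minimal sketch of step 5 (assumed semantics; not the exact utils.percent_score code).
import numpy as np

def percent_score_sketch(null_dist, signal_dist, how='right'):
    """Return (percent of the signal distribution beyond the null's 95th percentile, that threshold)."""
    null_dist = np.asarray(null_dist, dtype=float)
    signal_dist = np.asarray(signal_dist, dtype=float)
    threshold = np.nanpercentile(null_dist, 95)
    if how == 'right':
        percent = 100.0 * np.nanmean(signal_dist > threshold)
        return percent, threshold
    raise NotImplementedError("only the right-tailed score is sketched here")
```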
```
n_samples = 1000
n_replicates = 4
corr_replicating_df = pd.DataFrame()
group_by_feature = 'Metadata_broad_sample'
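# Nested mapping: modality -> cell line -> time point (hours) -> {plate barcode: human-readable description}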
experiment = {
'ORF':{'A549':{'48':{'BR00117020':'A549 48-hour ORF Plate 1',
'BR00117021':'A549 48-hour ORF Plate 2'
},
'96':{'BR00118050':'A549 96-hour ORF Plate 1',
'BR00117006':'A549 96-hour ORF Plate 2'
}
},
'U2OS':{'48':{'BR00117022':'U2OS 48-hour ORF Plate 1',
'BR00117023':'U2OS 48-hour ORF Plate 2'
},
'96':{'BR00118039':'U2OS 96-hour ORF Plate 1',
'BR00118040':'U2OS 96-hour ORF Plate 2'
}
}
},
'CRISPR':{'A549':{'96':{'BR00118041':'A549 96-hour CRISPR Plate 1',
'BR00118042':'A549 96-hour CRISPR Plate 2',
'BR00118043':'A549 96-hour CRISPR Plate 3',
'BR00118044':'A549 96-hour CRISPR Plate 4'
},
'144':{'BR00117003':'A549 144-hour CRISPR Plate 1',
'BR00117004':'A549 144-hour CRISPR Plate 2',
'BR00117005':'A549 144-hour CRISPR Plate 3',
'BR00117000':'A549 144-hour CRISPR Plate 4'
}
},
'U2OS':{'96':{'BR00118045':'U2OS 96-hour CRISPR Plate 1',
'BR00118046':'U2OS 96-hour CRISPR Plate 2',
'BR00118047':'U2OS 96-hour CRISPR Plate 3',
'BR00118048':'U2OS 96-hour CRISPR Plate 4'
},
'144':{'BR00116997':'U2OS 144-hour CRISPR Plate 1',
'BR00116998':'U2OS 144-hour CRISPR Plate 2',
'BR00116999':'U2OS 144-hour CRISPR Plate 3',
'BR00116996':'U2OS 144-hour CRISPR Plate 4'
}
}
},
'Compound':{'A549':{'24':{'BR00116991':'A549 24-hour Compound Plate 1',
'BR00116992':'A549 24-hour Compound Plate 2',
'BR00116993':'A549 24-hour Compound Plate 3',
'BR00116994':'A549 24-hour Compound Plate 4'
},
'48':{'BR00117017':'A549 48-hour Compound Plate 1',
'BR00117019':'A549 48-hour Compound Plate 2',
'BR00117015':'A549 48-hour Compound Plate 3',
'BR00117016':'A549 48-hour Compound Plate 4'
}
},
'U2OS':{'24':{'BR00116995':'U2OS 24-hour Compound Plate 1',
'BR00117024':'U2OS 24-hour Compound Plate 2',
'BR00117025':'U2OS 24-hour Compound Plate 3',
'BR00117026':'U2OS 24-hour Compound Plate 4'
},
'48':{'BR00117012':'U2OS 48-hour Compound Plate 1',
'BR00117013':'U2OS 48-hour Compound Plate 2',
'BR00117010':'U2OS 48-hour Compound Plate 3',
'BR00117011':'U2OS 48-hour Compound Plate 4'
}
}
}
}
experiment_name = "2020_11_04_CPJUMP1"
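# For each (modality, cell line, time point): load and merge the replicate plates, drop negative-control and
# empty wells, build the signal (replicate) and null (non-replicate) correlation distributions, and compute
# Percent Replicating against the null's 95th percentile.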
for modality in experiment:
for cell in experiment[modality]:
for time in experiment[modality][cell]:
experiment_df = pd.DataFrame()
for plate in experiment[modality][cell][time]:
data_df = (
utils.load_data(experiment_name, plate, "normalized_feature_select_negcon_plate.csv.gz")
.assign(Metadata_cell_id=cell)
.assign(Metadata_modality=modality)
.assign(Metadata_time_point=time)
.assign(Metadata_experiment=f'{modality}_{cell}_{time}')
)
experiment_df = utils.concat_profiles(experiment_df, data_df)
experiment_df = utils.remove_negcon_empty_wells(experiment_df)
replicating_corr = list(utils.corr_between_replicates(experiment_df, group_by_feature))
null_replicating = list(utils.corr_between_non_replicates(experiment_df, n_samples=n_samples, n_replicates=n_replicates, metadata_compound_name = group_by_feature))
prop_95_replicating, value_95_replicating = utils.percent_score(null_replicating,
replicating_corr,
how='right')
corr_replicating_df = corr_replicating_df.append({'Description':f'{modality}_{cell}_{time}',
'Modality':f'{modality}',
'Cell':f'{cell}',
'time':f'{time}',
'Replicating':replicating_corr,
'Null_Replicating':null_replicating,
'Percent_Replicating':'%.1f'%prop_95_replicating,
'Value_95':value_95_replicating}, ignore_index=True)
print(corr_replicating_df[['Description', 'Percent_Replicating']].to_markdown(index=False))
n_experiments = len(corr_replicating_df)
plt.rcParams['figure.facecolor'] = 'white' # Enabling this makes the figure axes and labels visible in PyCharm Dracula theme
plt.figure(figsize=[12, n_experiments*6])
for i in range(n_experiments):
plt.subplot(n_experiments, 1, i+1)
plt.hist(corr_replicating_df.loc[i,'Null_Replicating'], label='non-replicates', density=True, bins=20, alpha=0.5)
plt.hist(corr_replicating_df.loc[i,'Replicating'], label='replicates', density=True, bins=20, alpha=0.5)
plt.axvline(corr_replicating_df.loc[i,'Value_95'], label='95% threshold')
plt.legend(fontsize=20)
plt.title(
f"{corr_replicating_df.loc[i,'Description']}\n" +
f"Percent Replicating = {corr_replicating_df.loc[i,'Percent_Replicating']}",
fontsize=25
)
plt.ylabel("density", fontsize=25)
plt.xlabel("Replicate correlation", fontsize=25)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
sns.despine()
plt.tight_layout()
plt.savefig('figures/0.percent_replicating.png')
corr_replicating_df['Percent_Replicating'] = corr_replicating_df['Percent_Replicating'].astype(float)
corr_replicating_df.loc[(corr_replicating_df.Modality=='Compound') & (corr_replicating_df.time=='24'), 'time'] = 'short'
corr_replicating_df.loc[(corr_replicating_df.Modality=='Compound') & (corr_replicating_df.time=='48'), 'time'] = 'long'
corr_replicating_df.loc[(corr_replicating_df.Modality=='CRISPR') & (corr_replicating_df.time=='96'), 'time'] = 'short'
corr_replicating_df.loc[(corr_replicating_df.Modality=='CRISPR') & (corr_replicating_df.time=='144'), 'time'] = 'long'
corr_replicating_df.loc[(corr_replicating_df.Modality=='ORF') & (corr_replicating_df.time=='48'), 'time'] = 'short'
corr_replicating_df.loc[(corr_replicating_df.Modality=='ORF') & (corr_replicating_df.time=='96'), 'time'] = 'long'
plot_corr_replicating_df = (
corr_replicating_df.rename(columns={'Modality':'Perturbation'})
.drop(columns=['Null_Replicating','Value_95','Replicating'])
)
fig = px.bar(data_frame=plot_corr_replicating_df,
x='Perturbation',
y='Percent_Replicating',
facet_row='time',
facet_col='Cell')
fig.update_layout(title='Percent Replicating vs. Perturbation',
yaxis=dict(title='Percent Replicating'),
yaxis3=dict(title='Percent Replicating'))
fig.show("png")
fig.write_image(f'figures/0.percent_replicating_facet.png', width=640, height=480, scale=2)
print(plot_corr_replicating_df[['Description','Perturbation','time', 'Cell' ,'Percent_Replicating']].to_markdown(index=False))
```
International Morse Code defines a standard encoding where each letter is mapped to a series of dots and dashes, as follows: "a" maps to ".-", "b" maps to "-...", "c" maps to "-.-.", and so on.
For convenience, the full table for the 26 letters of the English alphabet is given below:
```javascript
[".-","-...","-.-.","-..",".","..-.","--.","....","..",".---","-.-",".-..","--","-.","---",".--.","--.-",".-.","...","-","..-","...-",".--","-..-","-.--","--.."]
```
Now, given a list of words, each word can be written as a concatenation of the Morse code of each letter. For example, "cab" can be written as "-.-.-....-" (which is the concatenation "-.-." + "-..." + ".-"). We'll call such a concatenation the transformation of a word.
Return the number of different transformations among all words we have.
Example:
```javascript
Input: words = ["gin", "zen", "gig", "msg"]
```
Output: 2
Explanation:
The transformation of each word is:
```javascript
"gin" -> "--...-."
"zen" -> "--...-."
"gig" -> "--...--."
"msg" -> "--...--."
```
There are 2 different transformations, "--...-." and "--...--.".
Note:
The length of words will be at most 100.
Each words[i] will have length in range [1, 12].
words[i] will only consist of lowercase letters.
```
class Solution(object):
def uniqueMorseRepresentations(self, words):
"""
:type words: List[str]
:rtype: int
"""
morse_codes = [".-","-...","-.-.","-..",".","..-.","--.","....","..",".---","-.-",".-..","--","-.","---",".--.","--.-",".-.","...","-","..-","...-",".--","-..-","-.--","--.."]
wds_morse_codes = []
for word in words:
wd_morse_code = ""
for c in word.strip():
wd_morse_code = wd_morse_code + morse_codes[ord(c) - ord('a')]
#print(wd_morse_code)
#print("result code: " + wd_morse_code)
if wd_morse_code not in wds_morse_codes:
wds_morse_codes.append(wd_morse_code)
return len(wds_morse_codes)
if __name__ == "__main__":
words = ["gin", "zen", "gig", "msg"]
print(Solution().uniqueMorseRepresentations(words))
# expected 2
# Other solutions
class Solution(object):
def uniqueMorseRepresentations(self, words):
MORSE = [".-","-...","-.-.","-..",".","..-.","--.",
"....","..",".---","-.-",".-..","--","-.",
"---",".--.","--.-",".-.","...","-","..-",
"...-",".--","-..-","-.--","--.."]
seen = {"".join(MORSE[ord(c) - ord('a')] for c in word)
for word in words}
return len(seen)
# Other solutions
class Solution(object):
def uniqueMorseRepresentations(self, words):
"""
:type words: List[str]
:rtype: int
"""
moorse = [".-","-...","-.-.","-..",".","..-.","--.","....","..",".---","-.-",".-..","--","-.","---",".--.","--.-",".-.","...","-","..-","...-",".--","-..-","-.--","--.."]
trans = lambda x: moorse[ord(x) - ord('a')]
map_word = lambda word: ''.join([trans(x) for x in word])
res = map(map_word, words)
return len(set(res))
```
# Support Vector Regression with RobustScaler
This code template is for regression analysis using a Support Vector Regressor (SVR), based on the Support Vector Machine algorithm, combined with the feature-rescaling technique RobustScaler in a pipeline.
### Required Packages
```
import warnings
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from sklearn.svm import SVR
from sklearn.preprocessing import RobustScaler
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
file_path= ""
```
List of features which are required for model training.
```
features = []
```
Target feature for prediction.
```
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the head function to display the initial rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X=df[features]
Y=df[target]
```
### Data Preprocessing
Since the majority of the machine learning models in the sklearn library don't handle string categorical data or null values, we have to explicitly remove or replace them. The snippet below contains functions which replace null values (with the mean for numeric columns and the mode otherwise) and convert string-class columns by one-hot encoding them.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
```
Calling preprocessing functions on the feature and target set.
```
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
### Model
Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outliers detection.
A Support Vector Machine is a discriminative classifier formally defined by a separating hyperplane. In other terms, for given known/labelled data points, the SVM outputs an appropriate hyperplane that classifies new cases based on that hyperplane. In 2-dimensional space, this hyperplane is a line separating a plane into two segments, with each class or group occupying one side.
Here we will use SVR; its implementation is based on libsvm. The fit time scales at least quadratically with the number of samples and may be impractical beyond tens of thousands of samples.
#### Model Tuning Parameters
1. C : float, default=1.0
> Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. The penalty is a squared l2 penalty.
2. kernel : {‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘precomputed’}, default=’rbf’
> Specifies the kernel type to be used in the algorithm. It must be one of ‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘precomputed’ or a callable. If none is given, ‘rbf’ will be used. If a callable is given it is used to pre-compute the kernel matrix from data matrices; that matrix should be an array of shape (n_samples, n_samples).
3. gamma : {‘scale’, ‘auto’} or float, default=’scale’
> Gamma is a hyperparameter that we have to set before training the model. It decides how much curvature we want in the decision boundary.
4. degree : int, default=3
> Degree of the polynomial kernel function (‘poly’). Ignored by all other kernels. Using degree 1 is similar to using a linear kernel. Also, increasing the degree parameter leads to higher training times. A hedged usage sketch with these parameters is shown below.
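The template later in this notebook uses `SVR()` with its default settings. If you want to set the parameters above explicitly, a minimal sketch looks like this; the values shown are illustrative placeholders, not tuned for any particular dataset:
```
# Illustrative only: hyperparameter values are placeholders, not tuned settings.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.svm import SVR

tuned_model = make_pipeline(
    RobustScaler(),
    SVR(kernel='rbf',   # 'linear', 'poly', 'rbf', 'sigmoid' or 'precomputed'
        C=1.0,          # regularization strength is inversely proportional to C
        gamma='scale',  # kernel coefficient: 'scale', 'auto' or a float
        degree=3)       # only used by the 'poly' kernel
)
# tuned_model.fit(x_train, y_train)
```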
#### Data Scaling
RobustScaler scales features using statistics that are robust to outliers.
This Scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range). The IQR is the range between the 1st quartile (25th quantile) and the 3rd quartile (75th quantile).
Centering and scaling happen independently on each feature by computing the relevant statistics on the samples in the training set. Median and interquartile range are then stored to be used on later data using the transform method.
##### For more information on RobustScaler [ click here](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.RobustScaler.html)
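As a small standalone illustration (not part of the template) of why this scaling is robust, note how a single extreme value barely affects how the remaining points are scaled:
```
# Tiny demo: RobustScaler subtracts the median and divides by the IQR (75th - 25th percentile).
import numpy as np
from sklearn.preprocessing import RobustScaler

demo = np.array([[1.0], [2.0], [3.0], [4.0], [100.0]])  # 100.0 is an outlier
print(RobustScaler().fit_transform(demo).ravel())
# median = 3.0, IQR = 4.0 - 2.0 = 2.0, so 1.0 -> -1.0, 2.0 -> -0.5, ..., 100.0 -> 48.5
```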
```
model=make_pipeline(RobustScaler(),SVR())
model.fit(x_train,y_train)
```
#### Model Accuracy
We will use the trained model to make a prediction on the test set.Then use the predicted value for measuring the accuracy of our model.
> **score**: The **score** function returns the coefficient of determination <code>R<sup>2</sup></code> of the prediction.
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
> **r2_score**: The **r2_score** function computes the proportion of the variability in the target that is explained by our model.
> **mae**: The **mean absolute error** function calculates the average absolute distance between the real data and the predicted data.
> **mse**: The **mean squared error** function averages the squared errors, penalizing the model more heavily for large errors. The standard definitions of both metrics are given below.
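For reference, with $y_i$ the observed values, $\hat{y}_i$ the predictions and $n$ the number of test samples, the standard definitions are:

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n} \lvert y_i - \hat{y}_i \rvert, \qquad \mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n} (y_i - \hat{y}_i)^2$$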
```
y_pred=model.predict(x_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
```
#### Prediction Plot
First, we plot the actual target values for the first 20 test records, with the record number on the x-axis and the observed value on the y-axis.
We then overlay the model's predictions for the same 20 test records so the two curves can be compared directly.
```
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(x_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```
#### Creator: Saharsh Laud, GitHub: [Profile](https://github.com/SaharshLaud)
# Root cause analysis (RCA) of latencies in a microservice architecture
In this case study, we identify the root causes of "unexpected" observed latencies in cloud services that empower an
online shop. We focus on the process of placing an order, which involves different services to make sure that
the placed order is valid, the customer is authenticated, the shipping costs are calculated correctly, and the shipping
process is initiated accordingly. The dependencies of the services are shown in the graph below.
```
from IPython.display import Image
Image('microservice-architecture-dependencies.png', width=500)
```
This kind of dependency graph could be obtained from services like [Amazon X-Ray](https://aws.amazon.com/xray/) or
defined manually based on the trace structure of requests.
We assume that the dependency graph above is correct and that we are able to measure the latency (in seconds) of each node for an order request. In case of `Website`, the latency would represent the time until a confirmation of the order is shown. For simplicity, let us assume that the services are synchronized, i.e., a service has to wait for downstream services in order to proceed. Further, we assume that two nodes are not impacted by unobserved factors (hidden confounders) at the same time (i.e., causal sufficiency). Seeing that, for instance, network traffic affects multiple services, this assumption might be typically violated in a real-world scenario. However, weak confounders can be neglected, while stronger ones (like network traffic) could falsely render multiple nodes as root causes. Generally, we can only identify causes that are part of the data.
Under these assumptions, the observed latency of a node is defined by the latency of the node itself (intrinsic latency), and the sum over all latencies of direct child nodes. This could also include calling a child node multiple times.
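As a hedged formalization of this assumption (for the fully synchronous, one-call-per-child case; the data-generating code in the appendix uses a maximum over children where services are called in parallel), the observed latency of a node $X$ with direct children $\mathrm{ch}(X)$ would be

$$L_{\mathrm{obs}}(X) = L_{\mathrm{intr}}(X) + \sum_{c \,\in\, \mathrm{ch}(X)} L_{\mathrm{obs}}(c),$$

with a child counted once per call if it is called multiple times.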
Let us load data with observed latencies of each node.
```
import pandas as pd
normal_data = pd.read_csv("rca_microservice_architecture_latencies.csv")
normal_data.head()
```
Let us also take a look at the pair-wise scatter plots and histograms of the variables.
```
axes = pd.plotting.scatter_matrix(normal_data, figsize=(10, 10), c='#ff0d57', alpha=0.2, hist_kwds={'color':['#1E88E5']});
for ax in axes.flatten():
ax.xaxis.label.set_rotation(90)
ax.yaxis.label.set_rotation(0)
ax.yaxis.label.set_ha('right')
```
In the matrix above, the plots on the diagonal line are histograms of variables, whereas those outside of the diagonal are scatter plots of pair of variables. The histograms of services without a dependency, namely `Customer DB`, `Product DB`, `Order DB` and `Shipping Cost Service`, have shapes similar to one half of a Gaussian distribution. The scatter plots of various pairs of variables (e.g., `API` and `www`, `www` and `Website`, `Order Service` and `Order DB`) show linear relations. We shall use this information shortly to assign generative causal models to nodes in the causal graph.
## Setting up the causal graph
If we look at the `Website` node, it becomes apparent that the latency we experience there depends on the latencies of
all downstream nodes. In particular, if one of the downstream nodes takes a long time, `Website` will also take a
long time to show an update. Seeing this, the causal graph of the latencies can be built by inverting the arrows of the
service graph.
```
import networkx as nx
from dowhy import gcm
causal_graph = nx.DiGraph([('www', 'Website'),
('Auth Service', 'www'),
('API', 'www'),
('Customer DB', 'Auth Service'),
('Customer DB', 'API'),
('Product Service', 'API'),
('Auth Service', 'API'),
('Order Service', 'API'),
('Shipping Cost Service', 'Product Service'),
('Caching Service', 'Product Service'),
('Product DB', 'Caching Service'),
('Customer DB', 'Product Service'),
('Order DB', 'Order Service')])
```
<div class="alert alert-block alert-info">
Here, we are interested in the causal relationships between latencies of services rather than the order of calling the services.
</div>
We will use the information from the pair-wise scatter plots and histograms to manually assign causal models. In particular, we assign half-Normal distributions to the root nodes (i.e., `Customer DB`, `Product DB`, `Order DB` and `Shipping Cost Service`). For non-root nodes, we assign linear additive noise models (which scatter plots of many parent-child pairs indicate) with empirical distribution of noise terms.
```
from scipy.stats import halfnorm
causal_model = gcm.StructuralCausalModel(causal_graph)
for node in causal_graph.nodes:
if len(list(causal_graph.predecessors(node))) > 0:
causal_model.set_causal_mechanism(node, gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))
else:
causal_model.set_causal_mechanism(node, gcm.ScipyDistribution(halfnorm))
```
## Scenario 1: Observing permanent degradation of latencies
We consider a scenario where we observe a permanent degradation of latencies and we want to understand its drivers. In particular, we attribute the change in the average latency of `Website` to upstream nodes.
Suppose we get additional 1000 requests with higher latencies as follows.
```
outlier_data = pd.read_csv("rca_microservice_architecture_anomaly_1000.csv")
outlier_data.head()
```
We are interested in the increased latency of `Website` on average for 1000 requests which the customers directly experienced.
```
outlier_data['Website'].mean() - normal_data['Website'].mean()
```
The _Website_ is slower on average (by almost 2 seconds) than usual. Why?
### Attributing permanent degradation of latencies at a target service to other services
To answer why `Website` is slower for those 1000 requests compared to before, we attribute the change in the average latency of `Website` to services upstream in the causal graph. We refer the reader to [Budhathoki et al., 2021](https://assets.amazon.science/b6/c0/604565d24d049a1b83355921cc6c/why-did-the-distribution-change.pdf) for the scientific details behind this API. We then visualize the attributions in a bar plot.
```
import matplotlib.pyplot as plt
import numpy as np
attribs = gcm.distribution_change(causal_model,
normal_data.sample(frac=0.6),
outlier_data.sample(frac=0.6),
'Website',
difference_estimation_func=lambda x, y: np.mean(y) - np.mean(x))
```
Let's plot these attributions.
```
def bar_plot(median_attribs, ylabel='Attribution Score', figsize=(8, 3), bwidth=0.8, xticks=None, xticks_rotation=90):
fig, ax = plt.subplots(figsize=figsize)
plt.bar(median_attribs.keys(), median_attribs.values(), ecolor='#1E88E5', color='#ff0d57', width=bwidth)
plt.xticks(rotation=xticks_rotation)
plt.ylabel(ylabel)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
if xticks:
plt.xticks(list(median_attribs.keys()), xticks)
plt.show()
bar_plot(attribs)
```
We observe that `Caching Service` is the root cause that slowed down `Website`. In particular, the method we used tells us that the change in the causal mechanism (i.e., the input-output behaviour) of `Caching Service` (e.g., Caching algorithm) slowed down `Website`. This is also expected as the outlier latencies were generated by changing the causal mechanism of `Caching Service` (see Appendix below).
## Scenario 2: Simulating the intervention of shifting resources
Next, let us imagine a scenario where the permanent degradation from scenario 1 has happened and we've successfully identified `Caching Service` as the root cause. Furthermore, we figured out that a recent deployment of the `Caching Service` contained a bug that is causing the overloaded hosts. A proper fix must be deployed, or the previous deployment must be rolled back. But, in the meantime, could we mitigate the situation by shifting some resources from `Shipping Cost Service` to `Caching Service`? And would that help? Before doing it in reality, let us simulate it first and see whether it improves the situation.
```
Image('shifting-resources.png', width=600)
```
Let’s perform an intervention in which we reduce the average time of `Caching Service` by 1s, but buy this speed-up with an average slow-down of 2s in `Shipping Cost Service`.
```
gcm.fit(causal_model, outlier_data)
mean_latencies = gcm.interventional_samples(causal_model,
interventions = {
"Caching Service": lambda x: x-1,
"Shipping Cost Service": lambda x: x+2
},
observed_data=outlier_data).mean()
```
Has the situation improved? Let's visualize the results.
```
bar_plot(dict(before=outlier_data.mean().to_dict()['Website'], after=mean_latencies['Website']),
ylabel='Avg. Website Latency',
figsize=(3, 2),
bwidth=0.4,
xticks=['Before', 'After'],
xticks_rotation=45)
```
Indeed, we do get an improvement by about 1s. We’re not back at normal operation, but we’ve mitigated part of the problem. From here, maybe we can wait until a proper fix is deployed.
## Appendix: Data generation process
The scenarios above work on synthetic data. The normal data was generated using the following functions:
```
from scipy.stats import truncexpon, halfnorm
def create_observed_latency_data(unobserved_intrinsic_latencies):
observed_latencies = {}
observed_latencies['Product DB'] = unobserved_intrinsic_latencies['Product DB']
observed_latencies['Customer DB'] = unobserved_intrinsic_latencies['Customer DB']
observed_latencies['Order DB'] = unobserved_intrinsic_latencies['Order DB']
observed_latencies['Shipping Cost Service'] = unobserved_intrinsic_latencies['Shipping Cost Service']
observed_latencies['Caching Service'] = np.random.choice([0, 1], size=(len(observed_latencies['Product DB']),),
p=[.5, .5]) * \
observed_latencies['Product DB'] \
+ unobserved_intrinsic_latencies['Caching Service']
observed_latencies['Product Service'] = np.maximum(np.maximum(observed_latencies['Shipping Cost Service'],
observed_latencies['Caching Service']),
observed_latencies['Customer DB']) \
+ unobserved_intrinsic_latencies['Product Service']
observed_latencies['Auth Service'] = observed_latencies['Customer DB'] \
+ unobserved_intrinsic_latencies['Auth Service']
observed_latencies['Order Service'] = observed_latencies['Order DB'] \
+ unobserved_intrinsic_latencies['Order Service']
observed_latencies['API'] = observed_latencies['Product Service'] \
+ observed_latencies['Customer DB'] \
+ observed_latencies['Auth Service'] \
+ observed_latencies['Order Service'] \
+ unobserved_intrinsic_latencies['API']
observed_latencies['www'] = observed_latencies['API'] \
+ observed_latencies['Auth Service'] \
+ unobserved_intrinsic_latencies['www']
observed_latencies['Website'] = observed_latencies['www'] \
+ unobserved_intrinsic_latencies['Website']
return pd.DataFrame(observed_latencies)
def unobserved_intrinsic_latencies_normal(num_samples):
return {
'Website': truncexpon.rvs(size=num_samples, b=3, scale=0.2),
'www': truncexpon.rvs(size=num_samples, b=2, scale=0.2),
'API': halfnorm.rvs(size=num_samples, loc=0.5, scale=0.2),
'Auth Service': halfnorm.rvs(size=num_samples, loc=0.1, scale=0.2),
'Product Service': halfnorm.rvs(size=num_samples, loc=0.1, scale=0.2),
'Order Service': halfnorm.rvs(size=num_samples, loc=0.5, scale=0.2),
'Shipping Cost Service': halfnorm.rvs(size=num_samples, loc=0.1, scale=0.2),
'Caching Service': halfnorm.rvs(size=num_samples, loc=0.1, scale=0.1),
'Order DB': truncexpon.rvs(size=num_samples, b=5, scale=0.2),
'Customer DB': truncexpon.rvs(size=num_samples, b=6, scale=0.2),
'Product DB': truncexpon.rvs(size=num_samples, b=10, scale=0.2)
}
normal_data = create_observed_latency_data(unobserved_intrinsic_latencies_normal(10000))
```
This simulates the latency relationships under the assumption of having synchronized services and that there are no
hidden aspects that impact two nodes at the same time. Furthermore, we assume that the Caching Service has to call through to the Product DB only in 50% of the cases (i.e., we have a 50% cache miss rate). Also, we assume that the Product Service can make calls in parallel to its downstream services Shipping Cost Service, Caching Service, and Customer DB and join the threads when all three services have returned.
<div class="alert alert-block alert-info">
We use <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.truncexpon.html">truncated exponential</a> and
<a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.halfnorm.html">half-normal</a> distributions,
since their shapes are similar to distributions observed in real services.
</div>
The anomalous data is generated in the following way:
```
def unobserved_intrinsic_latencies_anomalous(num_samples):
return {
'Website': truncexpon.rvs(size=num_samples, b=3, scale=0.2),
'www': truncexpon.rvs(size=num_samples, b=2, scale=0.2),
'API': halfnorm.rvs(size=num_samples, loc=0.5, scale=0.2),
'Auth Service': halfnorm.rvs(size=num_samples, loc=0.1, scale=0.2),
'Product Service': halfnorm.rvs(size=num_samples, loc=0.1, scale=0.2),
'Order Service': halfnorm.rvs(size=num_samples, loc=0.5, scale=0.2),
'Shipping Cost Service': halfnorm.rvs(size=num_samples, loc=0.1, scale=0.2),
'Caching Service': 2 + halfnorm.rvs(size=num_samples, loc=0.1, scale=0.1),
'Order DB': truncexpon.rvs(size=num_samples, b=5, scale=0.2),
'Customer DB': truncexpon.rvs(size=num_samples, b=6, scale=0.2),
'Product DB': truncexpon.rvs(size=num_samples, b=10, scale=0.2)
}
anomalous_data = create_observed_latency_data(unobserved_intrinsic_latencies_anomalous(1000))
```
Here, we significantly increased the average time of the *Caching Service* by two seconds, which coincides with our
results from the RCA. Note that a high latency in *Caching Service* would lead to a constantly higher latency in upstream
services. In particular, customers experience a higher latency than usual.
## Identifiability Test of Linear VAE on Synthetic Dataset
```
%load_ext autoreload
%autoreload 2
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, random_split
import ltcl
import numpy as np
from ltcl.datasets.sim_dataset import SimulationDatasetTSTwoSample
from ltcl.modules.srnn import SRNNSynthetic
from ltcl.tools.utils import load_yaml
import random
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
use_cuda = True
device = torch.device("cuda:0" if use_cuda else "cpu")
latent_size = 8
data = SimulationDatasetTSTwoSample(directory = '/srv/data/ltcl/data/',
transition='linear_nongaussian_ts')
num_validation_samples = 2500
train_data, val_data = random_split(data, [len(data)-num_validation_samples, num_validation_samples])
train_loader = DataLoader(train_data, batch_size=12800, shuffle=True, pin_memory=True)
val_loader = DataLoader(val_data, batch_size=16, shuffle=False, pin_memory=True)
cfg = load_yaml('../ltcl/configs/toy_linear_ts.yaml')
model = SRNNSynthetic.load_from_checkpoint(checkpoint_path="/srv/data/ltcl/log/weiran/toy_linear_ts/lightning_logs/version_1/checkpoints/epoch=299-step=228599.ckpt",
input_dim=cfg['VAE']['INPUT_DIM'],
length=cfg['VAE']['LENGTH'],
z_dim=cfg['VAE']['LATENT_DIM'],
lag=cfg['VAE']['LAG'],
hidden_dim=cfg['VAE']['ENC']['HIDDEN_DIM'],
trans_prior=cfg['VAE']['TRANS_PRIOR'],
bound=cfg['SPLINE']['BOUND'],
count_bins=cfg['SPLINE']['BINS'],
order=cfg['SPLINE']['ORDER'],
beta=cfg['VAE']['BETA'],
gamma=cfg['VAE']['GAMMA'],
sigma=cfg['VAE']['SIGMA'],
lr=cfg['VAE']['LR'],
bias=cfg['VAE']['BIAS'],
use_warm_start=cfg['SPLINE']['USE_WARM_START'],
spline_pth=cfg['SPLINE']['PATH'],
decoder_dist=cfg['VAE']['DEC']['DIST'],
correlation=cfg['MCC']['CORR'])
```
### Load model checkpoint
```
model.eval()
model.to('cpu')
```
### Compute permutation and sign flip
```
for batch in train_loader:
break
batch_size = batch['s1']['xt'].shape[0]
zs, mu, logvar = model.forward(batch['s1'])
mu = mu.view(batch_size, -1, latent_size)
A = mu[:,0,:].detach().cpu().numpy()
B = batch['s1']['yt'][:,0,:].detach().cpu().numpy()
C = np.zeros((latent_size,latent_size))
for i in range(latent_size):
C[i] = -np.abs(np.corrcoef(B, A, rowvar=False)[i,latent_size:])
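# Match estimated latents to true latents: linear_sum_assignment (the Hungarian
# algorithm) on the negative absolute correlation matrix C finds the permutation
# that maximizes the total absolute correlation.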
from scipy.optimize import linear_sum_assignment
row_ind, col_ind = linear_sum_assignment(C)
A = A[:, col_ind]
mask = np.ones(latent_size)
for i in range(latent_size):
if np.corrcoef(B, A, rowvar=False)[i,latent_size:][i] > 0:
mask[i] = -1
print("Permutation:",col_ind)
print("Sign Flip:", mask)
fig = plt.figure(figsize=(4,4))
sns.heatmap(-C, vmin=0, vmax=1, annot=True, fmt=".2f", linewidths=.5, cbar=False, cmap='Greens')
plt.xlabel("Estimated latents ")
plt.ylabel("True latents ")
plt.title("MCC=%.3f"%np.abs(C[row_ind, col_ind]).mean());
figure_path = '/home/weiran/figs/'
from matplotlib.backends.backend_pdf import PdfPages
with PdfPages(figure_path + '/mcc_var.pdf') as pdf:
fig = plt.figure(figsize=(4,4))
sns.heatmap(-C, vmin=0, vmax=1, annot=True, fmt=".2f", linewidths=.5, cbar=False, cmap='Greens')
plt.xlabel("Estimated latents ")
plt.ylabel("True latents ")
plt.title("MCC=%.3f"%np.abs(C[row_ind, col_ind]).mean());
pdf.savefig(fig, bbox_inches="tight")
# Permute column here
mu = mu[:,:,col_ind]
# Flip sign here
mu = mu * torch.Tensor(mask, device=mu.device).view(1,1,latent_size)
mu = -mu
fig = plt.figure(figsize=(8,2))
col = 0
plt.plot(batch['yt_'].squeeze()[:250,col].detach().cpu().numpy(), color='b', label='True', alpha=0.75)
plt.plot(mu[:250,-1,col].detach().cpu().numpy(), color='r', label="Estimated", alpha=0.75)
plt.legend()
plt.title("Current latent variable $z_t$")
fig = plt.figure(figsize=(8,2))
col = 3
l = 1
plt.plot(batch['yt'].squeeze()[:250,l,col].detach().cpu().numpy(), color='b', label='True')
plt.plot(mu[:,:-1,:][:250,l,col].detach().cpu().numpy(), color='r', label="Estimated")
plt.xlabel("Sample index")
plt.ylabel("Latent variable value")
plt.legend()
plt.title("Past latent variable $z_l$")
fig = plt.figure(figsize=(2,2))
eps = model.sample(batch["xt"].cpu())
eps = eps.detach().cpu().numpy()
component_idx = 4
sns.distplot(eps[:,component_idx], hist=False, kde=True,
bins=None,
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2});
plt.title("Learned noise prior")
```
### System identification (causal discovery)
```
from ltcl.modules.components.base import GroupLinearLayer
trans_func = GroupLinearLayer(din = 8,
dout = 8,
num_blocks = 2,
diagonal = False)
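# System identification: regress the current latent z_t on the permutation- and
# sign-corrected past latents with a grouped linear map plus bias, using an L1
# loss, to recover the transition matrices of the linear dynamics.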
b = torch.nn.Parameter(0.001 * torch.randn(1, 8))
opt = torch.optim.Adam(trans_func.parameters(),lr=0.01)
lossfunc = torch.nn.L1Loss()
max_iters = 2
counter = 0
for step in range(max_iters):
for batch in train_loader:
batch_size = batch['yt'].shape[0]
x_recon, mu, logvar, z = model.forward(batch)
mu = mu.view(batch_size, -1, 8)
# Fix permutation before training
mu = mu[:,:,col_ind]
# Fix sign flip before training
mu = mu * torch.Tensor(mask, device=mu.device).view(1,1,8)
mu = -mu
pred = trans_func(mu[:,:-1,:]).sum(dim=1) + b
true = mu[:,-1,:]
loss = lossfunc(pred, true) #+ torch.mean(adaptive.lossfun((pred - true)))
opt.zero_grad()
loss.backward()
opt.step()
if counter % 100 == 0:
print(loss.item())
counter += 1
```
### Visualize causal matrix
```
B2 = model.transition_prior.transition.w[0][col_ind][:, col_ind].detach().cpu().numpy()
B1 = model.transition_prior.transition.w[1][col_ind][:, col_ind].detach().cpu().numpy()
B1 = B1 * mask.reshape(1,-1) * (mask).reshape(-1,1)
B2 = B2 * mask.reshape(1,-1) * (mask).reshape(-1,1)
BB2 = np.load("/srv/data/ltcl/data/linear_nongaussian_ts/W2.npy")
BB1 = np.load("/srv/data/ltcl/data/linear_nongaussian_ts/W1.npy")
# b = np.concatenate((B1,B2), axis=0)
# bb = np.concatenate((BB1,BB2), axis=0)
# b = b / np.linalg.norm(b, axis=0).reshape(1, -1)
# bb = bb / np.linalg.norm(bb, axis=0).reshape(1, -1)
# pred = (b / np.linalg.norm(b, axis=0).reshape(1, -1)).reshape(-1)
# true = (bb / np.linalg.norm(bb, axis=0).reshape(1, -1)).reshape(-1)
bs = [B1, B2]
bbs = [BB1, BB2]
with PdfPages(figure_path + '/entries.pdf') as pdf:
fig, axs = plt.subplots(1,2, figsize=(4,2))
for tau in range(2):
ax = axs[tau]
b = bs[tau]
bb = bbs[tau]
b = b / np.linalg.norm(b, axis=0).reshape(1, -1)
bb = bb / np.linalg.norm(bb, axis=0).reshape(1, -1)
pred = (b / np.linalg.norm(b, axis=0).reshape(1, -1)).reshape(-1)
true = (bb / np.linalg.norm(bb, axis=0).reshape(1, -1)).reshape(-1)
ax.scatter(pred, true, s=10, cmap=plt.cm.coolwarm, zorder=10, color='b')
lims = [-0.75, 0.75]
# now plot both limits against each other
ax.plot(lims, lims, '-.', alpha=0.75, zorder=0)
# ax.set_xlim(lims)
# ax.set_ylim(lims)
ax.set_xlabel("Estimated weight")
ax.set_ylabel("Truth weight")
ax.set_title(r"Entries of $\mathbf{B}_%d$"%(tau+1))
plt.tight_layout()
pdf.savefig(fig, bbox_inches="tight")
fig, axs = plt.subplots(2,4, figsize=(4,2))
for i in range(8):
row = i // 4
col = i % 4
ax = axs[row,col]
ax.scatter(B[:,i], A[:,i], s=4, color='b', alpha=0.25)
ax.axis('off')
# ax.set_xlabel('Ground truth latent')
# ax.set_ylabel('Estimated latent')
# ax.grid('..')
fig.tight_layout()
import numpy as numx
def calculate_amari_distance(matrix_one,
matrix_two,
version=1):
""" Calculate the Amari distance between two input matrices.
:param matrix_one: the first matrix
:type matrix_one: numpy array
:param matrix_two: the second matrix
:type matrix_two: numpy array
:param version: Variant to use.
:type version: int
:return: The amari distance between two input matrices.
:rtype: float
"""
if matrix_one.shape != matrix_two.shape:
return "Two matrices must have the same shape."
product_matrix = numx.abs(numx.dot(matrix_one,
numx.linalg.inv(matrix_two)))
product_matrix_max_col = numx.array(product_matrix.max(0))
product_matrix_max_row = numx.array(product_matrix.max(1))
n = product_matrix.shape[0]
""" Formula from ESLII
Here it is referred to as the "Amari error".
The value is in [0, N-1].
reference:
Bach, F. R.; Jordan, M. I. Kernel Independent Component
Analysis, J MACH LEARN RES, 2002, 3, 1--48
"""
amari_distance = product_matrix / numx.tile(product_matrix_max_col, (n, 1))
amari_distance += product_matrix / numx.tile(product_matrix_max_row, (n, 1)).T
amari_distance = amari_distance.sum() / (2 * n) - 1
amari_distance = amari_distance / (n-1)
return amari_distance
print("Amari distance for B1:", calculate_amari_distance(B1, BB1))
print("Amari distance for B2:", calculate_amari_distance(B2, BB2))
```
# 5.3 – Open Systems and Enthalpy
---
## 5.3.0 – Learning Objectives
By the end of this section you should be able to:
1. Understand the definition of enthalpy.
2. Explain how enthalpy differs from internal energy
3. Look at the steps to solving an enthalpy problem.
---
## 5.3.1 – Introduction
To account for __open__ systems, enthalpy is used as a __measure of energy__ in a system. This notebook gives a short explanation of enthalpy and works through problem 7.4-2 of the textbook.
Recall that the energy balance in a **closed system** is $E_{tot} = Q - W$, which, after neglecting potential and kinetic energy, becomes: $U = Q-W$.
In an open system, there is a **transfer of material**. This means that the internal energy $U$ needs a correction to account for the particles leaving the system. Enthalpy $H$ is a measure of energy that accounts for open systems, and the energy balance becomes:
$$H = Q-W$$ and $$ \Delta H = \Delta Q- \Delta W $$
Enthalpy is extremely useful because many processes in chemical engineering are open systems and as a result, there are extensive databases that contain enthalpy values of various chemicals.
---
## 5.3.2 – Definition of Enthalpy
Enthalpy is defined as $H= U+PV$ and a change of enthalpy is defined as $\Delta H = \Delta U + \Delta (PV)$. This accounts for the open system by noting that the pressure of the system remains equal to the pressure of the surroundings.
---
## 5.3.3 – Problem Statement
Steam powers a turbine with a flow rate of 500 kg/h at 44 atm and $450^{\circ} \space C$. The steam enters the turbine at an average linear velocity of 60 m/s and exits 5 m below the turbine inlet at 360 m/s. The turbine produces 70 kW of shaft work and has a heat loss of $10^4$ kcal/h. Calculate the specific enthalpy change based on this process.
$$ \Delta \dot{H} + \Delta \dot{E_k} + \Delta \dot{E_p} = \dot{Q} - \dot {W_s}$$
Useful conversions: 
#### 1. List the assumptions and definitions
Our major assumption is that no work is done other than the 70 kW of shaft work, and there is no heat transfer other than the stated loss. Essentially, the only relevant data is what is given to us explicitly.
For simplicity, we will say that 4.4 MPa is close enough to 4.5 MPa on the steam table. __(all teachers are different; check with yours first!)__
Let any inlet flow take the subscript $i$ and outlet flow take the subscript $o$.
#### 2. Draw a flowchart and write the full equation to solve

$$ \Delta \dot{H} + \Delta \dot{E_k} + \Delta \dot{E_p} = \dot{Q} - \dot {W_s}$$
Expanded this becomes
$$(\dot{H_o}-\dot{H_i} )+ (\dot{E_{ko}} -\dot{E_{ki}}) + (\dot{E_{po}}-\dot{E_{pi}} )= \dot{Q} - \dot {W_s}$$
#### 3. Identify the known and unknown variables
__Be careful with units__; we want everything in terms of $\frac{kJ}{hr}$
$\dot{H}_i$ for 500 kg/h of steam at 44 atm and $450^{\circ} \space C$ = $3324.2 \space \frac{kJ}{kg} \cdot 500 \space \frac{kg}{hr}$ = $1,662,100 \space \frac{kJ}{hr}$
from the [nist](https://www.nist.gov/sites/default/files/documents/srd/NISTIR5078-Tab3.pdf) steam table
$H_o$ is __unknown__ because we do not know the temperature of steam leaving the turbine.
$\dot{E}_{ki}= \frac{1}{2} \cdot \dot{m} \cdot v^2 = 500 \space \frac{kg}{hr} \cdot \frac{1}{2} \cdot (60 \space \frac{m}{s})^2 \cdot \frac{1 \space kJ}{1000 \space J} = 900 \space \frac{kJ}{hr}$
$\dot{E}_{ko} = \frac{1}{2} \cdot \dot{m} \cdot v^2 = 500 \space \frac{kg}{hr} \cdot \frac{1}{2} \cdot (360 \space \frac{m}{s})^2 \cdot \frac{1 \space kJ}{1000 \space J} = 32,400 \space \frac{kJ}{hr}$
$\dot{E}_{pi}= \dot{m} \cdot g \cdot h = 500 \space \frac{kg}{hr} \cdot 9.81 \space \frac{m}{s^2} \cdot \frac{1 \space kJ}{1000 \space J} \cdot 5 \space m = 24.525 \space \frac{kJ}{hr}$
$\dot{E}_{po}= \dot{m} \cdot g \cdot h = 500 \space \frac{kg}{hr} \cdot 9.81 \space \frac{m}{s^2} \cdot \frac{1 \space kJ}{1000 \space J} \cdot 0 \space m = 0 \space \frac{kJ}{hr}$
$\dot{Q} = -10^4 \space \frac{kcal}{hr} \times 4184 \space \frac{J}{kcal} \times \frac{1 \space kJ}{1000 \space J} = -41,840 \space \frac{kJ}{hr}$ (negative because the heat is lost)
$\dot{W}_{s} = 70 \space kW \cdot \frac{3600 \space s}{1 \space hr} = 252,000 \space \frac{kJ}{hr}$ (work is done by the system, so it enters the balance with a positive sign)
#### 4. Solving the system
__Note:__ This is a relatively easy question whose challenge lies in the details, so a degree-of-freedom (DOF) analysis will not be performed.
$$(\dot{H_o}-\dot{H_i} )+ (\dot{E_{ko}} -\dot{E_{ki}}) + (\dot{E_{po}}-\dot{E_{pi}} )= \dot{Q} - \dot {W_s}$$
Becomes (units are omitted for clarity):
$$ (\dot{H_o} - 1,662,100) + (32,400 - 900) + (0 - 24.525) = - (41,840 + 252,000) $$
$$\dot{H_o} = 1,336,784 \space \frac{kJ}{hr} $$
$$\therefore \space \Delta \dot{H} = 1,336,784 - 1,662,100 = -325,316 \space \frac{kJ}{hr} $$
Converting the enthalpy change back to a specific (per-unit-mass) property:
$$\Delta H = \frac{-325,316 \space \frac{kJ}{hr}}{500 \space \frac{kg}{hr}} = - 651 \space \frac{kJ}{kg}$$
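The arithmetic above is easy to mistype, so here is a minimal Python sketch (not part of the original problem) that recomputes the balance; the inlet specific enthalpy of 3324.2 kJ/kg is the steam-table value quoted above, and every other number comes from the problem statement.
```python
# Hedged sketch: recompute the open-system energy balance for the turbine problem.
m_dot = 500.0                              # mass flow rate [kg/h]
H_in = 3324.2 * m_dot                      # inlet enthalpy flow [kJ/h]
Ek_in = 0.5 * m_dot * 60.0**2 / 1000.0     # kinetic energy in  [kJ/h]
Ek_out = 0.5 * m_dot * 360.0**2 / 1000.0   # kinetic energy out [kJ/h]
Ep_in = m_dot * 9.81 * 5.0 / 1000.0        # potential energy in [kJ/h] (inlet 5 m above outlet)
Ep_out = 0.0                               # potential energy out [kJ/h]
Q = -1e4 * 4184.0 / 1000.0                 # heat loss [kJ/h]
Ws = 70.0 * 3600.0                         # shaft work produced [kJ/h]
# Energy balance: dH + dEk + dEp = Q - Ws, solved for the outlet enthalpy flow.
H_out = H_in - (Ek_out - Ek_in) - (Ep_out - Ep_in) + Q - Ws
dH_specific = (H_out - H_in) / m_dot
print(f"H_out = {H_out:,.0f} kJ/h, specific dH = {dH_specific:.1f} kJ/kg")  # about -650.6 kJ/kg
```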
# Ray RLlib Multi-Armed Bandits - A Simple Bandit Example
© 2019-2021, Anyscale. All Rights Reserved

Let's explore a very simple contextual bandit example with three arms. We'll run trials using RLlib and [Tune](http://tune.io), Ray's hyperparameter tuning library.
```
import gym
from gym.spaces import Discrete, Box
import numpy as np
import random
import time
import ray
```
We define the bandit as a subclass of an OpenAI Gym environment. We set the action space to have three discrete variables, one action for each arm, and an observation space (the context) in the range -1.0 to 1.0, inclusive. (See the [configuring environments](https://docs.ray.io/en/latest/rllib-env.html#configuring-environments) documentation for more details about creating custom environments.)
There are two contexts defined. Note that we'll randomly pick one of them to use when `reset` is called, but it stays fixed (static) throughout the episode (the set of steps between calls to `reset`).
```
class SimpleContextualBandit (gym.Env):
def __init__ (self, config=None):
self.action_space = Discrete(3) # 3 arms
self.observation_space = Box(low=-1., high=1., shape=(2, ), dtype=np.float64) # Random (x,y), where x,y from -1 to 1
self.current_context = None
self.rewards_for_context = { # 2 contexts: -1 and 1
-1.: [-10, 0, 10],
1.: [10, 0, -10],
}
def reset (self):
self.current_context = random.choice([-1., 1.])
return np.array([-self.current_context, self.current_context])
def step (self, action):
reward = self.rewards_for_context[self.current_context][action]
return (np.array([-self.current_context, self.current_context]), reward, True,
{
"regret": 10 - reward
})
def __repr__(self):
return f'SimpleContextualBandit(action_space={self.action_space}, observation_space={self.observation_space}, current_context={self.current_context}, rewards per context={self.rewards_for_context})'
```
Look at the definition of `self.rewards_for_context`. For context `-1.`, choosing the **third** arm (index 2 in the array) maximizes the reward, yielding `10.0` for each pull. Similarly, for context `1.`, choosing the **first** arm (index 0 in the array) maximizes the reward. It is never advantageous to choose the second arm.
We'll see if our training results agree ;)
Try repeating the next two code cells enough times to see the `current_context` set to `1.0` and `-1.0`, which is initialized randomly in `reset()`.
```
bandit = SimpleContextualBandit()
observation = bandit.reset()
f'Initial observation = {observation}, bandit = {repr(bandit)}'
```
The `bandit.current_context` and the observation of the current environment will remain fixed through the episode.
```
print(f'current_context = {bandit.current_context}')
for i in range(10):
action = bandit.action_space.sample()
observation, reward, done, info = bandit.step(action)
print(f'observation = {observation}, action = {action}, reward = {reward:4d}, done = {str(done):5s}, info = {info}')
```
Look at the `current_context`. If it's `1.0`, does the `0` (first) action yield the highest reward and lowest regret? If it's `-1.0`, does the `2` (third) action yield the highest reward and lowest regret? The `1` (second) action always returns `0` reward, so it's never optimal.
## Using LinUCB
For this simple example, we can easily determine the best actions to take. Let's see how well our system does. We'll train with [LinUCB](https://docs.ray.io/en/latest/rllib-algorithms.html?highlight=greedy#linear-upper-confidence-bound-contrib-linucb), a linear version of _Upper Confidence Bound_, for the exploration-exploitation strategy. _LinUCB_ assumes a linear dependency between the expected reward of an action and its context. Recall that a linear function is of the form $z = ax + by + c$, for example, where $x$, $y$, and $z$ are variables and $a$, $b$, and $c$ are constants. _LinUCB_ models the representation space using a set of linear predictors. Hence, the $Q_t(a)$ _value_ function discussed for UCB in the [previous lesson](02-Exploration-vs-Exploitation-Strategies.ipynb) is assumed to be a linear function here.
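As a rough sketch of the model behind this (the standard disjoint LinUCB formulation; the `contrib/LinUCB` implementation may differ in its details), each arm $a$ keeps a ridge-regression estimate $\hat{\theta}_a = A_a^{-1} b_a$ of its reward weights and the agent picks the arm with the largest optimistic estimate:
$$a_t = \arg\max_a \left( \hat{\theta}_a^\top x_t + \alpha \sqrt{x_t^\top A_a^{-1} x_t} \right), \qquad A_a = I_d + \sum_{s \le t,\; a_s = a} x_s x_s^\top,$$
where $x_t$ is the context vector, $b_a$ accumulates $r_s x_s$ over the rounds where arm $a$ was pulled, and $\alpha$ controls the width of the confidence bound.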
Look again at how we defined `rewards_for_context`. Is it linear as expected for _LinUCB_?
```python
self.rewards_for_context = {
-1.: [-10, 0, 10],
1.: [10, 0, -10],
}
```
Yes, for each arm, the reward is linear in the context. For example, the first arm has a reward of `-10` for context `-1.0` and `10` for context `1.0`. Crucially, the _same_ linear function that works for the first arm will work for the other two arms if you multiply its constants by `0` and `-1`, respectively. Hence, we expect _LinUCB_ to work well for this example.
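As a quick sanity check (this snippet is an addition, not part of the original lesson), the whole reward table can be generated from one slope per arm:
```python
import numpy as np
# Each arm's reward is linear in the context: reward_k(x) = w_k * x, with w = [10, 0, -10].
weights = np.array([10.0, 0.0, -10.0])   # slopes for arms 0, 1, 2
contexts = np.array([-1.0, 1.0])
rewards = np.outer(contexts, weights)    # rows = contexts, columns = arms
print(rewards)                           # [[-10. -0. 10.], [10. 0. -10.]]; matches rewards_for_context
```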
Now use Tune to train the policy for this bandit. But first, we want to start Ray on your laptop or connect to the running Ray cluster if you are working on the Anyscale platform.
```
ray.init(ignore_reinit_error=True)
stop = {
"training_iteration": 200,
"timesteps_total": 100000,
"episode_reward_mean": 10.0,
}
config = {
"env": SimpleContextualBandit,
}
from ray.tune.progress_reporter import JupyterNotebookReporter
```
Calling `ray.tune.run` below would handle Ray initialization for us, if Ray isn't already running. If you want to prevent this and have Tune exit with an error when Ray isn't already initialized, then pass `ray_auto_init=False`.
```
analysis = ray.tune.run("contrib/LinUCB", config=config, stop=stop,
progress_reporter=JupyterNotebookReporter(overwrite=False), # This is the default, actually.
verbose=2, # Change to 0 or 1 to reduce the output.
)
```
(A lot of output is printed with `verbose` set to `2`. Use `0` for no output and `1` for short summaries.)
How long did it take?
```
stats = analysis.stats()
secs = stats["timestamp"] - stats["start_time"]
print(f'{secs:7.2f} seconds, {secs/60.0:7.2f} minutes')
```
We can see some of the final data as a dataframe:
```
df = analysis.dataframe(metric="episode_reward_mean", mode="max")
df
```
The easiest way to inspect the progression of training is to use TensorBoard.
1. If you are running on the Anyscale Platform, click the _TensorBoard_ link.
2. If you are running this notebook on a laptop, open a terminal window using the `+` under the _Edit_ menu, run the following command, then open the URL shown.
```
tensorboard --logdir ~/ray_results
```
You may have many data sets plotted from previous tutorial lessons. In the _Runs_ on the left, look for one named something like this:
```
contrib/LinUCB/contrib_LinUCB_SimpleContextualBandit_0_YYYY-MM-DD_HH-MM-SSxxxxxxxx
```
If you have several of them, you want the one with the latest timestamp. To select just that one, click _toggle all runs_ below the list of runs, then select the one you want. You should see something like [this image](../../images/rllib/TensorBoard1.png).
The graph for the metric we were optimizing, the mean reward, is shown with a rectangle surrounding it. It improved steadily during the training runs. For this simple example, the reward mean is easily found in 200 steps.
## Exercise 1
Change the `step` method so that it randomly changes the `current_context` on each invocation:
```python
def step(self, action):
reward = self.rewards_for_context[self.current_context][action]
self.current_context = random.choice([-1.,1.])
return (np.array([-self.current_context, self.current_context]), reward, True,
{
"regret": 10 - reward
})
```
Repeat the training and analysis. Does the training behavior change in any appreciable way? Why or why not?
See the [solutions notebook](solutions/Multi-Armed-Bandits-Solutions.ipynb) for discussion of this and the following exercises.
## Exercise 2
Recall the `rewards_for_context` we used:
```python
self.rewards_for_context = {
-1.: [-10, 0, 10],
1.: [10, 0, -10],
}
```
We said that Linear Upper Confidence Bound assumes a linear dependency between the expected reward of an action and its context. It models the representation space using a set of linear predictors.
Change the values for the rewards as follows, so they no longer have the same simple linear relationship:
```python
self.rewards_for_context = {
-1.: [-10, 10, 0],
1.: [0, 10, -10],
}
```
Run the training again and look at the results for the reward mean in TensorBoard. How successful was the training? How smooth is the plot for `episode_reward_mean`? How many steps were taken in the training?
## Exercise 3 (Optional)
We briefly discussed another algorithm for selecting the next action, _Thompson Sampling_, in the [previous lesson](02-Exploration-vs-Exploitation-Strategies.ipynb). Repeat exercises 1 and 2 using the linear version, called _Linear Thompson Sampling_ ([RLlib documentation](https://docs.ray.io/en/latest/rllib-algorithms.html?highlight=greedy#linear-thompson-sampling-contrib-lints)). To make this change, look at this code we used above:
```python
analysis = ray.tune.run("contrib/LinUCB", config=config, stop=stop,
progress_reporter=JupyterNotebookReporter(overwrite=False), # This is the default, actually.
verbose=1)
```
Change `contrib/LinUCB` to `contrib/LinTS`.
We'll continue exploring usage of _LinUCB_ in the next lesson, [04 Linear Upper Confidence Bound](04-Linear-Upper-Confidence-Bound.ipynb) and _LinTS_ in the following lesson, [05 Thompson Sampling](05-Linear-Thompson-Sampling.ipynb).
```
ray.shutdown()
```
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from scipy import signal
from scipy.fftpack import fft, ifft
import seaborn as sns
from obspy.io.segy.segy import _read_segy
from las import LASReader
from tabulate import tabulate
from scipy.optimize import curve_fit
import pandas as pd
%matplotlib inline
plt.style.use('seaborn-white')
tk1 = LASReader('tokal1-final.las', null_subs=np.nan)
print (tk1.curves.names)
print(tk1.curves.DEPTH)  # Depth units are metres.
print(tk1.curves.DTCO)  # Compressional slowness is in us/ft, so it is converted to us/m.
z = tk1.data['DEPTH']
dtco = tk1.data['DTCO']*3.28084  # Convert to us/m
dtco_ori = tk1.data['DTCO']
KB = tk1.well.EKB.data
DF = tk1.well.EDF.data
lec1 = 601.288  # Start depth of the DTCO log
lecn = tk1.stop  # Final depth of the log
lec1_corr = float(lec1) - float(DF)
vel_remp = 1840 # m/s
remp1_twt = 2 * lec1_corr/vel_remp
tiempo_lec1 = remp1_twt
print(tabulate([['Parámetro de referencia','Magnitud', 'Unidad'],
['KB: Kelly Bushing', KB,'m'],
['Piso de perforación (DF)',DF,'m'],
['Inicio medición (MD)',np.round(lec1,2),'m'],
['Fin medición (MD)',lecn,'m'],
['Tiempo de inicio de registro',np.round(tiempo_lec1,2),'s'],
['Inicio medición SRD',np.round(lec1_corr,2),'m']],
headers="firstrow", tablefmt='grid', numalign='center'))
dtco_medfilt = signal.medfilt(dtco,9)  # Smoothed DTCO
dtco_medfilt_ori = signal.medfilt(dtco_ori,9)  # Smoothed original DTCO (us/ft)
plt.figure(figsize=[18,6])
plt.subplot(2,1,1)
_ = plt.plot(z, dtco, 'lightblue', alpha=0.8, linewidth=3, label = 'Original')
_ = plt.plot(z, dtco_medfilt, 'b', linewidth=1, label = 'Suavizado')
_ = plt.xlim(500, 4500)
_ = plt.xticks(np.linspace(500,4500,17), [500,750,1000,1250,1500,1750,2000,2250,2500,2750,3000,3250,3500,3750,4000,4250,4500])
_ = plt.grid(True, alpha = 0.8, linestyle=':')
_ = plt.legend()
_ = plt.xlabel('Profundidad [m]', fontsize=11)
_ = plt.ylabel('Lentitud [us/m]', fontsize=11)
_ = plt.title('Lentitud sónica DTCO', fontsize=11, weight = 'semibold', color='black')
plt.figure(figsize=[18,6])
plt.subplot(2,1,1)
_ = plt.plot(z, 1000000/dtco, 'gray', alpha=0.8, linewidth=3, label = 'Original')
_ = plt.plot(z, 1000000/dtco_medfilt, 'k', linewidth=1, label = 'Suavizado')
_ = plt.xlim(500, 4500)
#_ = plt.ylim(2000,3000)
_ = plt.xticks(np.linspace(500,4500,17), [500,750,1000,1250,1500,1750,2000,2250,2500,2750,3000,3250,3500,3750,4000,4250,4500])
_ = plt.grid(True, alpha = 0.8, linestyle=':')
_ = plt.legend()
_ = plt.xlabel('Profundidad [m]', fontsize=11)
_ = plt.ylabel('Velocidad [m/s]', fontsize=11)
_ = plt.title('Velocidad sónica DTCO', fontsize=11, weight = 'semibold', color='black')
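# Build the time-depth (TZ) relationship: scale the smoothed slowness by the
# 0.1525 m depth increment, take the cumulative sum, double it to get two-way
# time, and add the replacement-velocity time down to the first log reading.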
scaled_dt = 0.1525 *np.nan_to_num(dtco_medfilt[3892:])/1e6
tcum = 2 * np.cumsum(scaled_dt)
tdr = tcum + tiempo_lec1  # TZ (time-depth) curve
plt.figure(figsize=[18,3])
_ = plt.plot(z[3892:],tdr, lw=2)
_ = plt.xlim(0, 4500)
_ = plt.ylim(0, 3.5)
_ = plt.grid(True, alpha = 0.6, linestyle=':')
_ = plt.xlabel('Profundidad [m]', fontsize=11)
_ = plt.ylabel('Tiempo [s]', fontsize=11)
_ = plt.title('Función de conversión profunidad a tiempo', fontsize=11, weight = 'semibold', color='black')
dt = 0.00002  # Sampling interval
maxt = 3.5  # Maximum time
t = np.arange(tiempo_lec1, maxt, dt)  # Time vector
dtco_t = np.interp(x = t, xp = tdr, fp = dtco_medfilt[3892:])
plt.figure(figsize=[18,3])
_ = plt.plot(t,1000000/dtco_t, 'k')
_ = plt.xlabel('Tiempo [s]', fontsize=11)
_ = plt.ylabel('Velocidad [m/s]', fontsize=11)
_ = plt.title('Velocidad sónica DTCO', fontsize=11, weight = 'semibold', color='black')
_ = plt.grid(True, alpha = 0.6, linestyle=':')
tops = {}
with open('cimas.txt') as f:
for line in f.readlines():
if not line.startswith('#'):
temp = line.strip().split('\t')
tops[temp[-1].replace('_',' ')] = float(temp[1])
tops
tops.items() , tops.values()
def find_nearest(array, value):
idx = (np.abs(array - value)).argmin()
return idx
tops_twt = {}
for key, val in tops.items():
tops_twt[key] = tdr[find_nearest(z[3892:], val)]
tops_twt
f2 = plt.figure(figsize=[12,10])
ax1 = f2.add_axes([0.05, 0.1, 0.2, 0.9])
ax1.plot(dtco,z,'steelblue', alpha=1, lw=1.2)
ax1.set_title('Lentitud sónica DTCO', style = 'normal', fontsize = 12, weight = 'black')
ax1.set_ylabel('Profundidad [m]', fontsize = 10, weight='black')
ax1.set_xlabel('[us/m]', fontsize = 10)
ax1.set_ylim(3700, 4000)
#ax1.set_xticks( [0.0e7, 0.5e7, 1.0e7, 1.5e7, 2.0e7 ] )
ax1.invert_yaxis()
ax1.grid(True, alpha = 0.6, linestyle=':')
ax2 = f2.add_axes([0.325, 0.1, 0.2, 0.9])
ax2.plot(dtco_t, t,'gray', alpha=1, lw=1.2)
ax2.set_title('Lentitud sónica DTCO', style = 'normal', fontsize = 12, weight = 'black')
ax2.set_ylabel('Tiempo doble de viaje [s]', fontsize = 10, weight= 'black' )
ax2.set_xlabel('[us/m]', fontsize = 10)
ax2.set_ylim(2.70, 2.9)
ax2.invert_yaxis()
ax2.grid(True, alpha = 0.6, linestyle=':')
ax3 = f2.add_axes([0.675, 0.1, 0.2, 0.9])
ax3.plot(1000000/dtco_t, t,'gray', alpha=1, lw=1.2)
ax3.set_title('Velocidad sónica DTCO', style = 'normal', fontsize = 12, weight = 'black')
ax3.set_xlabel('[m/s]', fontsize = 10)
ax3.set_ylim(2.70, 2.9)
ax3.invert_yaxis()
ax3.set_yticklabels('')
ax3.grid(True, alpha = 0.6, linestyle=':')
for i in range(1):
for top, depth in tops.items():
f2.axes[i].axhline( y = float(depth), color = 'r', lw = 1,
alpha = 0.5, xmin = 0.05, xmax = 0.95, ls ='--' )
f2.axes[i].text( x = 20, y = float(depth), s = top,
alpha=0.75, color='k',
fontsize = 9,
horizontalalignment = 'center',
verticalalignment = 'center',
bbox=dict(facecolor='white', alpha=0.1, lw = 0.5),
weight = 'bold')
for i in range(1,3):
for twt in tops_twt.values():
f2.axes[i].axhline( y = float(twt), color = 'r', lw = 1,
alpha = 0.5, xmin = 0.05, xmax = 0.95, ls='--')
for i in range(1,2):
for top, twt in tops_twt.items():
f2.axes[i].text( x = 590, y = float(twt), s = top,
alpha=0.75, color='k',
fontsize = 9,
horizontalalignment = 'center',
verticalalignment = 'center',
bbox=dict(facecolor='white', alpha=1, lw = 0.5),
weight = 'semibold')
#plt.savefig('Registros.png', transparent=False, dpi=400, bbox_inches='tight')
xline = _read_segy('sfsg_2007xline244.sgy', headonly=True)
seisx = np.stack([t.data for t in xline.traces])
horz=pd.read_csv('hor_arena_3.csv')
horz.columns
f3 = plt.figure(figsize=[18,11])
gs = gridspec.GridSpec(1,1)
ax1 = plt.subplot(gs[0])
percen = np.percentile(seisx,99)
im1 = ax1.imshow(seisx.T[:,:],vmin=-percen, vmax=percen, cmap="binary", aspect='auto', interpolation='gaussian')
ax1.plot(horz['y'],horz['z_delta'],'o',c='y')
ax1.plot([82,82], [0, 800], 'k--', lw=2)  # Location of the Tokal-1 well
ax1.set_title('Línea Transversal 244-SFSG', fontsize = 14, weight = 'semibold')
ax1.set_xlabel('No. traza', fontsize = 10)
ax1.set_ylabel('Tiempo [s]', fontsize = 12)
plt.xlim(72,92)
plt.ylim(750,675)
plt.yticks(np.linspace(675,750,13),[2.700,2.725,2.750,2.775,2.800,2.825,2.850,2.875,2.900,2.925,2.950,2.975,3.000])
ax1.grid(True, alpha = 0.6, linestyle=':')
base_log = ax1.get_position().get_points()[0][1]
cima_log = ax1.get_position().get_points()[1][1]
ax2 = ax1.figure.add_axes([0.46, base_log, 0.1, cima_log-base_log])
ax2.plot(1000000/dtco_t, t,'b', alpha=1, lw=0.8)
ax2.set_xlabel('', fontsize = '12')
plt.xlim(1000, 5000)
plt.ylim(2.7,3.0)
ax2.invert_yaxis()
ax2.set_axis_off()
ax2.grid(True, alpha = 0.6, linestyle=':')
for i in range(1,2):
for twt in tops_twt.values():
f3.axes[i].axhline( y = float(twt), color = 'b', lw = 2,
alpha = 0.5, xmin = -5, xmax = 8, ls='--')
for i in range(1,2):
for top, twt in tops_twt.items():
f3.axes[i].text( x = 1, y = float(twt), s = top,
alpha=0.75, color='k',
fontsize = 9,
horizontalalignment = 'center',
verticalalignment = 'center',
bbox=dict(facecolor='white', alpha=1, lw = 0.5),
weight = 'semibold')
#plt.savefig('xline244_gray.png', transparent=False, dpi=400, bbox_inches='tight')
f3 = plt.figure(figsize=[18,11])
gs = gridspec.GridSpec(1,1)
ax1 = plt.subplot(gs[0])
percen = np.percentile(seisx,99.8)
im1 = ax1.imshow(seisx.T[:,:],vmin=-percen, vmax=percen, cmap="seismic", aspect='auto', interpolation='gaussian')
ax1.plot(horz['y'],horz['z_delta'],'o',c='y')
ax1.plot([82,82], [0, 800], 'k--', lw=2)  # Location of the Tokal-1 well
ax1.set_title('Línea Transversal 244-SFSG', fontsize = 14, weight = 'semibold')
ax1.set_xlabel('No. traza', fontsize = 10)
ax1.set_ylabel('Tiempo [s]', fontsize = 12)
plt.xlim(72,92)
plt.ylim(750,675)
plt.yticks(np.linspace(675,750,13),[2.700,2.725,2.750,2.775,2.800,2.825,2.850,2.875,2.900,2.925,2.950,2.975,3.000])
ax1.grid(True, alpha = 0.6, linestyle=':')
base_log = ax1.get_position().get_points()[0][1]
cima_log = ax1.get_position().get_points()[1][1]
ax2 = ax1.figure.add_axes([0.46, base_log, 0.1, cima_log-base_log])
ax2.plot(1000000/dtco_t, t,'k', alpha=1, lw=0.8)
ax2.set_xlabel('', fontsize = '12')
plt.xlim(1000, 5000)
plt.ylim(2.7,3.0)
ax2.invert_yaxis()
ax2.set_axis_off()
ax2.grid(True, alpha = 0.6, linestyle=':')
for i in range(1,2):
for twt in tops_twt.values():
f3.axes[i].axhline( y = float(twt), color = 'k', lw = 2,
alpha = 0.5, xmin = -5, xmax = 8, ls='--')
for i in range(1,2):
for top, twt in tops_twt.items():
f3.axes[i].text( x = 1, y = float(twt), s = top,
alpha=0.75, color='k',
fontsize = 9,
horizontalalignment = 'center',
verticalalignment = 'center',
bbox=dict(facecolor='white', alpha=1, lw = 0.5),
weight = 'semibold')
#plt.savefig('xline244_seismic.png', transparent=False, dpi=400, bbox_inches='tight')
```
# Data Preprocessing
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```
# Importing the Datasets
```
dataset=pd.read_csv("E:\\Edu\\Data Science and ML\\Machinelearningaz\\Datasets\\Part 2 - Regression\\Section 6 - Polynomial Regression\\Position_Salaries.csv")
dataset.head()
dataset.plot(kind='box', subplots=True, layout=(2,2), sharex=False, sharey=False)
plt.show()
dataset.shape
dataset.hist()
plt.show()
dataset.describe()
X=dataset.iloc[:,1:2].values # (Matrix)
y=dataset.iloc[:,2].values # (Vector)
print(X)
print(y)
```
# Splitting the Dataset into Training Set and Test Set
```
# We are not splitting the dataset: it is small, so we use every observation
# to get the most accurate fit possible.
```
# Fitting Simple Linear Regression to Training Set
```
from sklearn.linear_model import LinearRegression
lin_reg=LinearRegression()
lin_reg.fit(X,y)
```
# Fitting Polynomial Regression to Training Set
```
from sklearn.preprocessing import PolynomialFeatures
poly_reg=PolynomialFeatures(degree=2)
X_poly=poly_reg.fit_transform(X) # It adds x^2 and x^0 to the X dataset
print(X_poly)
lin_reg_2=LinearRegression()
lin_reg_2.fit(X_poly,y)
```
# Visualising the Linear Regression results
```
plt.scatter(X,y,color='red')
plt.plot(X,lin_reg.predict(X),color='blue')
plt.title("Truth or Bluff (Linear Regression)")
plt.xlabel('Level/Position of work')
plt.ylabel('Salary')
plt.show()
```
# Visualising the Polynomial Regression results
```
plt.scatter(X,y,color='red')
plt.plot(X,lin_reg_2.predict(X_poly),color='blue')
plt.title("Truth or Bluff (Polynomial Regression)")
plt.xlabel('Level/Position of work')
plt.ylabel('Salary')
plt.show()
# The quadratic fit is not good enough, so increase the degree to 3
from sklearn.preprocessing import PolynomialFeatures
poly_reg=PolynomialFeatures(degree=3)
X_poly=poly_reg.fit_transform(X) # It adds x^2 and x^0 to the X dataset
print(X_poly)
lin_reg_3=LinearRegression()
lin_reg_3.fit(X_poly,y)
plt.scatter(X,y,color='red')
plt.plot(X,lin_reg_3.predict(X_poly),color='blue')
plt.title("Truth or Bluff (Polynomial Regression)")
plt.xlabel('Level/Position of work')
plt.ylabel('Salary')
plt.show()
# Increase the degree to 4
from sklearn.preprocessing import PolynomialFeatures
poly_reg=PolynomialFeatures(degree=4)
X_poly=poly_reg.fit_transform(X) # It adds x^2 and x^0 to the X dataset
print(X_poly)
lin_reg_4=LinearRegression()
lin_reg_4.fit(X_poly,y)
plt.scatter(X,y,color='red')
plt.plot(X,lin_reg_4.predict(X_poly),color='blue')
plt.title("Truth or Bluff (Polynomial Regression)")
plt.xlabel('Level/Position of work')
plt.ylabel('Salary')
plt.show()
```
# Visualising the Polynomial Regression with Higher Resolution and smoother Curve
```
X_grid=np.arange(min(X),max(X),0.1)
X_grid=X_grid.reshape((len(X_grid),1))
plt.scatter(X,y,color='red')
plt.plot(X_grid,lin_reg_4.predict(poly_reg.fit_transform(X_grid)),color='blue')
plt.title("Truth or Bluff (Polynomial Regression)")
plt.xlabel('Level/Position of work')
plt.ylabel('Salary')
plt.show()
```
# Predicting new Result with Linear Regression
```
lin_reg.predict(np.array([[6.5]])) # Predict the salary for level 6.5
```
# Predicting new Result with Polynomial Regression
```
lin_reg_4.predict(poly_reg.fit_transform(np.array([[6.5]])))
```
<h1> 2. Creating a sampled dataset </h1>
This notebook illustrates:
<ol>
<li> Sampling a BigQuery dataset to create datasets for ML
<li> Preprocessing with Pandas
</ol>
```
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
```
<h2> Create ML dataset by sampling using BigQuery </h2>
<p>
Let's sample the BigQuery data to create smaller datasets.
</p>
```
# Create SQL query using natality data after the year 2000
import google.datalab.bigquery as bq
query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
ABS(FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING)))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
"""
```
## Lab Task #1
Sample the BigQuery resultset (above) so that you have approximately 12,000 training examples and 3000 evaluation examples.
The training and evaluation datasets have to be well-distributed (not all the babies are born in Jan 2005, for example)
and should not overlap (no baby is part of both training and evaluation datasets).
Hint (highlight to see): <p style='color:white'>You will use MOD() on the hashmonth to divide the dataset into non-overlapping training and evaluation datasets, and RAND() to sample these to the desired size.</p>
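One possible sketch of this sampling, reusing the `query` string defined above (the `RAND() < 0.0005` thinning factor is a guess and may need tuning to land near the 12,000/3,000 targets):
```
trainQuery = "SELECT * FROM (" + query + ") WHERE MOD(hashmonth, 4) < 3 AND RAND() < 0.0005"
evalQuery  = "SELECT * FROM (" + query + ") WHERE MOD(hashmonth, 4) = 3 AND RAND() < 0.0005"
traindf = bq.Query(trainQuery).execute().result().to_dataframe()
evaldf  = bq.Query(evalQuery).execute().result().to_dataframe()
print('train examples: {}, eval examples: {}'.format(len(traindf), len(evaldf)))
```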
## Lab Task #2
Use Pandas to:
* Clean up the data to remove rows that are missing any of the fields.
* Simulate the lack of ultrasound.
* Change the plurality column to be a string.
Hint (highlight to see): <p>
Filtering:
<pre style='color:white'>
df = df[df.weight_pounds > 0]
</pre>
Lack of ultrasound:
<pre style='color:white'>
nous = df.copy(deep=True)
nous['is_male'] = 'Unknown'
</pre>
Modify plurality to be a string:
<pre style='color:white'>
twins_etc = dict(zip([1,2,3,4,5],
['Single(1)', 'Twins(2)', 'Triplets(3)', 'Quadruplets(4)', 'Quintuplets(5)']))
df['plurality'].replace(twins_etc, inplace=True)
</pre>
</p>
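Putting the hints together, one possible preprocessing sketch, assuming `traindf` and `evaldf` exist from Lab Task #1 (the `Multiple(2+)` grouping for the no-ultrasound copy is an assumption about how the simulation should behave):
```
import pandas as pd

def preprocess(df):
  # remove rows that are missing any of the fields
  df = df[df.weight_pounds > 0]
  df = df[df.mother_age > 0]
  df = df[df.plurality > 0]
  df = df[df.gestation_weeks > 0]
  # simulate the lack of ultrasound: a copy of the data with the gender unknown
  nous = df.copy(deep=True)
  nous['is_male'] = 'Unknown'
  nous['plurality'] = nous['plurality'].apply(lambda p: 'Multiple(2+)' if p > 1 else 'Single(1)')
  # change the plurality column to be a string
  twins_etc = dict(zip([1,2,3,4,5],
                       ['Single(1)', 'Twins(2)', 'Triplets(3)', 'Quadruplets(4)', 'Quintuplets(5)']))
  df['plurality'].replace(twins_etc, inplace=True)
  return pd.concat([df, nous])

traindf = preprocess(traindf)
evaldf = preprocess(evaldf)
traindf.head()
```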
## Lab Task #3
Write the cleaned-up data into CSV files. Change the name of the Pandas dataframes (traindf, evaldf) appropriately.
```
traindf.to_csv('train.csv', index=False, header=False)
evaldf.to_csv('eval.csv', index=False, header=False)
%bash
wc -l *.csv
head *.csv
tail *.csv
```
Copyright 2017-2018 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
```
import csv, time, requests, json, datetime
import hmac
import hashlib
import sys
!{sys.executable} -m pip install websocket-client
import websocket
import ssl
webSocket ='wss://stream.binance.com:9443/ws/xvgbtc@ticker'
ws = websocket.WebSocket(sslopt={"cert_reqs": ssl.CERT_NONE})
connect = ws.connect(webSocket)
binTick = ws.recv()
binPrice = json.loads(binTick)
# Build the HMAC-SHA256 signature over the ordered request parameters, as
# required by Binance's signed REST endpoints
def calculate_signature(secret, data=None, params=None):
ordered_data = data
query_string = '&'.join(["{}={}".format(d[0], d[1]) for d in ordered_data])
m = hmac.new(bytes(secret, 'latin-1'), query_string.encode('utf-8'), hashlib.sha256)
return m.hexdigest()
MY_API_KEY = ''
CLIENT_SECRET = ''
timeStamp = []
totalArray = []
orderUrl = 'https://api.binance.com/api/v3/order'
accountUrl = 'https://api.binance.com/api/v3/account'
tradeUrl = 'https://api.binance.com/api/v3/myTrades'
headers = {
'X-MBX-APIKEY': MY_API_KEY,
}
btcAccount = .0011
buyPrice = 0
sellPrice = 0
buy = False
sell = False
buyOrder = []
sellOrder = []
iBuy = 0
iSell = 0
# Main loop: keep one limit buy resting just below the best bid and one limit
# sell just above the best ask, re-placing them as the XVG/BTC book moves
while(True):
binTick = ws.recv()
binPrice = json.loads(binTick)
currentTargetBid = float(binPrice["b"])
currentTargetAsk = float(binPrice["a"])
    # no open buy order: place a limit buy one tick (1e-8) below the current best bid
    if(buy==False):
buyPrice = currentTargetBid
data = [
('symbol', binPrice["s"]),
('side', 'BUY'),
('type', 'LIMIT'),
('timeInForce', 'GTC'),
('quantity', '{:.8f}'.format(int(btcAccount/(currentTargetBid-0.00000001)))),
('price', '{:.8f}'.format(currentTargetBid-0.00000001)),
('recvWindow', '6000000'),
('timestamp', binPrice["E"]),
]
data.append(
('signature', calculate_signature(CLIENT_SECRET, data=data)))
r1 = requests.post(orderUrl, data=data, headers=headers)
r1Text = r1.text
print(r1Text)
r1ID = json.loads(r1Text)
try:
buyOrder.append(r1ID["orderId"])
print("here1")
buy = True
iBuy = iBuy + 1
except KeyError:
z = 0
while(z < iBuy):
data = [
('symbol', binPrice["s"]),
('orderId', buyOrder[z]),
('timestamp', binPrice["E"]),
('recvWindow', '6000000'),
]
data.append(
('signature', calculate_signature(CLIENT_SECRET, data=data)))
p1 = requests.delete(orderUrl, data=data, headers=headers)
print(p1.text)
z = z + 1
buyOrder = []
iBuy = 0
    # no open sell order: place a limit sell one tick (1e-8) above the current best ask
    if(sell==False):
sellPrice = currentTargetAsk
data = [
('symbol', binPrice["s"]),
('side', 'SELL'),
('type', 'LIMIT'),
('timeInForce', 'GTC'),
('quantity', '{:.8f}'.format(int(btcAccount/(currentTargetAsk+0.00000001)))),
('price', '{:.8f}'.format(currentTargetAsk+0.00000001)),
('recvWindow', '6000000'),
('timestamp', binPrice["E"]),
]
data.append(
('signature', calculate_signature(CLIENT_SECRET, data=data)))
r2 = requests.post(orderUrl, data=data, headers=headers)
r2Text = r2.text
print(r2Text)
r2ID = json.loads(r2Text)
try:
sellOrder.append(r2ID["orderId"])
print("here2")
sell = True
iSell = iSell + 1
except KeyError:
z = 0
while(z < iSell):
data = [
('symbol', binPrice["s"]),
('orderId', sellOrder[z]),
('timestamp', binPrice["E"]),
('recvWindow', '6000000'),
]
data.append(
('signature', calculate_signature(CLIENT_SECRET, data=data)))
p1 = requests.delete(orderUrl, data=data, headers=headers)
print(p1.text)
z = z + 1
sellOrder = []
iSell = 0
    # once the market trades through our resting prices, clear both flags so
    # fresh orders are placed on the next tick
    if(currentTargetBid > buyPrice):
        sell = False
        buy = False
    if(currentTargetAsk < sellPrice):
        buy = False
        sell = False
ws.close()
```
# Name
Data processing by creating a cluster in Cloud Dataproc
# Label
Cloud Dataproc, cluster, GCP, Cloud Storage, KubeFlow, Pipeline
# Summary
A Kubeflow Pipeline component to create a cluster in Cloud Dataproc.
# Details
## Intended use
Use this component at the start of a Kubeflow Pipeline to create a temporary Cloud Dataproc cluster to run Cloud Dataproc jobs as steps in the pipeline.
## Runtime arguments
| Argument | Description | Optional | Data type | Accepted values | Default |
|----------|-------------|----------|-----------|-----------------|---------|
| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | |
| region | The Cloud Dataproc region to create the cluster in. | No | GCPRegion | | |
| name | The name of the cluster. Cluster names within a project must be unique. You can reuse the names of deleted clusters. | Yes | String | | None |
| name_prefix | The prefix of the cluster name. | Yes | String | | None |
| initialization_actions | A list of Cloud Storage URIs identifying executables to execute on each node after the configuration is completed. By default, executables are run on the master and all the worker nodes. | Yes | List | | None |
| config_bucket | The Cloud Storage bucket to use to stage the job dependencies, the configuration files, and the job driver console’s output. | Yes | GCSPath | | None |
| image_version | The version of the software inside the cluster. | Yes | String | | None |
| cluster | The full [cluster configuration](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters#Cluster). | Yes | Dict | | None |
| wait_interval | The number of seconds to pause before polling the operation. | Yes | Integer | | 30 |
## Output
Name | Description | Type
:--- | :---------- | :---
cluster_name | The name of the cluster. | String
Note: You can recycle the cluster by using the [Dataproc delete cluster component](https://github.com/kubeflow/pipelines/tree/master/components/gcp/dataproc/delete_cluster).
## Cautions & requirements
To use the component, you must:
* Set up the GCP project by following these [steps](https://cloud.google.com/dataproc/docs/guides/setup-project).
* Run the component under a secret [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/#gcp-service-accounts) in a Kubeflow cluster. For example:
```
component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa'))
```
* Grant the following types of access to the Kubeflow user service account:
    * Read access to the Cloud Storage buckets which contain the initialization action files.
* The role, `roles/dataproc.editor` on the project.
## Detailed description
This component creates a new Dataproc cluster by using the [Dataproc create cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/create).
Follow these steps to use the component in a pipeline:
1. Install the Kubeflow Pipeline SDK:
```
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
```
2. Load the component using KFP SDK
```
import kfp.components as comp
dataproc_create_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/eb830cd73ca148e5a1a6485a9374c2dc068314bc/components/gcp/dataproc/create_cluster/component.yaml')
help(dataproc_create_cluster_op)
```
### Sample
Note: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template.
#### Set sample parameters
```
# Required Parameters
PROJECT_ID = '<Please put your project ID here>'
# Optional Parameters
EXPERIMENT_NAME = 'Dataproc - Create Cluster'
```
#### Example pipeline that uses the component
```
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
@dsl.pipeline(
name='Dataproc create cluster pipeline',
description='Dataproc create cluster pipeline'
)
def dataproc_create_cluster_pipeline(
project_id = PROJECT_ID,
region = 'us-central1',
name='',
name_prefix='',
initialization_actions='',
config_bucket='',
image_version='',
cluster='',
wait_interval='30'
):
dataproc_create_cluster_op(
project_id=project_id,
region=region,
name=name,
name_prefix=name_prefix,
initialization_actions=initialization_actions,
config_bucket=config_bucket,
image_version=image_version,
cluster=cluster,
wait_interval=wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa'))
```
#### Compile the pipeline
```
pipeline_func = dataproc_create_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
```
#### Submit the pipeline for execution
```
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
```
## References
* [Kubernetes Engine for Kubeflow](https://www.kubeflow.org/docs/started/getting-started-gke/#gcp-service-accounts)
* [Component Python code](https://github.com/kubeflow/pipelines/blob/master/component_sdk/python/kfp_component/google/dataproc/_create_cluster.py)
* [Component Docker file](https://github.com/kubeflow/pipelines/blob/master/components/gcp/container/Dockerfile)
* [Sample notebook](https://github.com/kubeflow/pipelines/blob/master/components/gcp/dataproc/create_cluster/sample.ipynb)
* [Dataproc create cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/create)
## License
By deploying or using this software you agree to comply with the [AI Hub Terms of Service](https://aihub.cloud.google.com/u/0/aihub-tos) and the [Google APIs Terms of Service](https://developers.google.com/terms/). To the extent of a direct conflict of terms, the AI Hub Terms of Service will control.
# Functions
- Functions let you define reusable code and keep programs organised and simple
- In practice, a function usually implements one small piece of functionality
- A class implements a larger piece of functionality
- As a rule of thumb, a function should not be longer than one screen
Every function in Python actually has a return value (None by default).
If you do not write a return statement, Python simply does not display the None.
If you do write a return statement, the function returns that value.
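A small example of the implicit None return:
```
def no_return():
    print('hello')

def with_return():
    return 'hello'

print(no_return())    # prints hello, then None (the implicit return value)
print(with_return())  # prints hello (the explicit return value)
```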
## Defining a function
def function_name(list of parameters):
    do something
![image.png](attachment:image.png)
- random, range and print, which we used earlier, are themselves functions or classes
```
def uuu():  # every function has a return value (None by default)
print('hello')
b='uuu'
print(b)
import random
com=random.randint(0,5)
while 1:
    # minimal completion: the original cell stopped at 'if com'; assume a simple
    # guess-the-number game against the random value above
    guess = eval(input('Guess a number between 0 and 5: '))
    if guess == com:
        print('Correct!')
        break
def num(x1,x2,x3):
    if x1>x2 and x1>x3:
        result = x1
    elif x2>x3 and x2>x1:
        result = x2
    elif x3>x1 and x3>x2:
        result = x3
    return result
num(x1=4,x2=2,x3=10)
def www():
print('HHHHH')
def www(name1,name2):
print(name1,'矮子')
print(name2,'pangzi')
www(name1='lll',name2='hhh')
def www(name1,name2='aaaa'):  # parameters with default values must come after the required ones
print(name1,'矮子')
print(name2,'pangzi')
www(name1='hhhhh')
www(name1='ssss',name2='bbbb')
# www()  # would raise a TypeError here because name1 has no default value
# function_name() calls the function; the bare name only references the function object
www
def www(name):
print(name,'hahah')
www('zzz')
def shu(a):
    if a%2==0:
        print('even')
    else:
        print('odd')
shu(4)
```
If a parameter has a default value, then when you call the function:
you may omit that argument, in which case the default value is used;
otherwise the value you pass in is used.
## Calling a function
- functionName()
- the parentheses "()" are what actually trigger the call
```
def u():
print('xxxxxx')
def g():
u()
g()
def a(f):
f()
a(g)
```

```
def fei(*,name):  # a bare * makes the following parameters keyword-only, so they must be passed by name
print('he',name)
fei(name='nnnn')
def thn(*ccc):  # variable-length parameter: any number of positional arguments can be passed
print(ccc)
thn(1,2,3,4)
def shu(*bbb):
res = 0
for i in bbb:
if i>res:
res = i
return res
shu(1,2,3,4)
def nba(*zzz):
nn = 0
for i in zzz:
nn=i+nn
return nn
nba(1,2,3,4)
def nba(*zzz):
nn = 0
count=1
for i in zzz:
nn=i+nn
count=count+1
mean = nn /(count-1)
return nn,mean
nba(3,4,5,6,7)
# compute the variance
def fc(*var):
nn = 0
count=1
hhh=0
for i in var:
nn=i+nn
count=count+1
mean = nn /(count-1)
for j in var:
hhh=hhh+(j-mean)**2
Dx=hhh/(count-1)
return nn,mean,Dx
fc(1,0,3,1,0)
```
## Functions with and without return values
- return hands a value back to the caller
- return can hand back multiple values at once
- Typically, when several functions cooperate to implement one feature, they communicate through return values
![image.png](attachment:image.png)
- You can of course also explicitly return None
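A small sketch of returning several values at once (the caller unpacks the returned tuple):
```
def min_max(*values):
    return min(values), max(values)   # packed into a tuple automatically

lo, hi = min_max(3, 7, 1, 9)          # tuple unpacking on the caller's side
print(lo, hi)
```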
## EP:

## Parameter types and keyword arguments
- ordinary parameters
- multiple parameters
- parameters with default values
- variable-length parameters
## Ordinary parameters
## Multiple parameters
## Default-value parameters
## Keyword-only (forced naming) parameters
## Variable-length parameters
- \*args
> - variable length: it collects however many positional arguments are passed, or none at all
- the collected arguments arrive as a tuple
- the name args can be changed; it is just the usual convention
- \**kwargs
> - the collected arguments arrive as a dict
- the arguments passed to it must be key=value pairs
- a signature such as name, \*args, name2, \**kwargs mixes these styles and uses the parameter names; see the sketch below
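A minimal sketch of mixing these parameter styles (the parameter and argument names are made up for illustration):
```
def profile(name, *args, name2='unknown', **kwargs):
    print(name, name2)
    print(args)     # extra positional arguments arrive as a tuple
    print(kwargs)   # extra keyword arguments arrive as a dict

profile('Li', 1, 2, 3, name2='Wang', city='Beijing', age=20)
```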
## Variable scope
- local variables (local)
- global variables (global)
- the globals() function returns a dict of all global variables, including imported names
- the locals() function returns a dict of all local variables at the current position
```
r=1000
def rr():
print(r)
rr()
r=1000
def zwj(*daxie):
da=0
xiao=0
shu=0
for i in daxie:
ASCLL = ord(i)
if 65<=ASCLL<=90:
da=da+1
elif 97<=ASCLL<=122:
xiao=xiao+1
elif 48<=ASCLL<=57:
shu=shu+1
return da,xiao,shu
zwj('1','3','v','z','a')
def sum(n):
res = 0
if n%2==0:
for i in range(2,n+1,2):
res += 1/i
else:
for j in range(1,n+1,2):
res = res+1/j
return res
sum(6)
def er():
num=input('')
n=input('')
res=0
for i in range(1,int(n)+1):
print(num*i)
res=res+int(num*i)
return res
er()
def er():
    num=input('digit: ')
    n=input('how many terms: ')
    res=0
    for i in range(1,int(n)+1):   # minimal repair: n is a string, so convert it and start at 1
        for j in range(i+1):
            print(num*i)
            res=res+int(num*i)
    return res                    # minimal repair: the original returned an undefined name r
res=0
for j in range(5):
for i in range(j+1):
res= res+5*10**i
print(res)
res = 0
for i in range(1,21):
rres = 1
for j in range(1,i+1):
rres *= j
print(rres)
# A ball is dropped from a height of 100 m and rebounds to half the previous height after each bounce.
# Find the total distance travelled when it hits the ground for the 10th time.
b=100
n=0
for i in range(1,11):
n=n+b/2+b
b=b/2
print(n)
```
## Note:
- global: a variable must be declared global inside a function before you assign to it
- Official explanation: This is because when you make an assignment to a variable in a scope, that variable becomes local to that scope and shadows any similarly named variable in the outer scope.
- ![image.png](attachment:image.png)
```
# variance
def fc(*var):
nn = 0
count=1
hhh=0
for i in var:
nn=i+nn
count=count+1
mean = nn /(count-1)
for j in var:
hhh=hhh+(j-mean)**2
Dx=hhh/(count-1)
return nn,mean,Dx
fc(1,2,1,0)
```
# Homework
- 1

```
def getpentagonalNnmeber(n):
n=0
k=0
for i in range(1,101):
for j in range(1,i+1):
n=j*(3*j-1)/2
print(n,end=' ')
k=k+1
if k%10==0:
print()
getpentagonalNnmeber(100)
```
- 2

```
def sumDigits(n):
ge=n//1000
shi=n%1000//100
bai=n%1000%100//10
qian=n%1000%100%10
shu=ge+shi+bai+qian
print(shu)
sumDigits(670)
```
- 3

```
def displaySortedNumbers(num1,num2,num3):
one,two,three=eval(input('Enter three numbers:'))
if one>two and two>three:
return('The sorted numbers are' ,three,two,one)
elif one>three and three>two:
return ('The sorted numbers are' ,two,three,one)
elif two>one and one>three:
return ('The sorted numbers are' ,three,one,two)
elif two>three and three>one:
return ('The sorted numbers are',one,three,two)
elif three>one and one>two:
return ('The sorted numbers are' ,two,one,three)
elif three>two and two>one:
return ('The sorted numbers are',one,two,three )
displaySortedNumbers(3,2.4,5)
displaySortedNumbers(31,12.4,15)
def displaySortedNumbers(one,two,three):
if one>two and two>three:
return('The sorted numbers are' ,three,two,one)
elif one>three and three>two:
return ('The sorted numbers are' ,two,three,one)
elif two>one and one>three:
return ('The sorted numbers are' ,three,one,two)
elif two>three and three>one:
return ('The sorted numbers are',one,three,two)
elif three>one and one>two:
return ('The sorted numbers are' ,two,one,three)
elif three>two and two>one:
return ('The sorted numbers are',one,two,three )
displaySortedNumbers(31,12.4,15)
```
- 4

```
def futureInvestmentValue(investemntAmount,monthlyInterestRate, years):
zhi=eval(input('The amount invested:'))
nian=eval(input('Annual interest:'))
```
- 5

```
def hanshu(ch1,ch2):
    # minimal repair: the original used an undefined k and called ord() on ints;
    # assume the intent is to print the characters between the two code points, 10 per line
    k = 0
    for i in range(ch1, ch2 + 1):
        print(chr(i), end=' ')
        k = k + 1
        if k % 10 == 0:
            print()
hanshu(49,91)
```
- 6

```
def numberOfDaysInAYear(year):
for i in range(2010,2021):
        if i % 4 == 0 and i % 100 != 0 or i % 400 == 0:
            print(i,'366 days')
        else:
            print(i,'365 days')
numberOfDaysInAYear(2020)
```
- 7

```
def distance(x1,y1,x2,y2):
cm=((x1-x2)**2+(y1-y2)**2)**0.5
print(cm)
distance(1,2,1,0)
```
- 8

```
def sushu(uuu):
    # minimal repair: the original iterated over an int and tested divisors starting at i itself;
    # assume the intent is to print the prime numbers up to uuu
    for i in range(2, uuu + 1):
        for j in range(2, i):
            if i % j == 0:
                break
        else:
            print(i)
sushu(31)
```
- 9


- 10

- 11
### Search online for how to send an email with Python code
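A minimal sketch using the standard-library smtplib (the server address, account, password and recipient below are placeholders and must be replaced with real values):
```
import smtplib
from email.mime.text import MIMEText

msg = MIMEText('Hello from Python!')
msg['Subject'] = 'Test email'
msg['From'] = 'sender@example.com'       # placeholder sender
msg['To'] = 'receiver@example.com'       # placeholder recipient

# placeholder SMTP server and credentials
with smtplib.SMTP_SSL('smtp.example.com', 465) as server:
    server.login('sender@example.com', 'app-password')
    server.send_message(msg)
```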
# HW3
### Samir Patel
### DATA 515a
### 4/20/2017
For this homework, you are a data scientist working for Pronto (before the end of their contract with the City of Seattle). Your job is to assist in determining how to do end-of-day adjustments in the number of bikes at stations so that all stations will have enough bikes for the next day of operation (as estimated by the weekday average for the station for the year). Your assistance will help in constructing a plan for each day of the week that specifies how many bikes should be moved from each station and how many bikes must be delivered to each station.
Your assignment is to construct plots of the differences between 'from' and 'to' counts for each station by day of the week. Do this as a set of 7 subplots. You should use at least one function to construct your plots.
### Grading
- 2-pts: create a dataframe with station counts averages by day-of-week
- 1-pt: structure the 7 day-of-week plots as subplots
- 1-pt: label the plots by day-of-week
- 1-pt: label the x-axis for plots in the last row and label the y-axis for plots in the left-most column
```
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
df = pd.read_csv("2015_trip_data.csv")
```
Obtaining day of the week for start_weekday and stop_weekday columns using Pandas "to_datetime" function.
Obtaining date (sans time) using Pandas "to_datetime" function.
Then adding to dataframe.
```
start_weekday = [pd.to_datetime(x).dayofweek for x in df.starttime]
stop_weekday = [pd.to_datetime(x).dayofweek for x in df.stoptime]
df['startweekday'] = start_weekday # Creates a new column named 'startweekday'
df['stopweekday'] = stop_weekday
startdate = pd.DatetimeIndex(df['starttime'])
df['date'] = startdate.date
```
Taking new date column, removing duplicate dates and then grouping to determine the unique counts of each of the 7 weekdays (i.e. how many days of the week occurred in the dataset)
```
t1 = startdate.date
t2 = pd.DataFrame(t1)
t3 = t2.drop_duplicates()
t4 = pd.DatetimeIndex(t3[0])
t5 = pd.DataFrame(t4.dayofweek)
day_counts = pd.value_counts(t5[0]).sort_index()
groupby_day_from = df.groupby(['from_station_id', 'startweekday']).size()
groupby_day_to = df.groupby(['to_station_id', 'stopweekday']).size()
df_counts = pd.DataFrame({'From': groupby_day_from, 'To': groupby_day_to})
```
Calculating average counts ("From", "To" and "To-From" (delta)) and adding into a dataframe with new headers.
```
dict_fromavg = {}
dict_toavg = {}
dict_deltaavg = {}
for station in df_counts.index.levels[0]:
dict_fromavg[station] = df_counts.From[station]/day_counts
dict_toavg[station] = df_counts.To[station]/day_counts
dict_deltaavg[station] = (df_counts.To[station]- df_counts.From[station])/day_counts
from_avg = pd.DataFrame(dict_fromavg)
from_avg = pd.DataFrame(from_avg.unstack())
to_avg = pd.DataFrame(dict_toavg)
to_avg = pd.DataFrame(to_avg.unstack())
delta_avg = pd.DataFrame(dict_deltaavg)
delta_avg = pd.DataFrame(delta_avg.unstack())
df_counts = df_counts.join(from_avg)
df_counts.columns.values[2] = 'From_Counts_Avg'
df_counts = df_counts.join(to_avg)
df_counts.columns.values[3] = 'To_Counts_Avg'
df_counts = df_counts.join(delta_avg)
df_counts.columns.values[4] = 'Delta_Avg'
df_counts2 = df_counts.unstack()
df_counts2 = df_counts2[df_counts2.index != 'Pronto shop']
```
Dataframe below contains station counts by day-of-week for station for bikes "From" and "To" a station.
In addition, it contains "From", "To" and "Delta_Avg" averages for each day-of-week.
```
df_counts2.head()
```
Adding plot_bar1 function for single plot creation (given inputs of dataframe, columns to plot and plotting options)
```
def plot_bar1(df, column, opts):
"""
Does a bar plot for a single column.
:param pd.DataFrame df:
:param str column: name of the column to plot
:param dict opts: key is plot attribute
"""
n_groups = len(df.index)
index = np.arange(n_groups) # The "raw" x-axis of the bar plot
rects1 = plt.bar(index, df[column])
if 'xlabel' in opts:
plt.xlabel(opts['xlabel'])
if 'ylabel' in opts:
plt.ylabel(opts['ylabel'])
if 'xticks' in opts and opts['xticks']:
plt.xticks(index, df.index) # Convert "raw" x-axis into labels
_, labels = plt.xticks() # Get the new labels of the plot
plt.setp(labels, rotation=90) # Rotate labels to make them readable
else:
labels = ['' for x in df.index]
plt.xticks(index, labels)
if 'ylim' in opts:
plt.ylim(opts['ylim'])
if 'title' in opts:
plt.title(opts['title'])
```
Adding plot_barN function for creating subplots.
```
def plot_barN(df, columns, opts):
"""
    Does a bar plot for each of the given columns, one subplot per column.
:param pd.DataFrame df:
:param list-of-str columns: names of the column to plot
:param dict opts: key is plot attribute
"""
num_columns = len(columns)
local_opts = dict(opts) # Make a deep copy of the object
idx = 0
for column in columns:
idx += 1
local_opts['xticks'] = False
local_opts['xlabel'] = ''
if idx == num_columns:
local_opts['xticks'] = True
local_opts['xlabel'] = opts['xlabel']
plt.subplot(num_columns, 1, idx)
plot_bar1(df, column, local_opts)
local_opts['title'] = columns #add title to opts at the end of loop
plt.title(opts['title'][idx-1]) #add title to each subplot
```
Creating subplots for the average difference of bike counts "To" a station minus "From" a station" for each day of the week.
```
fig = plt.figure(figsize=(20, 40)) # Controls global properties of the bar plot
opts = {'xlabel': 'Stations', 'ylabel': 'From-To Delta Counts', 'ylim': [-15, 15], 'title': ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday']}
plot_barN(df_counts2.Delta_Avg, [0,1,2,3,4,5,6], opts)
```
```
from source_files import SNOW_DEPTH_DIR, CASI_COLORS
from plot_helpers import *
from raster_compare.plots import PlotBase
from raster_compare.base import RasterFile
import math
from matplotlib.patches import Rectangle
from matplotlib.collections import PatchCollection
aso_snow_depth = RasterFile(
SNOW_DEPTH_DIR / '1m/20180524_ASO_snow_depth_1m.tif',
band_number=1
)
aso_snow_depth_values = aso_snow_depth.band_values()
np.ma.masked_where(
aso_snow_depth_values <= 0.0,
aso_snow_depth_values,
copy=False
)
sfm_snow_depth = RasterFile(
SNOW_DEPTH_DIR / '1m/20180524_Agisoft_snow_depth_1m.tif',
band_number=1
)
assert aso_snow_depth.geo_transform == sfm_snow_depth.geo_transform
sfm_snow_depth_values = sfm_snow_depth.band_values()
np.ma.masked_where(
aso_snow_depth_values.mask,
sfm_snow_depth_values,
copy=False
)
HS_SNOW_ON = SNOW_DEPTH_DIR / 'hillshade/20180524_Lidar_hs_ERW_basin_3m.tif'
hillshade_snow_on = RasterFile(HS_SNOW_ON, band_number=1);
```
# Snow Depth Difference
```
HIST_BIN_WIDTH = .25
bins = np.concatenate((
np.arange(0, 2. + HIST_BIN_WIDTH, HIST_BIN_WIDTH),
[math.ceil(np.nanmax(sfm_snow_depth_values))],
))
AREA_PLOT_OPTS = dict(
nrows=1, ncols=2, sharex=True, sharey=True, figsize=(10,8), dpi=750
)
COLORBAR_POSITION = dict(
right=0.90, rect=[0.91, 0.217, 0.02, 0.568],
)
SFM_UNDER = dict(color='indigo', alpha=0.4)
BLUE_CMAP.set_under(**SFM_UNDER)
imshow_opts = dict(
extent=sfm_snow_depth.extent,
norm=colors.BoundaryNorm(
boundaries=bins, ncolors=BLUE_CMAP.N,
),
cmap=BLUE_CMAP,
)
def area_plot():
fig, axes = plt.subplots(**AREA_PLOT_OPTS)
for ax in axes:
ax.imshow(
hillshade_snow_on.band_values(),
extent=sfm_snow_depth.extent,
cmap='gray', clim=(1, 255), alpha=0.25,
)
ax.tick_params(axis='both', direction='inout', size=5)
ax.set_xticklabels([])
ax.set_yticklabels([])
ax.set_facecolor('whitesmoke')
return fig, axes
fig, (ax1, ax2) = area_plot()
ax1.imshow(
sfm_snow_depth_values,
**imshow_opts
)
ax1.annotate('a)', xy=(335600, 4306300), fontsize=14)
ax1.add_artist(mpatches.Circle((323000, 4319750), 250, **SFM_UNDER))
ax1.annotate('No SfM snow depth', xy=(323400, 4319950), fontsize=LABEL_SIZE)
ax1.add_artist(
ScaleBar(1.0, location='lower left', pad=0.5, scale_loc='top', box_color='none')
)
ax1.annotate(
'N', size=LABEL_SIZE,
xy=(325500, 4320050), xytext=(325500, 4322000),
ha="center", va="center",
arrowprops=dict(arrowstyle="wedge,tail_width=1.25", facecolor='black')
)
im_data = ax2.imshow(
aso_snow_depth_values,
**imshow_opts,
)
ax2.annotate('b)', xy=(335600, 4306300), fontsize=14)
ax2.add_patch(
Rectangle((327200, 4320470 - 13879), 1000, 1000, ec='darkred', ls='--', fill=False),
)
ax2.legend(handles=[
mlines.Line2D([0], [0], label='Figure 5', color='darkred', ls='--'),
], loc='lower left', facecolor='none', edgecolor='none')
matplotlib.rcParams["ytick.major.pad"] = 2
PlotBase.insert_colorbar(
ax2,
im_data,
SNOW_DEPTH_LABEL,
ticks=[0, .5, 1, 1.5, 2],
labelpad=12,
**COLORBAR_POSITION
)
del im_data
```
## Zoom comparison
```
rgb_zoom_tif = plt.imread((SNOW_DEPTH_DIR / 'ERW_zoom/ERW_zoom.tif').as_posix())
casi_zoom_values = RasterFile(
SNOW_DEPTH_DIR / '1m/20180524_ASO_CASI_ERW_basin_1m_zoom.vrt',
band_number=1
).band_values()
matplotlib.rcParams['axes.titlesize'] = LABEL_SIZE
matplotlib.rcParams['axes.titlepad'] = 4
matplotlib.rcParams['axes.labelsize'] = LABEL_SIZE + 2
matplotlib.rcParams['axes.labelpad'] = 0
matplotlib.rcParams['xtick.labelsize'] = 0
matplotlib.rcParams['ytick.labelsize'] = 0
fig = plt.figure(dpi=500, figsize=(9, 4.7))
grid_spec = fig.add_gridspec(
nrows=2, ncols=2, wspace=0.11, hspace=0.12
)
for cell in range(3):
cell_grid = grid_spec[cell].subgridspec(1, 2, wspace=0.05)
ax_1_1 = fig.add_subplot(cell_grid[0, 0])
ax_1_2 = fig.add_subplot(cell_grid[0, 1:])
if cell == 0:
resolution = 1
elif cell == 1:
resolution = 3
elif cell == 2:
resolution = 50
aso_snow_depth_zoom = RasterFile(
SNOW_DEPTH_DIR / f'{resolution}m/20180524_ASO_snow_depth_{resolution}m_zoom.vrt',
band_number=1
)
aso_snow_depth_zoom_values = aso_snow_depth_zoom.band_values()
aso_snow_depth_zoom_values = np.ma.masked_where(
aso_snow_depth_zoom_values <= 0.0,
aso_snow_depth_zoom_values,
copy=False
)
sfm_snow_depth_zoom = RasterFile(
SNOW_DEPTH_DIR / f'{resolution}m/20180524_Agisoft_snow_depth_{resolution}m_zoom.vrt',
band_number=1
)
sfm_snow_depth_zoom_values = np.ma.masked_where(
aso_snow_depth_zoom_values.mask,
sfm_snow_depth_zoom.band_values(),
copy=False
)
imshow_opts['extent'] = sfm_snow_depth_zoom.extent
ax_1_1.imshow(
sfm_snow_depth_zoom_values,
**imshow_opts
)
ax_1_1.set_title('SfM')
ax_1_1.set_ylabel(f' {resolution}m Resolution')
ax_1_2.imshow(
aso_snow_depth_zoom_values,
**imshow_opts
)
ax_1_2.set_title('ASO')
cell_4 = grid_spec[3].subgridspec(1, 2)
ax_4_1 = fig.add_subplot(cell_4[0, 0])
ax_4_2 = fig.add_subplot(cell_4[0, 1:])
ax_4_1.imshow(
casi_zoom_values,
extent=sfm_snow_depth_zoom.extent,
cmap=colors.ListedColormap(CASI_COLORS),
alpha=0.8,
)
ax_4_1.set_title('Classification')
ax_4_2.imshow(
rgb_zoom_tif,
extent=sfm_snow_depth_zoom.extent,
)
ax_4_2.set_title('Orthomosaic')
for ax in fig.get_axes():
ax.tick_params(axis='both', direction='inout', size=5)
ax.set_xticklabels([])
ax.set_yticklabels([])
```
## CASI Classification
```
classifier_data = load_classifier_data(aso_snow_depth_values.mask)
sd_difference_values = sfm_snow_depth_values - aso_snow_depth_values
np.ma.masked_where(
sfm_snow_depth_values <= 0.0,
sd_difference_values,
copy=False
)
classification_plot = np.ma.masked_where(
sfm_snow_depth_values <= 0.0,
classifier_data,
).astype(np.int8);
bins = np.concatenate((
[math.floor(np.nanmin(sd_difference_values))],
np.arange(-2., 2. + HIST_BIN_WIDTH, HIST_BIN_WIDTH),
[math.ceil(np.nanmax(sd_difference_values))],
))
imshow_opts = dict(
extent=sfm_snow_depth.extent,
norm=colors.BoundaryNorm(
boundaries=bins, ncolors=RED_BLUE_CMAP.N,
),
cmap=RED_BLUE_CMAP,
)
fig, (ax1, ax2) = area_plot()
fig.subplots_adjust(wspace=.15)
im_data = ax1.imshow(
sd_difference_values,
**imshow_opts
)
ax1.set_title("Snow Depth Differences")
PlotBase.insert_colorbar(ax1, im_data, 'Snow Depth Difference (m)')
im_data = ax2.imshow(
classification_plot,
extent=sfm_snow_depth.extent,
cmap=colors.ListedColormap(CASI_COLORS),
alpha=0.8,
)
ax2.set_title("Snow Depth Differences - Classification")
PlotBase.insert_colorbar(ax2, im_data, 'Classification');
dem_values = load_reference_dem(aso_snow_depth_values.mask)
high_elevation = np.ma.masked_where(
dem_values <= 3500,
sd_difference_values,
)
low_elevation = np.ma.masked_where(
dem_values > 3500,
sd_difference_values,
)
fig, (ax1, ax2) = area_plot()
ax1.imshow(
high_elevation,
**imshow_opts
)
ax1.set_title("Snow Depth Differences - High Elevation >= 3500m")
im_data = ax2.imshow(
low_elevation,
**imshow_opts,
)
ax2.set_title("Snow Depth Differences - Low Elevation < 3500")
PlotBase.insert_colorbar(ax2, im_data, 'Snow Depth Difference (m)', **COLORBAR_POSITION);
high_elevation = np.ma.masked_where(
np.ma.masked_outside(dem_values, 3100, 3500).mask,
sd_difference_values,
)
low_elevation = np.ma.masked_where(
np.ma.masked_outside(dem_values, 3800, 3900).mask,
sd_difference_values,
)
fig, (ax1, ax2) = area_plot()
ax1.imshow(
high_elevation,
**imshow_opts
)
ax1.set_title("Snow Depth Differences - 3100m < Elevation < 3200m")
im_data = ax2.imshow(
low_elevation,
**imshow_opts,
)
ax2.set_title("Snow Depth Differences - 3800 < Elevation < 3900")
PlotBase.insert_colorbar(ax2, im_data, r'$\Delta$ Snow Depth (m)', **COLORBAR_POSITION);
fig, (ax1, ax2) = area_plot()
ax1.imshow(
sd_difference_values,
**imshow_opts
)
ax1.set_title("Snow Depth Differences")
im_data = ax2.imshow(
sfm_snow_free_values - dem_values,
**imshow_opts,
)
ax2.set_title("Snow Free - Reference DEM")
PlotBase.insert_colorbar(ax2, im_data, **COLORBAR_POSITION);
data = [
{
'data': sfm_snow_free_values - dem_values,
'label': 'SfM snow free - DEM',
'color': 'dodgerblue',
},
]
with plot_histogram(data, (-6, 6), figsize=(10, 8)) as ax:
ax.set_title('Elevation Differences (SFM snow free - Reference DEM)');
```
|
github_jupyter
|
from source_files import SNOW_DEPTH_DIR, CASI_COLORS
from plot_helpers import *
from raster_compare.plots import PlotBase
from raster_compare.base import RasterFile
import math
from matplotlib.patches import Rectangle
from matplotlib.collections import PatchCollection
aso_snow_depth = RasterFile(
SNOW_DEPTH_DIR / '1m/20180524_ASO_snow_depth_1m.tif',
band_number=1
)
aso_snow_depth_values = aso_snow_depth.band_values()
np.ma.masked_where(
aso_snow_depth_values <= 0.0,
aso_snow_depth_values,
copy=False
)
sfm_snow_depth = RasterFile(
SNOW_DEPTH_DIR / '1m/20180524_Agisoft_snow_depth_1m.tif',
band_number=1
)
assert aso_snow_depth.geo_transform == sfm_snow_depth.geo_transform
sfm_snow_depth_values = sfm_snow_depth.band_values()
np.ma.masked_where(
aso_snow_depth_values.mask,
sfm_snow_depth_values,
copy=False
)
HS_SNOW_ON = SNOW_DEPTH_DIR / 'hillshade/20180524_Lidar_hs_ERW_basin_3m.tif'
hillshade_snow_on = RasterFile(HS_SNOW_ON, band_number=1);
HIST_BIN_WIDTH = .25
bins = np.concatenate((
np.arange(0, 2. + HIST_BIN_WIDTH, HIST_BIN_WIDTH),
[math.ceil(np.nanmax(sfm_snow_depth_values))],
))
AREA_PLOT_OPTS = dict(
nrows=1, ncols=2, sharex=True, sharey=True, figsize=(10,8), dpi=750
)
COLORBAR_POSITION = dict(
right=0.90, rect=[0.91, 0.217, 0.02, 0.568],
)
SFM_UNDER = dict(color='indigo', alpha=0.4)
BLUE_CMAP.set_under(**SFM_UNDER)
imshow_opts = dict(
extent=sfm_snow_depth.extent,
norm=colors.BoundaryNorm(
boundaries=bins, ncolors=BLUE_CMAP.N,
),
cmap=BLUE_CMAP,
)
def area_plot():
fig, axes = plt.subplots(**AREA_PLOT_OPTS)
for ax in axes:
ax.imshow(
hillshade_snow_on.band_values(),
extent=sfm_snow_depth.extent,
cmap='gray', clim=(1, 255), alpha=0.25,
)
ax.tick_params(axis='both', direction='inout', size=5)
ax.set_xticklabels([])
ax.set_yticklabels([])
ax.set_facecolor('whitesmoke')
return fig, axes
fig, (ax1, ax2) = area_plot()
ax1.imshow(
sfm_snow_depth_values,
**imshow_opts
)
ax1.annotate('a)', xy=(335600, 4306300), fontsize=14)
ax1.add_artist(mpatches.Circle((323000, 4319750), 250, **SFM_UNDER))
ax1.annotate('No SfM snow depth', xy=(323400, 4319950), fontsize=LABEL_SIZE)
ax1.add_artist(
ScaleBar(1.0, location='lower left', pad=0.5, scale_loc='top', box_color='none')
)
ax1.annotate(
'N', size=LABEL_SIZE,
xy=(325500, 4320050), xytext=(325500, 4322000),
ha="center", va="center",
arrowprops=dict(arrowstyle="wedge,tail_width=1.25", facecolor='black')
)
im_data = ax2.imshow(
aso_snow_depth_values,
**imshow_opts,
)
ax2.annotate('b)', xy=(335600, 4306300), fontsize=14)
ax2.add_patch(
Rectangle((327200, 4320470 - 13879), 1000, 1000, ec='darkred', ls='--', fill=False),
)
ax2.legend(handles=[
mlines.Line2D([0], [0], label='Figure 5', color='darkred', ls='--'),
], loc='lower left', facecolor='none', edgecolor='none')
matplotlib.rcParams["ytick.major.pad"] = 2
PlotBase.insert_colorbar(
ax2,
im_data,
SNOW_DEPTH_LABEL,
ticks=[0, .5, 1, 1.5, 2],
labelpad=12,
**COLORBAR_POSITION
)
del im_data
rgb_zoom_tif = plt.imread((SNOW_DEPTH_DIR / 'ERW_zoom/ERW_zoom.tif').as_posix())
casi_zoom_values = RasterFile(
SNOW_DEPTH_DIR / '1m/20180524_ASO_CASI_ERW_basin_1m_zoom.vrt',
band_number=1
).band_values()
matplotlib.rcParams['axes.titlesize'] = LABEL_SIZE
matplotlib.rcParams['axes.titlepad'] = 4
matplotlib.rcParams['axes.labelsize'] = LABEL_SIZE + 2
matplotlib.rcParams['axes.labelpad'] = 0
matplotlib.rcParams['xtick.labelsize'] = 0
matplotlib.rcParams['ytick.labelsize'] = 0
fig = plt.figure(dpi=500, figsize=(9, 4.7))
grid_spec = fig.add_gridspec(
nrows=2, ncols=2, wspace=0.11, hspace=0.12
)
for cell in range(3):
cell_grid = grid_spec[cell].subgridspec(1, 2, wspace=0.05)
ax_1_1 = fig.add_subplot(cell_grid[0, 0])
ax_1_2 = fig.add_subplot(cell_grid[0, 1:])
if cell == 0:
resolution = 1
elif cell == 1:
resolution = 3
elif cell == 2:
resolution = 50
aso_snow_depth_zoom = RasterFile(
SNOW_DEPTH_DIR / f'{resolution}m/20180524_ASO_snow_depth_{resolution}m_zoom.vrt',
band_number=1
)
aso_snow_depth_zoom_values = aso_snow_depth_zoom.band_values()
aso_snow_depth_zoom_values = np.ma.masked_where(
aso_snow_depth_zoom_values <= 0.0,
aso_snow_depth_zoom_values,
copy=False
)
sfm_snow_depth_zoom = RasterFile(
SNOW_DEPTH_DIR / f'{resolution}m/20180524_Agisoft_snow_depth_{resolution}m_zoom.vrt',
band_number=1
)
sfm_snow_depth_zoom_values = np.ma.masked_where(
aso_snow_depth_zoom_values.mask,
sfm_snow_depth_zoom.band_values(),
copy=False
)
imshow_opts['extent'] = sfm_snow_depth_zoom.extent
ax_1_1.imshow(
sfm_snow_depth_zoom_values,
**imshow_opts
)
ax_1_1.set_title('SfM')
ax_1_1.set_ylabel(f' {resolution}m Resolution')
ax_1_2.imshow(
aso_snow_depth_zoom_values,
**imshow_opts
)
ax_1_2.set_title('ASO')
cell_4 = grid_spec[3].subgridspec(1, 2)
ax_4_1 = fig.add_subplot(cell_4[0, 0])
ax_4_2 = fig.add_subplot(cell_4[0, 1:])
ax_4_1.imshow(
casi_zoom_values,
extent=sfm_snow_depth_zoom.extent,
cmap=colors.ListedColormap(CASI_COLORS),
alpha=0.8,
)
ax_4_1.set_title('Classification')
ax_4_2.imshow(
rgb_zoom_tif,
extent=sfm_snow_depth_zoom.extent,
)
ax_4_2.set_title('Orthomosaic')
for ax in fig.get_axes():
ax.tick_params(axis='both', direction='inout', size=5)
ax.set_xticklabels([])
ax.set_yticklabels([])
classifier_data = load_classifier_data(aso_snow_depth_values.mask)
sd_difference_values = sfm_snow_depth_values - aso_snow_depth_values
np.ma.masked_where(
sfm_snow_depth_values <= 0.0,
sd_difference_values,
copy=False
)
classification_plot = np.ma.masked_where(
sfm_snow_depth_values <= 0.0,
classifier_data,
).astype(np.int8);
bins = np.concatenate((
[math.floor(np.nanmin(sd_difference_values))],
np.arange(-2., 2. + HIST_BIN_WIDTH, HIST_BIN_WIDTH),
[math.ceil(np.nanmax(sd_difference_values))],
))
imshow_opts = dict(
extent=sfm_snow_depth.extent,
norm=colors.BoundaryNorm(
boundaries=bins, ncolors=RED_BLUE_CMAP.N,
),
cmap=RED_BLUE_CMAP,
)
fig, (ax1, ax2) = area_plot()
fig.subplots_adjust(wspace=.15)
im_data = ax1.imshow(
sd_difference_values,
**imshow_opts
)
ax1.set_title("Snow Depth Differences")
PlotBase.insert_colorbar(ax1, im_data, 'Snow Depth Difference (m)')
im_data = ax2.imshow(
classification_plot,
extent=sfm_snow_depth.extent,
cmap=colors.ListedColormap(CASI_COLORS),
alpha=0.8,
)
ax2.set_title("Snow Depth Differences - Classification")
PlotBase.insert_colorbar(ax2, im_data, 'Classification');
dem_values = load_reference_dem(aso_snow_depth_values.mask)
high_elevation = np.ma.masked_where(
dem_values <= 3500,
sd_difference_values,
)
low_elevation = np.ma.masked_where(
dem_values > 3500,
sd_difference_values,
)
fig, (ax1, ax2) = area_plot()
ax1.imshow(
high_elevation,
**imshow_opts
)
ax1.set_title("Snow Depth Differences - High Elevation >= 3500m")
im_data = ax2.imshow(
low_elevation,
**imshow_opts,
)
ax2.set_title("Snow Depth Differences - Low Elevation < 3500")
PlotBase.insert_colorbar(ax2, im_data, 'Snow Depth Difference (m)', **COLORBAR_POSITION);
high_elevation = np.ma.masked_where(
np.ma.masked_outside(dem_values, 3100, 3500).mask,
sd_difference_values,
)
low_elevation = np.ma.masked_where(
np.ma.masked_outside(dem_values, 3800, 3900).mask,
sd_difference_values,
)
fig, (ax1, ax2) = area_plot()
ax1.imshow(
high_elevation,
**imshow_opts
)
ax1.set_title("Snow Depth Differences - 3100m < Elevation < 3200m")
im_data = ax2.imshow(
low_elevation,
**imshow_opts,
)
ax2.set_title("Snow Depth Differences - 3800 < Elevation < 3900")
PlotBase.insert_colorbar(ax2, im_data, r'$\Delta$ Snow Depth (m)', **COLORBAR_POSITION);
fig, (ax1, ax2) = area_plot()
ax1.imshow(
sd_difference_values,
**imshow_opts
)
ax1.set_title("Snow Depth Differences")
im_data = ax2.imshow(
sfm_snow_free_values - dem_values,
**imshow_opts,
)
ax2.set_title("Snow Free - Reference DEM")
PlotBase.insert_colorbar(ax2, im_data, **COLORBAR_POSITION);
data = [
{
'data': sfm_snow_free_values - dem_values,
'label': 'SfM snow free - DEM',
'color': 'dodgerblue',
},
]
with plot_histogram(data, (-6, 6), figsize=(10, 8)) as ax:
ax.set_title('Elevation Differences (SFM snow free - Reference DEM)');

### ODPi Egeria Hands-On Lab
# Welcome to the Understanding Cohort Configuration Lab
## Introduction
ODPi Egeria is an open source project that provides open standards and implementation libraries to connect tools,
catalogs and platforms together so they can share information (called metadata) about data and the technology that supports it.
The ODPi Egeria repository services provide APIs for understanding the make up of the cohorts that an OMAG Server
is connected to.
This hands-on lab steps through each of the repository services operations for understanding a cohort, providing an explanation and the code to call each operation.
## The Scenario
Gary Geeke is the IT Infrastructure leader at Coco Pharmaceuticals. He has set up a number of OMAG Servers and
is validating they are operating correctly. Gary's userId is `garygeeke`.

In the **[Server Configuration Lab](../egeria-server-config.ipynb)**, Gary configured servers for the OMAG Server Platforms shown in Figure 1:

> **Figure 1:** Coco Pharmaceuticals' OMAG Server Platforms
The following command ensures these platforms, and the servers that run on them, are started. Click on the play triangle at the top of the page to run it.
```
%run ../common/environment-check.ipynb
```
----
Figure 2 shows which metadata servers belong to each cohort.

> **Figure 2:** Membership of Coco Pharmaceuticals' cohorts
The open metadata repository cohort protocols are peer-to-peer. This means that each member of
the cohort maintains its own view of the other members of the cohort. This information is
stored in the **cohort registry store**. All of the queries that follow are being made to the
cohort registry stores of Coco Pharmaceuticals' metadata servers.
## Querying a server's cohorts
The code below queries each server's cohort registry store and retrieves the names of the cohorts that it is connected to.
```
print (" ")
print ('Cohort(s) for cocoMDS1 are [%s]' % ', '.join(map(str, queryServerCohorts(cocoMDS1Name, cocoMDS1PlatformName, cocoMDS1PlatformURL))))
print ('Cohort(s) for cocoMDS2 are [%s]' % ', '.join(map(str, queryServerCohorts(cocoMDS2Name, cocoMDS2PlatformName, cocoMDS2PlatformURL))))
print ('Cohort(s) for cocoMDS3 are [%s]' % ', '.join(map(str, queryServerCohorts(cocoMDS3Name, cocoMDS3PlatformName, cocoMDS3PlatformURL))))
print ('Cohort(s) for cocoMDS4 are [%s]' % ', '.join(map(str, queryServerCohorts(cocoMDS4Name, cocoMDS4PlatformName, cocoMDS4PlatformURL))))
print ('Cohort(s) for cocoMDS5 are [%s]' % ', '.join(map(str, queryServerCohorts(cocoMDS5Name, cocoMDS5PlatformName, cocoMDS5PlatformURL))))
print ('Cohort(s) for cocoMDS6 are [%s]' % ', '.join(map(str, queryServerCohorts(cocoMDS6Name, cocoMDS6PlatformName, cocoMDS6PlatformURL))))
print ('Cohort(s) for cocoMDSx are [%s]' % ', '.join(map(str, queryServerCohorts(cocoMDSxName, cocoMDSxPlatformName, cocoMDSxPlatformURL))))
print (" ")
```
----
A quick check shows that the results of the query match the diagram in Figure 2.
## Querying local registration
The local registration describes the registration information that the metadata server
has shared with the cohorts it has connected to. The command below retrieves the
local registration information. Here we are looking at cocoMDS2.
```
printLocalRegistration(cocoMDS2Name, cocoMDS2PlatformName, cocoMDS2PlatformURL)
```
----
If we add in the name of the cohort, it is possible to see the time that it first registered
with that cohort.
```
printLocalRegistrationForCohort(cocoMDS2Name, cocoCohort, cocoMDS2PlatformName, cocoMDS2PlatformURL)
print(" ")
printLocalRegistrationForCohort(cocoMDS2Name, devCohort, cocoMDS2PlatformName, cocoMDS2PlatformURL)
print(" ")
printLocalRegistrationForCohort(cocoMDS2Name, iotCohort, cocoMDS2PlatformName, cocoMDS2PlatformURL)
```
----
The times of registration are pretty close in this example because all of the cohorts were in the initial configuration for this server. If the registration time shows as blank, it means that the server has not registered with the cohort.
## Querying remote members
Finally each cohort registry store lists all of the remote members of the cohort that a server has exchanged
registration information with. These are the remote members from cocoMDS2's perspective.
```
print("Cohort " + cocoCohort + "...")
printRemoteRegistrations(cocoMDS2Name, cocoCohort, cocoMDS2PlatformName, cocoMDS2PlatformURL)
print(" ")
print("Cohort " + devCohort + "...")
printRemoteRegistrations(cocoMDS2Name, devCohort, cocoMDS2PlatformName, cocoMDS2PlatformURL)
print(" ")
print("Cohort " + iotCohort + "...")
printRemoteRegistrations(cocoMDS2Name, iotCohort, cocoMDS2PlatformName, cocoMDS2PlatformURL)
print(" ")
```
----
```
import gym
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import matplotlib.pyplot as plt
import seaborn as sns
# Matplotlib
sns.set()
W = 8.27
plt.rcParams.update({
'figure.figsize': (W, W/(4/3)),
'figure.dpi': 150,
'font.size' : 11,
'axes.labelsize': 11,
'legend.fontsize': 11,
'font.family': 'lmodern',
'text.usetex': True,
'text.latex.preamble': (
r'\usepackage{lmodern}'
r'\usepackage{siunitx}'
r'\usepackage{physics}'
)
})
%config InlineBackend.figure_format = 'retina'
env = gym.make('CartPole-v0') # Simple environment
env.action_space
# Random Agent
rewards_test = np.full((1000, 200), np.nan)
for episode in range(1000):
obs = env.reset()
a = env.action_space.sample()
for t in range(200):
obs_, r, d, info = env.step(a)
rewards_test[episode, t] = r
if d: break
a, obs = env.action_space.sample(), obs_
print(np.mean(np.nansum(rewards_test, 1)))
# Alternating (trivial) Agent
rewards_test = np.full((1000, 200), np.nan)
for episode in range(1000):
obs = env.reset()
a = 1
for t in range(200):
obs_, r, d, info = env.step(a)
rewards_test[episode, t] = r
if d: break
a, obs = 1 - a, obs_
print(np.mean(np.nansum(rewards_test, 1)))
# Hyperparameters
epsilon = 0.1 # No GLIE
alpha = 0.1 # Learning rate
beta = 0 # L2-regularization
discount = 0.5
# Linear Agent
w = np.random.randn(8) * 1e-4 # Weights
x = lambda s, a: np.r_[(2*a - 1)*s, (2*a - 1)*s**2] # Feature vector
q = lambda s, a: w @ x(s, a) # Approximate Q function
pi = lambda s: max(range(env.action_space.n), key=lambda a: q(s, a)) # Greedy policy wrt. q(s, a)
pi_ = lambda s: pi(s) if np.random.rand() > epsilon else env.action_space.sample() # Epsilon-greedy policy
# Learning (Semi-gradient SARSA)
rewards_learn = np.full((1000, 200), np.nan)
for episode in range(1000):
obs = env.reset()
a = pi_(obs)
for t in range(200):
obs_, r, d, info = env.step(a)
rewards_learn[episode, t] = r
if d: break
a_ = pi_(obs_)
w = (1 - alpha*beta)*w + alpha*np.clip((r + discount*q(obs_, a_) - q(obs, a))*x(obs, a), -1000, 1000)
a, obs = a_, obs_
# Testing
rewards_test = np.full((100, 200), np.nan)
for episode in range(100):
obs = env.reset()
a = pi(obs)
for t in range(200):
obs_, r, d, info = env.step(a)
rewards_test[episode, t] = r
if d: break
a, obs = pi(obs_), obs_
print(np.mean(np.nansum(rewards_test, 1)))
plt.plot(np.nansum(rewards_learn, 1))
plt.plot(np.nansum(rewards_test, 1))
# Random Agent: Find effective observation space for tilings
observations = np.full((1000, 200, 4), np.nan)
for episode in range(1000):
obs = env.reset()
a = env.action_space.sample()
for t in range(200):
observations[episode, t] = obs
obs_, r, d, info = env.step(a)
if d: break
a, obs = env.action_space.sample(), obs_
plt.hist([observations[..., i].flatten() for i in range(4)], 50)
plt.show()
# Hyperparameters
epsilon = 0.1 # No GLIE
alpha = 0.1 # Learning rate
beta = 0 # L2-regularization
discount = 0.9
# State aggregation agent
w = np.zeros((2, 10, 10, 10, 10)) # Weights
def x(s, a):
"""Feature vector"""
x = np.zeros_like(w)
x[a,
np.digitize(s[0], np.linspace(-.1, .1, 9)),
np.digitize(s[1], np.linspace(-.5, .5, 9)),
np.digitize(s[2], np.linspace(-.1, .1, 9)),
np.digitize(s[3], np.linspace(-.5, .5, 9))] = 1
return x
q = lambda s, a: np.sum(w * x(s, a)) # Approximate Q function
pi = lambda s: max(range(env.action_space.n), key=lambda a: q(s, a)) # Greedy policy wrt. q(s, a)
pi_ = lambda s: pi(s) if np.random.rand() > epsilon else env.action_space.sample() # Epsilon-greedy policy
# Learning (Semi-gradient SARSA)
rewards_learn = np.full((1000, 200), np.nan)
for episode in range(1000):
obs = env.reset()
a = pi_(obs)
for t in range(200):
obs_, r, d, info = env.step(a)
rewards_learn[episode, t] = r
if d: break
a_ = pi_(obs_)
w = (1 - alpha*beta)*w + alpha*np.clip((r + discount*q(obs_, a_) - q(obs, a))*x(obs, a), -1000, 1000)
a, obs = a_, obs_
# Testing
rewards_test = np.full((100, 200), np.nan)
for episode in range(100):
obs = env.reset()
a = pi(obs)
for t in range(200):
obs_, r, d, info = env.step(a)
rewards_test[episode, t] = r
if d: break
a, obs = pi(obs_), obs_
# Mean reward per episode under learnt policy
print(np.mean(np.nansum(rewards_test, 1)))
# Learning
plt.plot(np.nansum(rewards_learn, 1))
plt.plot(np.nansum(rewards_test, 1))
# Analyze learnt value function
for i in range(10):
plt.plot(np.arange(10), w[0, np.arange(10), i, i, i])
# Hyperparameters
epsilon = 0.1 # No GLIE
alpha = 0.3 # Learning rate
beta = 0 # L2-regularization
discount = 0.9
# DQN Agent
class DNN(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(4, 32)
self.fc2 = nn.Linear(32, 32)
self.fc3 = nn.Linear(32, 32)
self.fc4 = nn.Linear(32, 2)
def forward(self, x):
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
x = F.relu(x)
x = self.fc3(x)
x = F.relu(x)
x = self.fc4(x)
return x
model = DNN()
optim = torch.optim.Adam(model.parameters())
criterion = nn.MSELoss()
q = lambda s, a: model(torch.as_tensor(s, dtype=torch.float32))[a] # Approximate Q function
pi = lambda s: max(range(env.action_space.n), key=lambda a: q(s, a)) # Greedy policy wrt. q(s, a)
pi_ = lambda s: pi(s) if np.random.rand() > epsilon else env.action_space.sample() # Epsilon-greedy policy
# Learning (Semi-gradient SARSA)
rewards_learn = np.full((1000, 200), np.nan)
for episode in range(1000):
obs = env.reset()
a = pi_(obs)
for t in range(200):
obs_, r, d, info = env.step(a)
rewards_learn[episode, t] = r
if d: break
        a_ = pi_(obs_)
        # Semi-gradient SARSA target: r + discount * Q(s', a'), held fixed (no gradient)
        with torch.no_grad():
            target = r + discount*q(obs_, a_)
        optim.zero_grad()
        loss = criterion(q(obs, a), target)
        loss.backward()
        optim.step()
        a, obs = a_, obs_
# Testing
rewards_test = np.full((100, 200), np.nan)
for episode in range(100):
obs = env.reset()
a = pi(obs)
for t in range(200):
obs_, r, d, info = env.step(a)
rewards_test[episode, t] = r
if d: break
a, obs = pi(obs_), obs_
# Mean reward per episode under learnt policy
print(np.mean(np.nansum(rewards_test, 1)))
# Learning
plt.plot(np.nansum(rewards_learn, 1))
```
# DSCI 525 - Web and Cloud Computing
Milestone 2: Your team is planning to migrate to the cloud. AWS gave $400 ($100 each) to your team to support this. As part of this initiative, your team needs to set up a server in the cloud, a collaborative environment for your team, and later move your data to the cloud. After that, your team can wrangle the data in preparation for machine learning.
## Milestone 2 checklist
You will have mainly 2 tasks. Here is the checklist...
- To set up a collaborative environment
    - Setup your EC2 instance with JupyterHub.
    - Install all necessary things needed in your UNIX server (amazon ec2 instance).
    - Set up your S3 bucket.
    - Move the data that you wrangled in your last milestone to s3.
    - To move data from s3.
- Wrangle the data in preparation for machine learning
    - Get the data from S3 in your notebook and make data ready for machine learning.
**Keep in mind:**
- _All services you use are in region us-west-2._
- _Don't store anything in these servers or storage that represents your identity as a student (like your student ID number)._
- _Use only default VPC and subnet._
- _No IP addresses are visible when you provide the screenshot._
- _You do proper budgeting so that you don't run out of credits._
- _We want one single notebook for grading, and it's up to your discretion on how you do it. ***So only one person in your group needs to spin up a big instance and a ```t2.xlarge``` is of decent size.***_
- _Please stop the instance when not in use. This can save you some bucks, but it's again up to you and how you budget your money. Maybe stop it if you or your team won't use it for the next 5 hours?_
- _Your AWS lab will shut down after 3 hours 30 min. When you start it again, your AWS credentials (***access key***, ***secret***, and ***session token***) will change, and you want to update your credentials file with the new ones._
- _Say something went wrong and you want to spin up another EC2 instance, then make sure you terminate the previous one._
- _We will be choosing the storage to be ```Delete on Termination```, which means that stored data in your instance will be lost upon termination. Make sure you save any data to S3 and download the notebooks to your laptop so that next time you have your jupyterHub in a different instance, you can upload your notebook there._
_***Outside of Milestone:*** If you are working as an individual just to practice setting up EC2 instances, make sure you select ```t2.large``` instance (not anything bigger than that as it can cost you money). I strongly recommend you spin up your own instance and experiment with the s3 bucket in doing something (there are many things that we learned and practical work from additional instructions and video series) to get comfortable with AWS. But we won't be looking at it for a grading purpose._
***NOTE:*** Everything you want for this notebook is discussed in lecture 3, lecture 4, and setup instructions.
### 1. Setup your EC2 instance
rubric={correctness:20}
#### Please attach this screen shots from your group for grading.
https://github.com/UBC-MDS/DSCI_525_Group_3/blob/main/notebooks/images/ec2png.png
### 2. Setup your JupyterHub
rubric={correctness:20}
#### Please attach this screen shots from your group for grading
I want to see all the group members here in this screenshot https://github.com/UBC-MDS/DSCI_525_Group_3/blob/main/notebooks/images/JupyterHub.png
### 3. Setup the server
rubric={correctness:20}
3.1) Add your team members to EC2 instance.
3.2) Setup a common data folder to download data, and this folder should be accessible by all users in the JupyterHub.
3.3)(***OPTIONAL***) Setup a sharing notebook environment.
3.4) Install and configure AWS CLI.
#### Please attach this screen shots from your group for grading
Make sure you mask the IP address; refer [here](https://www.anysoftwaretools.com/blur-part-picture-mac/) for how to do that.
https://github.com/UBC-MDS/DSCI_525_Group_3/blob/main/notebooks/images/my_shared_data_folder.png
### 4. Get the data what we wrangled in our first milestone.
You have to install the packages that are needed. Refer to this TLJH [document](https://tljh.jupyter.org/en/latest/howto/env/user-environment.html); see the ```pip``` section.
Don't forget to add option -E. This way, all packages that you install will be available to other users in your JupyterHub.
These packages you must install and install other packages needed for your wrangling.
```
sudo -E pip install pandas
sudo -E pip install pyarrow
sudo -E pip install s3fs
```
In the last milestone, we looked at getting the data transferred from Python to R and found different solutions. Hence, I uploaded the data in parquet format, which we can use moving forward.
```
import re
import os
import glob
import zipfile
import requests
from urllib.request import urlretrieve
import json
import pandas as pd
```
Remember that here we use the folder we created in Step 3.2, since we made it available to all the users in the group.
```
# Necessary metadata
article_id = 14226968 # this is the unique identifier of the article on figshare
url = f"https://api.figshare.com/v2/articles/{article_id}"
headers = {"Content-Type": "application/json"}
output_directory = "/srv/data/my_shared_data_folder/"
response = requests.request("GET", url, headers=headers)
data = json.loads(response.text) # this contains all the articles data, feel free to check it out
files = data["files"] # this is just the data about the files, which is what we want
files
files_to_dl = ["combined_model_data_parti.parquet.zip"] ## Please download the partitioned
for file in files:
if file["name"] in files_to_dl:
os.makedirs(output_directory, exist_ok=True)
urlretrieve(file["download_url"], output_directory + file["name"])
with zipfile.ZipFile(os.path.join(output_directory, "combined_model_data_parti.parquet.zip"), 'r') as f:
f.extractall(output_directory)
```
### 5. Setup your S3 bucket and move data
rubric={correctness:20}
5.1) Create a bucket. The name should be mds-s3-xxx; replace xxx with your group number.
5.2) Create your first folder called "output".
5.3) Move the "observed_daily_rainfall_SYD.csv" file from the Milestone1 data folder to your s3 bucket from your local computer.
5.4) Move the parquet file we downloaded in step 4 (combined_model_data_parti.parquet) to S3 using the CLI we installed in step 3.4.
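For this step, a command along the lines of `aws s3 cp /srv/data/my_shared_data_folder/combined_model_data_parti.parquet s3://mds-s3-group3/combined_model_data_parti.parquet --recursive` can be used (a sketch only; the bucket name `mds-s3-group3` and the shared data folder path come from the earlier steps and should be adjusted for your group).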
#### Please attach this screen shots from your group for grading
Make sure it has 3 objects.
https://github.com/UBC-MDS/DSCI_525_Group_3/blob/main/notebooks/images/mds_s3_group3_bucket.png
### 6. Wrangle the data in preparation for machine learning
rubric={correctness:20}
Our data currently covers all of NSW, but say that our client wants us to create a machine learning model to predict rainfall over Sydney only. There's a bit of wrangling that needs to be done for that:
1. We need to query our data for only the rows that contain information covering Sydney
2. We need to wrangle our data into a format suitable for training a machine learning model. That will require pivoting, resampling, grouping, etc.
To train an ML algorithm we need it to look like this:
||model-1_rainfall|model-2_rainfall|model-3_rainfall|...|observed_rainfall|
|---|---|---|---|---|---|
|0|0.12|0.43|0.35|...|0.31|
|1|1.22|0.91|1.68|...|1.34|
|2|0.68|0.29|0.41|...|0.57|
6.1) Get the data from s3 (```combined_model_data_parti.parquet``` and ```observed_daily_rainfall_SYD.csv```)
6.2) First query for Sydney data and then drop the lat and lon columns (we don't need them).
```
syd_lat = -33.86
syd_lon = 151.21
```
Expected shape ```(1150049, 2)```.
6.3) Save this processed file to s3 for later use:
Save it as a csv file ```ml_data_SYD.csv``` to ```s3://mds-s3-xxx/output/```.
Expected shape: ```(46020, 26)``` - this includes all the models as columns, plus an additional ```Observed``` column loaded from ```observed_daily_rainfall_SYD.csv``` on s3.
```
### Do all your coding here
# Paste your current AWS lab credentials here (they change every time the lab restarts)
credentials = {'key': "<AWS_ACCESS_KEY_ID>",
               'secret': "<AWS_SECRET_ACCESS_KEY>",
               'token': "<AWS_SESSION_TOKEN>"
               }
df = pd.read_parquet("s3://mds-s3-group3/combined_model_data_parti.parquet",
storage_options=credentials)
df.head()
df = df.query(
"lat_min <= -33.86 and lat_max >= -33.86 and lon_min <= 151.21 and lon_max >= 151.21"
)
df = df.drop(columns=["lat_min", "lat_max", "lon_min", "lon_max"])
df.shape
df.head()
df['time'] = pd.to_datetime(df["time"]).dt.date
df = df.set_index('time')
df.shape
df = df.pivot(columns="model", values="rain (mm/day)")
df.shape
df_obs = pd.read_csv("s3://mds-s3-group3/observed_daily_rainfall_SYD.csv",
storage_options=credentials,
parse_dates=["time"])
df_obs.head()
df_obs["time"] = df_obs["time"].dt.date
df_obs = df_obs.set_index("time")
df["observed_rainfall"] = df_obs["rain (mm/day)"]
df.head()
df.shape
df.to_csv("s3://mds-s3-group3/output/ml_data_SYD.csv",
storage_options = credentials)
```
This is how the final file format looks:
https://github.ubc.ca/mds-2021-22/DSCI_525_web-cloud-comp_students/blob/master/release/milestone2/image/finaloutput.png
Shape ```(46020,26 )```
(***OPTIONAL***) If you are interested in doing some benchmarking, see how much time it takes to read:
- the Parquet file from your local disk
- the Parquet file from s3
- the CSV file from s3
For that, upload the CSV file (```combined_model_data.csv```) to S3 and try to read it instead of the parquet.
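A minimal timing sketch for this comparison is given below; it assumes the CSV has been uploaded to the same bucket and that `credentials` and the shared data folder from the earlier steps are available.
```
import time
import pandas as pd

def time_read(label, read_fn):
    # Time a single read and report it
    start = time.time()
    read_fn()
    print(f"{label}: {time.time() - start:.1f} s")

time_read("Parquet (local)", lambda: pd.read_parquet(
    "/srv/data/my_shared_data_folder/combined_model_data_parti.parquet"))
time_read("Parquet (S3)", lambda: pd.read_parquet(
    "s3://mds-s3-group3/combined_model_data_parti.parquet", storage_options=credentials))
time_read("CSV (S3)", lambda: pd.read_csv(
    "s3://mds-s3-group3/combined_model_data.csv", storage_options=credentials))
```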
# Overview of pMuTT's Core Functionality
Originally written for Version 1.2.1
Last Updated for Version 1.2.13
## Topics Covered
- Using constants and converting units using the ``constants`` module
- Initializing ``StatMech`` objects by specifying all modes and by using ``presets`` dictionary
- Initializing empirical objects such as ``Nasa`` objects using a ``StatMech`` object or from a previously generated Nasa polynomial
- Initializing ``Reference`` and ``References`` objects to adjust DFT's reference to more traditional references
- Input (via Excel) and output ``Nasa`` polynomials to thermdat format
- Initializing ``Reaction`` objects from strings
## Useful Links:
- Github: https://github.com/VlachosGroup/pMuTT
- Documentation: https://vlachosgroup.github.io/pMuTT/index.html
- Examples: https://vlachosgroup.github.io/pMuTT/examples.html
## Constants
pMuTT has a wide variety of constants to increase readability of the code. See [Constants page][0] in the documentation for supported units.
[0]: https://vlachosgroup.github.io/pmutt/constants.html#constants
```
from pmutt import constants as c
1.987
print(c.R('kJ/mol/K'))
print('Some constants')
print('R (J/mol/K) = {}'.format(c.R('J/mol/K')))
print("Avogadro's number = {}\n".format(c.Na))
print('Unit conversions')
print('5 kJ/mol --> {} eV/molecule'.format(c.convert_unit(num=5., initial='kJ/mol', final='eV/molecule')))
print('Frequency of 1000 Hz --> Wavenumber of {} 1/cm\n'.format(c.freq_to_wavenumber(1000.)))
print('See expected inputs, supported units of different constants')
help(c.R)
help(c.convert_unit)
```
## StatMech Objects
Molecules show translational, vibrational, rotational, electronic, and nuclear modes.
<img src="images/statmech_modes.jpg" width=800>
The [``StatMech``][0] object allows us to specify translational, vibrational, rotational, electronic and nuclear modes independently, which gives flexibility in what behavior you would like.
[0]: https://vlachosgroup.github.io/pmutt/statmech.html#pmutt.statmech.StatMech
For this example, we will use a butane molecule as an ideal gas:
- translations with no interaction between molecules
- harmonic vibrations
- rigid rotor rotations
- ground state electronic structure
- ground state nuclear structure
<img src="images/butane.png" width=300>
```
from ase.build import molecule
from pmutt.statmech import StatMech, trans, vib, rot, elec
butane_atoms = molecule('trans-butane')
'''Translational'''
butane_trans = trans.FreeTrans(n_degrees=3, atoms=butane_atoms)
'''Vibrational'''
butane_vib = vib.HarmonicVib(vib_wavenumbers=[3054.622862, 3047.573455, 3037.53448,
3030.21322, 3029.947329, 2995.758708,
2970.12166, 2968.142985, 2951.122942,
2871.560685, 1491.354921, 1456.480829,
1455.224163, 1429.084081, 1423.153673,
1364.456094, 1349.778994, 1321.137752,
1297.412109, 1276.969173, 1267.783512,
1150.401492, 1027.841298, 1018.203753,
945.310074, 929.15992, 911.661049,
808.685354, 730.986587, 475.287654,
339.164649, 264.682213, 244.584138,
219.956713, 115.923768, 35.56194])
'''Rotational'''
butane_rot = rot.RigidRotor(symmetrynumber=2, atoms=butane_atoms)
'''Electronic'''
butane_elec = elec.GroundStateElec(potentialenergy=-73.7051, spin=0)
'''StatMech Initialization'''
butane_statmech = StatMech(name='butane',
trans_model=butane_trans,
vib_model=butane_vib,
rot_model=butane_rot,
elec_model=butane_elec)
H_statmech = butane_statmech.get_H(T=298., units='kJ/mol')
S_statmech = butane_statmech.get_S(T=298., units='J/mol/K')
print('H_butane(T=298) = {:.1f} kJ/mol'.format(H_statmech))
print('S_butane(T=298) = {:.2f} J/mol/K'.format(S_statmech))
```
### Presets
The [``presets``][0] dictionary stores commonly used models to ease the initialization of [``StatMech``][1] objects. The same butane molecule from before can be initialized this way instead.
[0]: https://vlachosgroup.github.io/pmutt/statmech.html#presets
[1]: https://vlachosgroup.github.io/pmutt/statmech.html#pmutt.statmech.StatMech
```
from pprint import pprint
from ase.build import molecule
from pmutt.statmech import StatMech, presets
idealgas_defaults = presets['idealgas']
pprint(idealgas_defaults)
butane_preset = StatMech(name='butane',
atoms=molecule('trans-butane'),
vib_wavenumbers=[3054.622862, 3047.573455, 3037.53448,
3030.21322, 3029.947329, 2995.758708,
2970.12166, 2968.142985, 2951.122942,
2871.560685, 1491.354921, 1456.480829,
1455.224163, 1429.084081, 1423.153673,
1364.456094, 1349.778994, 1321.137752,
1297.412109, 1276.969173, 1267.783512,
1150.401492, 1027.841298, 1018.203753,
945.310074, 929.15992, 911.661049,
808.685354, 730.986587, 475.287654,
339.164649, 264.682213, 244.584138,
219.956713, 115.923768, 35.56194],
symmetrynumber=2,
potentialenergy=-73.7051,
spin=0,
**idealgas_defaults)
H_preset = butane_preset.get_H(T=298., units='kJ/mol')
S_preset = butane_preset.get_S(T=298., units='J/mol/K')
print('H_butane(T=298) = {:.1f} kJ/mol'.format(H_preset))
print('S_butane(T=298) = {:.2f} J/mol/K'.format(S_preset))
```
### Empty Modes
The [``EmptyMode``][0] is a special object that returns 1 for the partition function and 0 for all other thermodynamic properties. This is useful if you do not want any contribution from a mode.
[0]: https://vlachosgroup.github.io/pmutt/statmech.html#empty-mode
```
from pmutt.statmech import EmptyMode
empty = EmptyMode()
print('Some EmptyMode properties:')
print('q = {}'.format(empty.get_q()))
print('H/RT = {}'.format(empty.get_HoRT()))
print('S/R = {}'.format(empty.get_SoR()))
print('G/RT = {}'.format(empty.get_GoRT()))
```
## Empirical Objects
Empirical forms are polynomials that are fit to experimental or *ab-initio* data. These forms are useful because they can be evaluated relatively quickly, so downstream software is not hindered by the evaluation of thermochemical properties.
However, note that ``StatMech`` objects can calculate more properties than the currently supported empirical objects.
### NASA polynomial
The [``NASA``][0] format is used for our microkinetic modeling software, Chemkin.
#### Initializing Nasa from StatMech
Below, we initialize the NASA polynomial from the ``StatMech`` object we created earlier.
[0]: https://vlachosgroup.github.io/pmutt/empirical.html#nasa
```
from pmutt.empirical.nasa import Nasa
butane_nasa = Nasa.from_model(name='butane',
model=butane_statmech,
T_low=298.,
T_high=800.,
elements={'C': 4, 'H': 10},
phase='G')
H_nasa = butane_nasa.get_H(T=298., units='kJ/mol')
S_nasa = butane_nasa.get_S(T=298., units='J/mol/K')
print('H_butane(T=298) = {:.1f} kJ/mol'.format(H_nasa))
print('S_butane(T=298) = {:.2f} J/mol/K'.format(S_nasa))
```
Although it is not covered here, you can also generate empirical objects from experimental data using the ``.from_data`` method. See [Experimental to Empirical][6] example.
[6]: https://vlachosgroup.github.io/pmutt/examples.html#experimental-to-empirical
#### Initializing Nasa Directly
We can also initialize the NASA polynomial directly if we already have the polynomial coefficients, using an entry from the [Reaction Mechanism Generator (RMG) database][0].
[0]: https://rmg.mit.edu/database/thermo/libraries/DFT_QCI_thermo/215/
```
import numpy as np
butane_nasa_direct = Nasa(name='butane',
T_low=100.,
T_mid=1147.61,
T_high=5000.,
a_low=np.array([ 2.16917742E+00,
3.43905384E-02,
-1.61588593E-06,
-1.30723691E-08,
5.17339469E-12,
-1.72505133E+04,
1.46546944E+01]),
a_high=np.array([ 6.78908025E+00,
3.03597365E-02,
-1.21261608E-05,
2.19944009E-09,
-1.50288488E-13,
-1.91058191E+04,
-1.17331911E+01]),
elements={'C': 4, 'H': 10},
phase='G')
H_nasa_direct = butane_nasa_direct.get_H(T=298., units='kJ/mol')
S_nasa_direct = butane_nasa_direct.get_S(T=298., units='J/mol/K')
print('H_butane(T=298) = {:.1f} kJ/mol'.format(H_nasa_direct))
print('S_butane(T=298) = {:.2f} J/mol/K'.format(S_nasa_direct))
```
Compare the results between ``butane_nasa`` and ``butane_nasa_direct`` to the [Wikipedia page for butane][0].
[0]: https://en.wikipedia.org/wiki/Butane_(data_page)
```
print('H_nasa = {:.1f} kJ/mol'.format(H_nasa))
print('H_nasa_direct = {:.1f} kJ/mol'.format(H_nasa_direct))
print('H_wiki = -125.6 kJ/mol\n')
print('S_nasa = {:.2f} J/mol/K'.format(S_nasa))
print('S_nasa_direct = {:.2f} J/mol/K'.format(S_nasa_direct))
print('S_wiki = 310.23 J/mol/K')
```
Notice the values are very different for ``H_nasa``. This discrepancy is due to:
- different references
- error in DFT
We can account for this discrepancy by using the [``Reference``][0] and [``References``][1] objects.
[0]: https://vlachosgroup.github.io/pmutt/empirical.html#pmutt.empirical.references.Reference
[1]: https://vlachosgroup.github.io/pmutt/empirical.html#pmutt.empirical.references.References
### Referencing
To define a reference, you must have:
- enthalpy at some reference temperature (``HoRT_ref`` and ``T_ref``)
- a ``StatMech`` object
In general, use references that are similar to molecules in your mechanism. Also, the number of reference molecules must be equal to the number of elements (or other descriptors) in the mechanism. [Full description of referencing scheme here][0].
In this example, we will use ethane and propane as references.
<img src="images/ref_molecules1.png" width=800>
[0]: https://vlachosgroup.github.io/pmutt/referencing.html
```
from pmutt.empirical.references import Reference, References
ethane_ref = Reference(name='ethane',
elements={'C': 2, 'H': 6},
atoms=molecule('C2H6'),
vib_wavenumbers=[3050.5296, 3049.8428, 3025.2714,
3024.4304, 2973.5455, 2971.9261,
1455.4203, 1454.9941, 1454.2055,
1453.7038, 1372.4786, 1358.3593,
1176.4512, 1175.507, 992.55,
803.082, 801.4536, 298.4712],
symmetrynumber=6,
potentialenergy=-40.5194,
spin=0,
T_ref=298.15,
HoRT_ref=-33.7596,
**idealgas_defaults)
propane_ref = Reference(name='propane',
elements={'C': 3, 'H': 8},
atoms=molecule('C3H8'),
vib_wavenumbers=[3040.9733, 3038.992, 3036.8071,
3027.6062, 2984.8436, 2966.1692,
2963.4684, 2959.7431, 1462.5683,
1457.4221, 1446.858, 1442.0357,
1438.7871, 1369.6901, 1352.6287,
1316.215, 1273.9426, 1170.4456,
1140.9699, 1049.3866, 902.8507,
885.3209, 865.5958, 735.1924,
362.7372, 266.3928, 221.4547],
symmetrynumber=2,
potentialenergy=-57.0864,
spin=0,
T_ref=298.15,
HoRT_ref=-42.2333,
**idealgas_defaults)
refs = References(references=[ethane_ref, propane_ref])
print(refs.offset)
```
Passing the ``References`` object when we make our ``Nasa`` object produces a value closer to the one listed above.
```
butane_nasa_ref = Nasa.from_model(name='butane',
model=butane_statmech,
T_low=298.,
T_high=800.,
elements={'C': 4, 'H': 10},
references=refs)
H_nasa_ref = butane_nasa_ref.get_H(T=298., units='kJ/mol')
S_nasa_ref = butane_nasa_ref.get_S(T=298., units='J/mol/K')
print('H_butane(T=298) = {:.1f} kJ/mol'.format(H_nasa_ref))
print('S_butane(T=298) = {:.2f} J/mol/K'.format(S_nasa_ref))
```
## Input and Output
### Excel
Encoding each object in Python can be tedious and so you can read from Excel spreadsheets using [``pmutt.io.excel.read_excel``][0]. Note that this function returns a list of dictionaries. This output allows you to initialize whichever object you want. There are also special rules that depend on the header name.
[0]: https://vlachosgroup.github.io/pmutt/io.html?highlight=read_excel#pmutt.io.excel.read_excel
```
import os
from pathlib import Path
from pmutt.io.excel import read_excel
# Find the location of Jupyter notebook
# Note that normally Python scripts have a __file__ variable but Jupyter notebook doesn't.
# Using pathlib can overcome this limitation
try:
notebook_folder = os.path.dirname(__file__)
except NameError:
notebook_folder = Path().resolve()
os.chdir(notebook_folder)
# The Excel spreadsheet is located in the same folder as the Jupyter notebook
refs_path = os.path.join(notebook_folder, 'refs.xlsx')
refs_data = read_excel(refs_path)
pprint(refs_data)
```
Initialize using \*\*kwargs syntax.
```
ref_list = []
for record in refs_data:
ref_list.append(Reference(**record))
refs_excel = References(references=ref_list)
print(refs_excel.offset)
```
Butane can be initialized in a similar way.
```
# The Excel spreadsheet is located in the same folder as the Jupyter notebook
butane_path = os.path.join(notebook_folder, 'butane.xlsx')
butane_data = read_excel(butane_path)[0] # [0] accesses the butane data
butane_excel = Nasa.from_model(T_low=298.,
T_high=800.,
references=refs_excel,
**butane_data)
H_excel = butane_excel.get_H(T=298., units='kJ/mol')
S_excel = butane_excel.get_S(T=298., units='J/mol/K')
print('H_butane(T=298) = {:.1f} kJ/mol'.format(H_excel))
print('S_butane(T=298) = {:.2f} J/mol/K'.format(S_excel))
```
### Thermdat
The thermdat format uses NASA polynomials to represent several species. It has a very particular format so doing it manually is error-prone. You can write a list of ``Nasa`` objects to thermdat format using [``pmutt.io.thermdat.write_thermdat``][0].
[0]: https://vlachosgroup.github.io/pmutt/io.html#pmutt.io.thermdat.write_thermdat
```
from pmutt.io.thermdat import write_thermdat
# Make Nasa objects from previously defined ethane and propane
ethane_nasa = Nasa.from_model(name='ethane',
phase='G',
T_low=298.,
T_high=800.,
model=ethane_ref.model,
elements=ethane_ref.elements,
references=refs)
propane_nasa = Nasa.from_model(name='propane',
phase='G',
T_low=298.,
T_high=800.,
model=propane_ref.model,
elements=propane_ref.elements,
references=refs)
nasa_species = [ethane_nasa, propane_nasa, butane_nasa]
# Determine the output path and write the thermdat file
thermdat_path = os.path.join(notebook_folder, 'thermdat')
write_thermdat(filename=thermdat_path, nasa_species=nasa_species)
```
Similarly, [``pmutt.io.thermdat.read_thermdat``][0] reads thermdat files.
[0]: https://vlachosgroup.github.io/pmutt/io.html#pmutt.io.thermdat.read_thermdat
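As a quick check, the file written above can be read back and evaluated; this is a small sketch that assumes the thermdat file from the previous cell exists at ``thermdat_path``.
```
from pmutt.io.thermdat import read_thermdat

# Read back the thermdat file written above and evaluate each species
nasa_read = read_thermdat(thermdat_path)
for species in nasa_read:
    print(species.name, species.get_H(T=298., units='kJ/mol'))
```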
## Reactions
You can also evaluate reactions properties. The most straightforward way to do this is to initialize using strings.
```
from pmutt.io.thermdat import read_thermdat
from pmutt import pmutt_list_to_dict
from pmutt.reaction import Reaction
# Get a dictionary of species
thermdat_H2O_path = os.path.join(notebook_folder, 'thermdat_H2O')
species_list = read_thermdat(thermdat_H2O_path)
species_dict = pmutt_list_to_dict(species_list)
# Initialize the reaction
rxn_H2O = Reaction.from_string('H2 + 0.5O2 = H2O', species=species_dict)
# Calculate reaction properties
H_rxn = rxn_H2O.get_delta_H(T=298., units='kJ/mol')
S_rxn = rxn_H2O.get_delta_S(T=298., units='J/mol/K')
print('H_rxn(T=298) = {:.1f} kJ/mol'.format(H_rxn))
print('S_rxn(T=298) = {:.2f} J/mol/K'.format(S_rxn))
```
## Exercise
Write a script to calculate the Enthalpy of adsorption (in kcal/mol) of H2O on Cu(111) at T = 298 K. Some important details are given below.
### Information Required
#### H2O:
- ideal gas
- atoms: You can use "ase.build.molecule" to generate a water molecule like we did with ethane, propane, and butane.
- vibrational wavenumbers (1/cm): 3825.434, 3710.2642, 1582.432
- potential energy (eV): -14.22393533
- spin: 0
- symmetry number: 2
#### Cu(111):
- only electronic modes
- potential energy (eV): -224.13045381
#### H2O+Cu(111):
- electronic and harmonic vibration modes
- potential energy (eV): -238.4713854
- vibrational wavenumbers (1/cm): 3797.255519, 3658.895695, 1530.600295, 266.366130, 138.907356, 63.899768, 59.150454, 51.256019, -327.384554 (negative numbers represent imaginary frequencies. The default behavior of pMuTT is to ignore these frequencies when calculating any thermodynamic property)
#### Reaction:
H2O + Cu(111) --> H2O+Cu(111)
### Solution:
```
from ase.build import molecule
from pmutt.statmech import StatMech, presets
from pmutt.reaction import Reaction
# Using dictionary since later I will initialize the reaction with a string
species = {
'H2O(g)': StatMech(atoms=molecule('H2O'),
vib_wavenumbers=[3825.434, 3710.2642, 1582.432],
potentialenergy=-14.22393533,
spin=0,
symmetrynumber=2,
**presets['idealgas']),
'*': StatMech(potentialenergy=-224.13045381,
**presets['electronic']),
'H2O*': StatMech(potentialenergy=-238.4713854,
vib_wavenumbers=[3797.255519,
3658.895695,
1530.600295,
266.366130,
138.907356,
63.899768,
59.150454,
51.256019,
-327.384554], #Imaginary frequency!
**presets['harmonic']),
}
rxn = Reaction.from_string('H2O(g) + * = H2O*', species)
del_H = rxn.get_delta_H(T=298., units='kcal/mol')
print('del_H = {:.2f} kcal/mol'.format(del_H))
```
# TabNet: Attentive Interpretable Tabular Learning
## Preparation
```
%%capture
!pip install pytorch-tabnet
!pip install imblearn
!pip install catboost
!pip install tab-transformer-pytorch
import torch
import torch.nn as nn
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score, f1_score, confusion_matrix
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import plot_confusion_matrix
from imblearn.metrics import sensitivity_score,specificity_score,sensitivity_specificity_support
import pandas as pd
import numpy as np
import pprint
from catboost import CatBoostClassifier
import matplotlib.pyplot as plt
from pytorch_tabnet.tab_model import TabNetClassifier
from tab_transformer_pytorch import TabTransformer;
```
## Preprocessing
The Cover Type dataset contains tree observations from four areas of the Roosevelt National Forest in Colorado. All observations are cartographic variables (no remote sensing) from 30 meter x 30 meter sections of forest. There are over half a million measurements in total! The dataset includes information on tree type, shadow coverage, distance to nearby landmarks (roads etc.), soil type, and local topography.
Class descriptions:
* **1:** Spruce/Fir
* **2:** Lodgepole Pine
* **3:** Ponderosa Pine
* **4:** Cottonwood/Willow
* **5:** Aspen
* **6:** Douglas-fir
* **7:** Krummholz
For more details about the dataset, see the metadata at [this link](https://archive.ics.uci.edu/ml/machine-learning-databases/covtype/).
```
base_data_path = "/content/drive/MyDrive/applied_ai_enes_safak/datasets/"
forest_cover_type_dataset = base_data_path + "cover_type.csv"
data = pd.read_csv(forest_cover_type_dataset)
df_train = data.iloc[:11_340,:]
df_val = data.iloc[11_340:11_340+3_780,:]
df_test = data.iloc[11_340+3_780:, :]
print(len(df_train),len(df_val),len(df_test))
pprint.saferepr(data.columns.to_list())
data['Cover_Type'].value_counts()
X_train, y_train = df_train.iloc[:,:-1].values, df_train.iloc[:,-1].values
X_val, y_val = df_val.iloc[:,:-1].values, df_val.iloc[:,-1].values
X_test, y_test = df_test.iloc[:,:-1].values, df_test.iloc[:,-1].values
```
## Modeling TabNet
```
from pytorch_tabnet.metrics import Metric
class GMean(Metric):
def __init__(self):
self._name = "gmean"
self._maximize = True
def __call__(self, y_true, y_pred):
        # G-mean = sqrt(sensitivity * specificity); the two scores must be multiplied before taking the square root
        return np.sqrt(sensitivity_score(y_true, y_pred, average="macro") * specificity_score(y_true, y_pred, average="macro"))
clf = TabNetClassifier(
lambda_sparse = 1e-4,
n_d = 64,
n_a = 64,
gamma = 1.5,
optimizer_params=dict(lr=0.02),
scheduler_params={"gamma":0.95, "verbose":0, "step_size":15},
#scheduler_fn = torch.optim.lr_scheduler.ExponentialLR,
scheduler_fn = torch.optim.lr_scheduler.StepLR,
#optimizer_fn = torch.optim.AdamW
optimizer_fn = torch.optim.Adam,
)
clf.fit(
X_train = X_train,
y_train = y_train,
eval_set = [(X_val, y_val)],
max_epochs = 1000,
batch_size = 1024,
virtual_batch_size = 128,
num_workers = 0,
patience = 20,
drop_last = False,
eval_metric=['balanced_accuracy']
)
# more for batching strategies: https://arxiv.org/pdf/1906.03548.pdf
clf.history
y_preds = clf.predict(X_test)
accuracy = accuracy_score(y_test, y_preds)
roc_auc = roc_auc_score(y_test, clf.predict_proba(X_test), multi_class='ovr')
f1 = f1_score(y_test, y_preds, average="macro")
print(f"Accuracy: {accuracy}\nROC-AUC: {roc_auc}")
```
## CatBoost Comparison
```
catboost = CatBoostClassifier(verbose=0)
catboost.fit(X_train, y_train)
y_preds = catboost.predict(X_test)
accuracy = accuracy_score(y_test, y_preds)
roc_auc = roc_auc_score(y_test, catboost.predict_proba(X_test), multi_class='ovr')
f1 = f1_score(y_test, y_preds, average="macro")
print(f"Catboost Accuracy: {accuracy}\nCatboost ROC-AUC: {roc_auc}")
```
## Explainable TabNet
```
clf.feature_importances_
explain_matrix, masks = clf.explain(X_test)
fig, axs = plt.subplots(1, 3, figsize=(10,10))
for i in range(len(masks)):
axs[i].imshow(masks[i][:100],cmap="gray")
axs[i].set_title(f"mask {i}")
fig = plt.figure(figsize=(10,10))
plt.imshow(explain_matrix[:100],cmap="gray")
```
## Highly Imbalanced Dataset: Seismic Bumps
Seismic hazard is among the hardest natural hazards to detect and predict; in this respect it is comparable to an earthquake. Increasingly advanced seismic and seismoacoustic monitoring systems allow a better understanding of rock-mass processes and the definition of seismic hazard prediction methods, yet the accuracy of the methods developed so far remains far from perfect. Good prediction of increased seismic activity is therefore a matter of great practical importance. The data set presented here has a highly unbalanced distribution of positive and negative examples: it contains only 170 positive examples representing class 1.
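With so few positive examples, raw accuracy is a misleading score here: a model that always predicts the majority class looks highly accurate while detecting no hazardous bumps at all, which is why the cells below also report the geometric mean (G-mean) of sensitivity and specificity. The short sketch below is an added illustration of this failure mode; the class counts are hypothetical and not taken from the data set.
```
import numpy as np
from imblearn.metrics import sensitivity_score, specificity_score

# Illustrative counts only: 170 positives vs. 2000 negatives (hypothetical),
# scored against a degenerate "always negative" classifier.
y_true = np.array([1] * 170 + [0] * 2000)
y_pred = np.zeros_like(y_true)

accuracy = (y_true == y_pred).mean()              # ~0.92 -- looks fine
sensitivity = sensitivity_score(y_true, y_pred)   # 0.0 -- finds no positives
specificity = specificity_score(y_true, y_pred)   # 1.0
gmean = np.sqrt(sensitivity * specificity)        # 0.0 -- exposes the failure
print(accuracy, sensitivity, specificity, gmean)
```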
```
def gmean(y_true, y_pred):
return np.sqrt(sensitivity_score(y_true, y_pred) * specificity_score(y_true,y_pred))
subsample = 300
base_data_path = "/content/drive/MyDrive/applied_ai_enes_safak/datasets/"
forest_cover_type_dataset = base_data_path + "seismic_bumps.csv"
data = pd.read_csv(forest_cover_type_dataset).drop(["id"],axis=1)
data_wo_0 = data[data["class"] != 0]
data_w0_undersampled = data[data["class"] == 0].sample(subsample).reset_index(drop=True)
data_c = pd.concat([data_wo_0, data_w0_undersampled],axis=0).sample(frac=1).reset_index(drop=True)
data_c["class"].value_counts()
cont_cols = data_c.drop(["class"],axis=1)._get_numeric_data().columns
cat_cols = list(set(data_c.drop(["class"],axis=1).columns) - set(cont_cols))
data_c["seismic"] = data_c["seismic"].factorize()[0]
data_c["seismoacoustic"] = data_c["seismoacoustic"].factorize()[0]
data_c["shift"] = data_c["shift"].factorize()[0]
data_c["ghazard"] = data_c["ghazard"].factorize()[0]
cat_cols_idx = [data_c.columns.to_list().index(i) for i in cat_cols]
X_train, X_test, y_train, y_test = train_test_split(data_c.iloc[:,:-1].values,data_c.iloc[:,-1].values,test_size = 0.33)
len_unique_cat = [len(np.unique(X_train[:, data_c.columns.to_list().index(i)])) for i in cat_cols]  # index into data_c columns, consistent with cat_cols_idx
clf = TabNetClassifier(
lambda_sparse = 1e-2,
n_d = 8,
n_a = 8,
gamma = 1.5,
optimizer_params=dict(lr=0.05),
scheduler_params={"gamma":0.95, "verbose":0, "step_size":15},
#scheduler_fn = torch.optim.lr_scheduler.ExponentialLR,
scheduler_fn = torch.optim.lr_scheduler.StepLR,
#optimizer_fn = torch.optim.AdamW
optimizer_fn = torch.optim.Adam,
cat_idxs = cat_cols_idx,
cat_dims = len_unique_cat
)
clf.fit(
X_train = X_train,
y_train = y_train,
eval_set = [(X_test, y_test)],
max_epochs = 1000,
batch_size = 1024,
virtual_batch_size = 128,
num_workers = 0,
patience = 20,
drop_last = False,
eval_metric=['auc']
)
proba = clf.predict_proba(X_test)
proba_unhot = []
for row in proba:
proba_unhot.append(row[1])
y_preds = clf.predict(X_test)
accuracy = accuracy_score(y_test, y_preds)
roc_auc = roc_auc_score(y_test, proba_unhot)
print(f"Accuracy: {accuracy}\nROC-AUC: {roc_auc}")
gmean(y_test, y_preds)
confusion_matrix(y_test, y_preds)
# Fit CatBoost on the seismic data first, then compute probabilities and predictions
catboost = CatBoostClassifier(verbose=0)
catboost.fit(X_train, y_train)
proba = catboost.predict_proba(X_test)
proba_unhot = []
for row in proba:
    proba_unhot.append(row[1])
y_preds = catboost.predict(X_test)
accuracy = accuracy_score(y_test, y_preds)
roc_auc = roc_auc_score(y_test, proba_unhot)
print(f"Catboost Accuracy: {accuracy}\nCatboost ROC-AUC: {roc_auc}")
confusion_matrix(y_test, y_preds)
gmean(y_test, y_preds)
explain_matrix, masks = clf.explain(X_test)
fig, axs = plt.subplots(1, 3, figsize=(10,10))
for i in range(len(masks)):
axs[i].imshow(masks[i][:100],cmap="gray")
axs[i].set_title(f"mask {i}")
fig = plt.figure(figsize=(10,10))
plt.imshow(explain_matrix[:100],cmap="gray")
```
# Human numbers
```
from fastai.text import *
bs=64
```
## Data
```
path = untar_data(URLs.HUMAN_NUMBERS)
path.ls()
def readnums(d): return [', '.join(o.strip() for o in open(path/d).readlines())]
train_txt = readnums('train.txt'); train_txt[0][:80]
valid_txt = readnums('valid.txt'); valid_txt[0][-80:]
train = TextList(train_txt, path=path)
valid = TextList(valid_txt, path=path)
src = ItemLists(path=path, train=train, valid=valid).label_for_lm()
data = src.databunch(bs=bs)
train[0].text[:80]
len(data.valid_ds[0][0].data)
data.bptt, len(data.valid_dl)
13017/bs/70
it = iter(data.valid_dl)
x1,y1 = next(it)
x2,y2 = next(it)
x3,y3 = next(it)
it.close()
x1.numel()+x2.numel()+x3.numel()
x1.shape,y1.shape
x2.shape,y2.shape
x1[:,0]
y1[:,0]
v = data.valid_ds.vocab
v.textify(x1[0])
v.textify(y1[0])
v.textify(x2[0])
v.textify(x3[0])
v.textify(x1[1])
v.textify(x2[1])
v.textify(x3[1])
v.textify(x3[-1])
data.show_batch(ds_type=DatasetType.Valid)
```
## Single fully connected model
```
data = src.databunch(bs=bs, bptt=3)
x,y = data.one_batch()
x.shape,y.shape
nv = len(v.itos); nv
nh=64
def loss4(input,target): return F.cross_entropy(input, target[:,-1])
def acc4 (input,target): return accuracy(input, target[:,-1])
class Model0(nn.Module):
def __init__(self):
super().__init__()
self.i_h = nn.Embedding(nv,nh) # green arrow
self.h_h = nn.Linear(nh,nh) # brown arrow
self.h_o = nn.Linear(nh,nv) # blue arrow
self.bn = nn.BatchNorm1d(nh)
def forward(self, x):
h = self.bn(F.relu(self.h_h(self.i_h(x[:,0]))))
if x.shape[1]>1:
h = h + self.i_h(x[:,1])
h = self.bn(F.relu(self.h_h(h)))
if x.shape[1]>2:
h = h + self.i_h(x[:,2])
h = self.bn(F.relu(self.h_h(h)))
return self.h_o(h)
learn = Learner(data, Model0(), loss_func=loss4, metrics=acc4)
learn.fit_one_cycle(6, 1e-4)
```
## Same thing with a loop
```
class Model1(nn.Module):
def __init__(self):
super().__init__()
self.i_h = nn.Embedding(nv,nh) # green arrow
self.h_h = nn.Linear(nh,nh) # brown arrow
self.h_o = nn.Linear(nh,nv) # blue arrow
self.bn = nn.BatchNorm1d(nh)
def forward(self, x):
h = torch.zeros(x.shape[0], nh).to(device=x.device)
for i in range(x.shape[1]):
h = h + self.i_h(x[:,i])
h = self.bn(F.relu(self.h_h(h)))
return self.h_o(h)
learn = Learner(data, Model1(), loss_func=loss4, metrics=acc4)
learn.fit_one_cycle(6, 1e-4)
```
## Multi fully connected model
```
data = src.databunch(bs=bs, bptt=20)
x,y = data.one_batch()
x.shape,y.shape
class Model2(nn.Module):
def __init__(self):
super().__init__()
self.i_h = nn.Embedding(nv,nh)
self.h_h = nn.Linear(nh,nh)
self.h_o = nn.Linear(nh,nv)
self.bn = nn.BatchNorm1d(nh)
def forward(self, x):
h = torch.zeros(x.shape[0], nh).to(device=x.device)
res = []
for i in range(x.shape[1]):
h = h + self.i_h(x[:,i])
h = F.relu(self.h_h(h))
res.append(self.h_o(self.bn(h)))
return torch.stack(res, dim=1)
learn = Learner(data, Model2(), metrics=accuracy)
learn.fit_one_cycle(10, 1e-4, pct_start=0.1)
```
## Maintain state
```
class Model3(nn.Module):
def __init__(self):
super().__init__()
self.i_h = nn.Embedding(nv,nh)
self.h_h = nn.Linear(nh,nh)
self.h_o = nn.Linear(nh,nv)
self.bn = nn.BatchNorm1d(nh)
self.h = torch.zeros(bs, nh).cuda()
def forward(self, x):
res = []
h = self.h
for i in range(x.shape[1]):
h = h + self.i_h(x[:,i])
h = F.relu(self.h_h(h))
res.append(self.bn(h))
self.h = h.detach()
res = torch.stack(res, dim=1)
res = self.h_o(res)
return res
learn = Learner(data, Model3(), metrics=accuracy)
learn.fit_one_cycle(20, 3e-3)
```
## nn.RNN
```
class Model4(nn.Module):
def __init__(self):
super().__init__()
self.i_h = nn.Embedding(nv,nh)
self.rnn = nn.RNN(nh,nh, batch_first=True)
self.h_o = nn.Linear(nh,nv)
self.bn = BatchNorm1dFlat(nh)
self.h = torch.zeros(1, bs, nh).cuda()
def forward(self, x):
res,h = self.rnn(self.i_h(x), self.h)
self.h = h.detach()
return self.h_o(self.bn(res))
learn = Learner(data, Model4(), metrics=accuracy)
learn.fit_one_cycle(20, 3e-3)
```
## 2-layer GRU
```
class Model5(nn.Module):
def __init__(self):
super().__init__()
self.i_h = nn.Embedding(nv,nh)
self.rnn = nn.GRU(nh, nh, 2, batch_first=True)
self.h_o = nn.Linear(nh,nv)
self.bn = BatchNorm1dFlat(nh)
self.h = torch.zeros(2, bs, nh).cuda()
def forward(self, x):
res,h = self.rnn(self.i_h(x), self.h)
self.h = h.detach()
return self.h_o(self.bn(res))
learn = Learner(data, Model5(), metrics=accuracy)
learn.fit_one_cycle(10, 1e-2)
```
## fin
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Datasets/Vectors/us_epa_ecoregions.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Datasets/Vectors/us_epa_ecoregions.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Datasets/Vectors/us_epa_ecoregions.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as geemap
except:
import geemap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
```
Map = geemap.Map(center=[40,-100], zoom=4)
Map
```
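As a quick illustration of the `Map.add_basemap()` function mentioned above, a basemap can be added by name once the map exists. This call is not part of the original script, and `'HYBRID'` is assumed here to be one of the basemap keys bundled with geemap:
```
# Optional: add an extra basemap layer by name. The available keys are listed
# in geemap/basemaps.py (linked above); 'HYBRID' is assumed to be one of them.
Map.add_basemap('HYBRID')
```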
## Add Earth Engine Python script
```
# Add Earth Engine dataset
dataset = ee.FeatureCollection('EPA/Ecoregions/2013/L3')
visParams = {
'palette': ['0a3b04', '1a9924', '15d812'],
'min': 23.0,
'max': 3.57e+11,
'opacity': 0.8,
}
image = ee.Image().float().paint(dataset, 'shape_area')
Map.setCenter(-99.814, 40.166, 5)
Map.addLayer(image, visParams, 'EPA/Ecoregions/2013/L3')
# Map.addLayer(dataset, {}, 'for Inspector', False)
dataset = ee.FeatureCollection('EPA/Ecoregions/2013/L4')
visParams = {
'palette': ['0a3b04', '1a9924', '15d812'],
'min': 0.0,
'max': 67800000000.0,
'opacity': 0.8,
}
image = ee.Image().float().paint(dataset, 'shape_area')
Map.setCenter(-99.814, 40.166, 5)
Map.addLayer(image, visParams, 'EPA/Ecoregions/2013/L4')
# Map.addLayer(dataset, {}, 'for Inspector', False)
```
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
```
import keras
from keras.models import Sequential, Model, load_model
from keras.layers import Dense, Dropout, Activation, Flatten, Input, Lambda
from keras.layers import Conv2D, MaxPooling2D, Conv1D, MaxPooling1D, LSTM, ConvLSTM2D, GRU, BatchNormalization, LocallyConnected2D, Permute
from keras.layers import Concatenate, Reshape, Softmax, Conv2DTranspose, Embedding, Multiply
from keras.callbacks import ModelCheckpoint, EarlyStopping, Callback
from keras import regularizers
from keras import backend as K
from keras.utils.generic_utils import Progbar
from keras.layers.merge import _Merge
import keras.losses
from functools import partial
from collections import defaultdict
import tensorflow as tf
from tensorflow.python.framework import ops
import isolearn.keras as iso
import numpy as np
import tensorflow as tf
import logging
logging.getLogger('tensorflow').setLevel(logging.ERROR)
import pandas as pd
import os
import pickle
import numpy as np
import scipy.sparse as sp
import scipy.io as spio
import matplotlib.pyplot as plt
import isolearn.io as isoio
import isolearn.keras as isol
from genesis.visualization import *
def iso_normalizer(t) :
iso = 0.0
if np.sum(t) > 0.0 :
iso = np.sum(t[80: 80+25]) / np.sum(t)
return iso
def cut_normalizer(t) :
cuts = np.concatenate([np.zeros(205), np.array([1.0])])
if np.sum(t) > 0.0 :
cuts = t / np.sum(t)
return cuts
def plot_gan_logo(pwm, score, sequence_template=None, figsize=(12, 3), width_ratios=[1, 7], logo_height=1.0, plot_start=0, plot_end=164) :
#Slice according to seq trim index
pwm = pwm[plot_start: plot_end, :]
sequence_template = sequence_template[plot_start: plot_end]
pwm += 0.0001
for j in range(0, pwm.shape[0]) :
pwm[j, :] /= np.sum(pwm[j, :])
entropy = np.zeros(pwm.shape)
entropy[pwm > 0] = pwm[pwm > 0] * -np.log2(pwm[pwm > 0])
entropy = np.sum(entropy, axis=1)
conservation = 2 - entropy
fig = plt.figure(figsize=figsize)
gs = gridspec.GridSpec(1, 2, width_ratios=[width_ratios[0], width_ratios[-1]])
ax2 = plt.subplot(gs[0])
ax3 = plt.subplot(gs[1])
plt.sca(ax2)
plt.axis('off')
annot_text = '\nScore = ' + str(round(score, 4))
ax2.text(0.99, 0.5, annot_text, horizontalalignment='right', verticalalignment='center', transform=ax2.transAxes, color='black', fontsize=12, weight="bold")
height_base = (1.0 - logo_height) / 2.
for j in range(0, pwm.shape[0]) :
sort_index = np.argsort(pwm[j, :])
for ii in range(0, 4) :
i = sort_index[ii]
nt_prob = pwm[j, i] * conservation[j]
nt = ''
if i == 0 :
nt = 'A'
elif i == 1 :
nt = 'C'
elif i == 2 :
nt = 'G'
elif i == 3 :
nt = 'T'
color = None
if sequence_template[j] != 'N' :
color = 'black'
if ii == 0 :
letterAt(nt, j + 0.5, height_base, nt_prob * logo_height, ax3, color=color)
else :
prev_prob = np.sum(pwm[j, sort_index[:ii]] * conservation[j]) * logo_height
letterAt(nt, j + 0.5, height_base + prev_prob, nt_prob * logo_height, ax3, color=color)
plt.sca(ax3)
plt.xlim((0, plot_end - plot_start))
plt.ylim((0, 2))
plt.xticks([], [])
plt.yticks([], [])
plt.axis('off')
ax3.axhline(y=0.01 + height_base, color='black', linestyle='-', linewidth=2)
for axis in fig.axes :
axis.get_xaxis().set_visible(False)
axis.get_yaxis().set_visible(False)
plt.tight_layout()
plt.show()
#Load APA plasmid data (random mpra)
file_path = '../../../aparent/data/prepared_data/apa_plasmid_data/'
data_version = ''
#plasmid_dict = isoio.load(file_path + 'apa_plasmid_data' + data_version)
plasmid_dict = pickle.load(open('../../../aparent/apa_plasmid_data.pickle', 'rb'))
plasmid_df = plasmid_dict['plasmid_df']
plasmid_cuts = plasmid_dict['plasmid_cuts']
print("len(plasmid_df) = " + str(len(plasmid_df)))
#Filter data
kept_libraries = [22]
min_count = 50
min_usage = 0.95
if kept_libraries is not None :
keep_index = np.nonzero(plasmid_df.library_index.isin(kept_libraries))[0]
plasmid_df = plasmid_df.iloc[keep_index].copy()
plasmid_cuts = plasmid_cuts[keep_index, :]
'''keep_index = np.nonzero(plasmid_df.seq.str.slice(70, 76) == 'AATAAA')[0]
plasmid_df = plasmid_df.iloc[keep_index].copy()
plasmid_cuts = plasmid_cuts[keep_index, :]
keep_index = np.nonzero(plasmid_df.seq.str.slice(76).str.contains('AATAAA'))[0]
plasmid_df = plasmid_df.iloc[keep_index].copy()
plasmid_cuts = plasmid_cuts[keep_index, :]'''
if min_count is not None :
keep_index = np.nonzero(plasmid_df.total_count >= min_count)[0]
plasmid_df = plasmid_df.iloc[keep_index].copy()
plasmid_cuts = plasmid_cuts[keep_index, :]
if min_usage is not None :
prox_c = np.ravel(plasmid_cuts[:, 180+70+6:180+70+6+35].sum(axis=-1))
total_c = np.ravel(plasmid_cuts[:, 180:180+205].sum(axis=-1)) + np.ravel(plasmid_cuts[:, -1].todense())
keep_index = np.nonzero(prox_c / total_c >= min_usage)[0]
#keep_index = np.nonzero(plasmid_df.proximal_count / plasmid_df.total_count >= min_usage)[0]
plasmid_df = plasmid_df.iloc[keep_index].copy()
plasmid_cuts = plasmid_cuts[keep_index, :]
print("len(plasmid_df) = " + str(len(plasmid_df)) + " (filtered)")
#Store cached filtered dataframe
#pickle.dump({'plasmid_df' : plasmid_df, 'plasmid_cuts' : plasmid_cuts}, open('apa_simple_cached_set.pickle', 'wb'))
#Load cached dataframe
cached_dict = pickle.load(open('apa_simple_cached_set.pickle', 'rb'))
plasmid_df = cached_dict['plasmid_df']
plasmid_cuts = cached_dict['plasmid_cuts']
print("len(plasmid_df) = " + str(len(plasmid_df)) + " (loaded)")
#Make generators
valid_set_size = 0.05
test_set_size = 0.05
batch_size = 32
#Generate training and test set indexes
plasmid_index = np.arange(len(plasmid_df), dtype=np.int)
plasmid_train_index = plasmid_index[:-int(len(plasmid_df) * (valid_set_size + test_set_size))]
plasmid_valid_index = plasmid_index[plasmid_train_index.shape[0]:-int(len(plasmid_df) * test_set_size)]
plasmid_test_index = plasmid_index[plasmid_train_index.shape[0] + plasmid_valid_index.shape[0]:]
print('Training set size = ' + str(plasmid_train_index.shape[0]))
print('Validation set size = ' + str(plasmid_valid_index.shape[0]))
print('Test set size = ' + str(plasmid_test_index.shape[0]))
data_gens = {
gen_id : iso.DataGenerator(
idx,
{'df' : plasmid_df},
batch_size=batch_size,
inputs = [
{
'id' : 'seq',
'source_type' : 'dataframe',
'source' : 'df',
'extractor' : iso.SequenceExtractor('padded_seq', start_pos=180 + 25 - 55, end_pos=180 + 25 - 55 + 256),
'encoder' : iso.OneHotEncoder(seq_length=256),
'dim' : (1, 256, 4),
'sparsify' : False
}
],
outputs = [
{
'id' : 'dummy_output',
'source_type' : 'zeros',
'dim' : (1,),
'sparsify' : False
}
],
randomizers = [],
shuffle = True if gen_id == 'train' else False
) for gen_id, idx in [('all', plasmid_index), ('train', plasmid_train_index), ('valid', plasmid_valid_index), ('test', plasmid_test_index)]
}
x_train = np.concatenate([data_gens['train'][i][0][0] for i in range(len(data_gens['train']))], axis=0)
x_test = np.concatenate([data_gens['test'][i][0][0] for i in range(len(data_gens['test']))], axis=0)
print(x_train.shape)
print(x_test.shape)
def make_gen_resblock(n_channels=64, window_size=3, stride=1, dilation=1, group_ix=0, layer_ix=0) :
#Initialize res block layers
batch_norm_0 = BatchNormalization(name='policy_generator_resblock_' + str(group_ix) + '_' + str(layer_ix) + '_batch_norm_0')
relu_0 = Lambda(lambda x: K.relu(x))
deconv_0 = Conv2DTranspose(n_channels, (1, window_size), strides=(1, stride), padding='same', activation='linear', kernel_initializer='glorot_uniform', name='policy_generator_resblock_' + str(group_ix) + '_' + str(layer_ix) + '_deconv_0')
batch_norm_1 = BatchNormalization(name='policy_generator_resblock_' + str(group_ix) + '_' + str(layer_ix) + '_batch_norm_1')
relu_1 = Lambda(lambda x: K.relu(x))
conv_1 = Conv2D(n_channels, (1, window_size), dilation_rate=(1, dilation), strides=(1, 1), padding='same', activation='linear', kernel_initializer='glorot_uniform', name='policy_generator_resblock_' + str(group_ix) + '_' + str(layer_ix) + '_conv_1')
skip_deconv_0 = Conv2DTranspose(n_channels, (1, 1), strides=(1, stride), padding='same', activation='linear', kernel_initializer='glorot_uniform', name='policy_generator_resblock_' + str(group_ix) + '_' + str(layer_ix) + '_skip_deconv_0')
skip_1 = Lambda(lambda x: x[0] + x[1], name='policy_generator_resblock_' + str(group_ix) + '_' + str(layer_ix) + '_skip_1')
#Execute res block
def _resblock_func(input_tensor) :
batch_norm_0_out = batch_norm_0(input_tensor)
relu_0_out = relu_0(batch_norm_0_out)
deconv_0_out = deconv_0(relu_0_out)
batch_norm_1_out = batch_norm_1(deconv_0_out)
relu_1_out = relu_1(batch_norm_1_out)
conv_1_out = conv_1(relu_1_out)
skip_deconv_0_out = skip_deconv_0(input_tensor)
skip_1_out = skip_1([conv_1_out, skip_deconv_0_out])
return skip_1_out
return _resblock_func
#Decoder Model definition
def load_decoder_resnet(seq_length=256, latent_size=100) :
#Generator network parameters
window_size = 3
strides = [2, 2, 2, 2, 2, 1]
dilations = [1, 1, 1, 1, 1, 1]
channels = [384, 256, 128, 64, 32, 32]
initial_length = 8
n_resblocks = len(strides)
#Policy network definition
policy_dense_0 = Dense(initial_length * channels[0], activation='linear', kernel_initializer='glorot_uniform', name='policy_generator_dense_0')
policy_dense_0_reshape = Reshape((1, initial_length, channels[0]))
curr_length = initial_length
resblocks = []
for layer_ix in range(n_resblocks) :
resblocks.append(make_gen_resblock(n_channels=channels[layer_ix], window_size=window_size, stride=strides[layer_ix], dilation=dilations[layer_ix], group_ix=0, layer_ix=layer_ix))
#final_batch_norm = BatchNormalization(name='policy_generator_final_batch_norm')
#final_relu = Lambda(lambda x: K.relu(x))
final_conv = Conv2D(4, (1, 1), strides=(1, 1), padding='same', activation='linear', kernel_initializer='glorot_uniform', name='policy_generator_final_conv')
def _generator_func(seed_input) :
policy_dense_0_out = policy_dense_0_reshape(policy_dense_0(seed_input))
#Connect group of res blocks
output_tensor = policy_dense_0_out
#Res block group 0
for layer_ix in range(n_resblocks) :
output_tensor = resblocks[layer_ix](output_tensor)
#Final conv out
#final_batch_norm_out = final_batch_norm(output_tensor)
#final_relu_out = final_relu(final_batch_norm_out)
final_conv_out = final_conv(output_tensor)#final_conv(final_relu_out)
return final_conv_out
return _generator_func
def make_disc_resblock(n_channels=64, window_size=8, dilation_rate=1, group_ix=0, layer_ix=0) :
#Initialize res block layers
batch_norm_0 = BatchNormalization(name='policy_discriminator_resblock_' + str(group_ix) + '_' + str(layer_ix) + '_batch_norm_0')
relu_0 = Lambda(lambda x: K.relu(x, alpha=0.0))
conv_0 = Conv2D(n_channels, (1, window_size), dilation_rate=dilation_rate, strides=(1, 1), padding='same', activation='linear', kernel_initializer='glorot_normal', name='policy_discriminator_resblock_' + str(group_ix) + '_' + str(layer_ix) + '_conv_0')
batch_norm_1 = BatchNormalization(name='policy_discriminator_resblock_' + str(group_ix) + '_' + str(layer_ix) + '_batch_norm_1')
relu_1 = Lambda(lambda x: K.relu(x, alpha=0.0))
conv_1 = Conv2D(n_channels, (1, window_size), dilation_rate=dilation_rate, strides=(1, 1), padding='same', activation='linear', kernel_initializer='glorot_normal', name='policy_discriminator_resblock_' + str(group_ix) + '_' + str(layer_ix) + '_conv_1')
skip_1 = Lambda(lambda x: x[0] + x[1], name='policy_discriminator_resblock_' + str(group_ix) + '_' + str(layer_ix) + '_skip_1')
#Execute res block
def _resblock_func(input_tensor) :
batch_norm_0_out = batch_norm_0(input_tensor)
relu_0_out = relu_0(batch_norm_0_out)
conv_0_out = conv_0(relu_0_out)
batch_norm_1_out = batch_norm_1(conv_0_out)
relu_1_out = relu_1(batch_norm_1_out)
conv_1_out = conv_1(relu_1_out)
skip_1_out = skip_1([conv_1_out, input_tensor])
return skip_1_out
return _resblock_func
#Encoder Model definition
def load_encoder_network_4_resblocks(batch_size, seq_length=205, latent_size=100, drop_rate=0.25) :
#Discriminator network parameters
n_resblocks = 4
n_channels = 32
#Discriminator network definition
policy_conv_0 = Conv2D(n_channels, (1, 1), strides=(1, 1), padding='same', activation='linear', kernel_initializer='glorot_normal', name='policy_discriminator_conv_0')
skip_conv_0 = Conv2D(n_channels, (1, 1), strides=(1, 1), padding='same', activation='linear', kernel_initializer='glorot_normal', name='policy_discriminator_skip_conv_0')
resblocks = []
for layer_ix in range(n_resblocks) :
resblocks.append(make_disc_resblock(n_channels=n_channels, window_size=8, dilation_rate=1, group_ix=0, layer_ix=layer_ix))
last_block_conv = Conv2D(n_channels, (1, 1), strides=(1, 1), padding='same', activation='linear', kernel_initializer='glorot_normal', name='policy_discriminator_last_block_conv')
skip_add = Lambda(lambda x: x[0] + x[1], name='policy_discriminator_skip_add')
final_flatten = Flatten()
z_mean = Dense(latent_size, name='policy_discriminator_z_mean')
z_log_var = Dense(latent_size, name='policy_discriminator_z_log_var')
def _encoder_func(sequence_input) :
policy_conv_0_out = policy_conv_0(sequence_input)
#Connect group of res blocks
output_tensor = policy_conv_0_out
#Res block group 0
skip_conv_0_out = skip_conv_0(output_tensor)
for layer_ix in range(n_resblocks) :
output_tensor = resblocks[layer_ix](output_tensor)
#Last res block extr conv
last_block_conv_out = last_block_conv(output_tensor)
skip_add_out = skip_add([last_block_conv_out, skip_conv_0_out])
#Final dense out
final_dense_out = final_flatten(skip_add_out)
#Z mean and log variance
z_mean_out = z_mean(final_dense_out)
z_log_var_out = z_log_var(final_dense_out)
return z_mean_out, z_log_var_out
return _encoder_func
#Encoder Model definition
def load_encoder_network_8_resblocks(batch_size, seq_length=205, drop_rate=0.25) :
#Discriminator network parameters
n_resblocks = 4
n_channels = 32
latent_size = 100
#Discriminator network definition
policy_conv_0 = Conv2D(n_channels, (1, 1), strides=(1, 1), padding='same', activation='linear', kernel_initializer='glorot_normal', name='policy_discriminator_conv_0')
#Res block group 0
skip_conv_0 = Conv2D(n_channels, (1, 1), strides=(1, 1), padding='same', activation='linear', kernel_initializer='glorot_normal', name='policy_discriminator_skip_conv_0')
resblocks_0 = []
for layer_ix in range(n_resblocks) :
resblocks_0.append(make_disc_resblock(n_channels=n_channels, window_size=8, dilation_rate=1, group_ix=0, layer_ix=layer_ix))
#Res block group 1
skip_conv_1 = Conv2D(n_channels, (1, 1), strides=(1, 1), padding='same', activation='linear', kernel_initializer='glorot_normal', name='policy_discriminator_skip_conv_1')
resblocks_1 = []
for layer_ix in range(n_resblocks) :
resblocks_1.append(make_disc_resblock(n_channels=n_channels, window_size=8, dilation_rate=4, group_ix=1, layer_ix=layer_ix))
last_block_conv = Conv2D(n_channels, (1, 1), strides=(1, 1), padding='same', activation='linear', kernel_initializer='glorot_normal', name='policy_discriminator_last_block_conv')
skip_add = Lambda(lambda x: x[0] + x[1] + x[2], name='policy_discriminator_skip_add')
final_flatten = Flatten()
z_mean = Dense(latent_size, name='policy_discriminator_z_mean')
z_log_var = Dense(latent_size, name='policy_discriminator_z_log_var')
def _encoder_func(sequence_input) :
policy_conv_0_out = policy_conv_0(sequence_input)
#Connect group of res blocks
output_tensor = policy_conv_0_out
#Res block group 0
skip_conv_0_out = skip_conv_0(output_tensor)
for layer_ix in range(n_resblocks) :
output_tensor = resblocks_0[layer_ix](output_tensor)
#Res block group 0
skip_conv_1_out = skip_conv_1(output_tensor)
for layer_ix in range(n_resblocks) :
output_tensor = resblocks_1[layer_ix](output_tensor)
#Last res block extr conv
last_block_conv_out = last_block_conv(output_tensor)
skip_add_out = skip_add([last_block_conv_out, skip_conv_0_out, skip_conv_1_out])
#Final dense out
final_dense_out = final_flatten(skip_add_out)
#Z mean and log variance
z_mean_out = z_mean(final_dense_out)
z_log_var_out = z_log_var(final_dense_out)
return z_mean_out, z_log_var_out
return _encoder_func
from tensorflow.python.framework import ops
#Stochastic Binarized Neuron helper functions (Tensorflow)
#ST Estimator code adopted from https://r2rt.com/beyond-binary-ternary-and-one-hot-neurons.html
#See Github https://github.com/spitis/
def st_sampled_softmax(logits):
with ops.name_scope("STSampledSoftmax") as namescope :
nt_probs = tf.nn.softmax(logits)
onehot_dim = logits.get_shape().as_list()[1]
sampled_onehot = tf.one_hot(tf.squeeze(tf.multinomial(logits, 1), 1), onehot_dim, 1.0, 0.0)
with tf.get_default_graph().gradient_override_map({'Ceil': 'Identity', 'Mul': 'STMul'}):
return tf.ceil(sampled_onehot * nt_probs)
def st_hardmax_softmax(logits):
with ops.name_scope("STHardmaxSoftmax") as namescope :
nt_probs = tf.nn.softmax(logits)
onehot_dim = logits.get_shape().as_list()[1]
sampled_onehot = tf.one_hot(tf.argmax(nt_probs, 1), onehot_dim, 1.0, 0.0)
with tf.get_default_graph().gradient_override_map({'Ceil': 'Identity', 'Mul': 'STMul'}):
return tf.ceil(sampled_onehot * nt_probs)
@ops.RegisterGradient("STMul")
def st_mul(op, grad):
return [grad, grad]
#PWM Masking and Sampling helper functions
def mask_pwm(inputs) :
pwm, onehot_template, onehot_mask = inputs
return pwm * onehot_mask + onehot_template
def sample_pwm_only(pwm_logits) :
n_sequences = K.shape(pwm_logits)[0]
seq_length = K.shape(pwm_logits)[2]
flat_pwm = K.reshape(pwm_logits, (n_sequences * seq_length, 4))
sampled_pwm = st_sampled_softmax(flat_pwm)
return K.reshape(sampled_pwm, (n_sequences, 1, seq_length, 4))
def sample_pwm(pwm_logits) :
n_sequences = K.shape(pwm_logits)[0]
seq_length = K.shape(pwm_logits)[2]
flat_pwm = K.reshape(pwm_logits, (n_sequences * seq_length, 4))
    sampled_pwm = K.switch(K.learning_phase(), st_sampled_softmax(flat_pwm), st_hardmax_softmax(flat_pwm))
return K.reshape(sampled_pwm, (n_sequences, 1, seq_length, 4))
def max_pwm(pwm_logits) :
n_sequences = K.shape(pwm_logits)[0]
seq_length = K.shape(pwm_logits)[2]
flat_pwm = K.reshape(pwm_logits, (n_sequences * seq_length, 4))
    sampled_pwm = st_hardmax_softmax(flat_pwm)
return K.reshape(sampled_pwm, (n_sequences, 1, seq_length, 4))
#Generator helper functions
def initialize_sequence_templates(generator, sequence_templates) :
embedding_templates = []
embedding_masks = []
for k in range(len(sequence_templates)) :
sequence_template = sequence_templates[k]
onehot_template = iso.OneHotEncoder(seq_length=len(sequence_template))(sequence_template).reshape((1, len(sequence_template), 4))
for j in range(len(sequence_template)) :
if sequence_template[j] not in ['N', 'X'] :
nt_ix = np.argmax(onehot_template[0, j, :])
onehot_template[:, j, :] = -4.0
onehot_template[:, j, nt_ix] = 10.0
elif sequence_template[j] == 'X' :
onehot_template[:, j, :] = -1.0
onehot_mask = np.zeros((1, len(sequence_template), 4))
for j in range(len(sequence_template)) :
if sequence_template[j] == 'N' :
onehot_mask[:, j, :] = 1.0
embedding_templates.append(onehot_template.reshape(1, -1))
embedding_masks.append(onehot_mask.reshape(1, -1))
embedding_templates = np.concatenate(embedding_templates, axis=0)
embedding_masks = np.concatenate(embedding_masks, axis=0)
generator.get_layer('template_dense').set_weights([embedding_templates])
generator.get_layer('template_dense').trainable = False
generator.get_layer('mask_dense').set_weights([embedding_masks])
generator.get_layer('mask_dense').trainable = False
#Generator construction function
def build_sampler(batch_size, seq_length, n_classes=1, n_samples=None, validation_sample_mode='max') :
use_samples = True
if n_samples is None :
use_samples = False
n_samples = 1
#Initialize Reshape layer
reshape_layer = Reshape((1, seq_length, 4))
#Initialize template and mask matrices
onehot_template_dense = Embedding(n_classes, seq_length * 4, embeddings_initializer='zeros', name='template_dense')
onehot_mask_dense = Embedding(n_classes, seq_length * 4, embeddings_initializer='ones', name='mask_dense')
#Initialize Templating and Masking Lambda layer
masking_layer = Lambda(mask_pwm, output_shape = (1, seq_length, 4), name='masking_layer')
#Initialize PWM normalization layer
pwm_layer = Softmax(axis=-1, name='pwm')
#Initialize sampling layers
sample_func = sample_pwm
if validation_sample_mode == 'sample' :
sample_func = sample_pwm_only
upsampling_layer = Lambda(lambda x: K.tile(x, [n_samples, 1, 1, 1]), name='upsampling_layer')
sampling_layer = Lambda(sample_func, name='pwm_sampler')
permute_layer = Lambda(lambda x: K.permute_dimensions(K.reshape(x, (n_samples, batch_size, 1, seq_length, 4)), (1, 0, 2, 3, 4)), name='permute_layer')
def _sampler_func(class_input, raw_logits) :
#Get Template and Mask
onehot_template = reshape_layer(onehot_template_dense(class_input))
onehot_mask = reshape_layer(onehot_mask_dense(class_input))
#Add Template and Multiply Mask
pwm_logits = masking_layer([raw_logits, onehot_template, onehot_mask])
#Compute PWM (Nucleotide-wise Softmax)
pwm = pwm_layer(pwm_logits)
sampled_pwm = None
#Optionally tile each PWM to sample from and create sample axis
if use_samples :
pwm_logits_upsampled = upsampling_layer(pwm_logits)
sampled_pwm = sampling_layer(pwm_logits_upsampled)
sampled_pwm = permute_layer(sampled_pwm)
else :
sampled_pwm = sampling_layer(pwm_logits)
return pwm_logits, pwm, sampled_pwm
return _sampler_func
pwm_true = np.array([1., 0., 0., 0.])
p_ons = np.linspace(0.001, 0.999, 1000)
ces = []
for i in range(p_ons.shape[0]) :
p_on = p_ons[i]
    p_off = (1. - p_on) / 3.  # spread the remaining probability mass over the other three nucleotides
pwm_pred = np.array([p_on, p_off, p_off, p_off])
ce = - np.sum(pwm_true * np.log(pwm_pred))
ces.append(ce)
ces = np.array(ces)
f = plt.figure(figsize=(4, 3))
plt.plot(p_ons, ces, color='black', linewidth=2)
plt.scatter([0.25], [- np.sum(pwm_true * np.log(np.array([0.25, 0.25, 0.25, 0.25])))], c='darkblue', s=45)
plt.scatter([0.95], [- np.sum(pwm_true * np.log(np.array([0.95, 0.05/3., 0.05/3., 0.05/3.])))], c='darkorange', s=45)
plt.tight_layout()
plt.show()
def get_pwm_cross_entropy() :
def _pwm_cross_entropy(inputs) :
pwm_true, pwm_pred = inputs
pwm_pred = K.clip(pwm_pred, K.epsilon(), 1. - K.epsilon())
ce = - K.sum(pwm_true[:, 0, :, :] * K.log(pwm_pred[:, 0, :, :]), axis=-1)
return K.sum(ce, axis=-1)
return _pwm_cross_entropy
def min_pred(y_true, y_pred) :
return y_pred
def get_weighted_loss(loss_coeff=1.) :
def _min_pred(y_true, y_pred) :
return loss_coeff * y_pred
return _min_pred
def get_z_sample(z_inputs):
z_mean, z_log_var = z_inputs
batch_size = K.shape(z_mean)[0]
latent_dim = K.int_shape(z_mean)[1]
epsilon = K.random_normal(shape=(batch_size, latent_dim))
return z_mean + K.exp(0.5 * z_log_var) * epsilon
def get_z_kl_loss(anneal_coeff) :
def _z_kl_loss(inputs, anneal_coeff=anneal_coeff) :
z_mean, z_log_var = inputs
kl_loss = 1 + z_log_var - K.square(z_mean) - K.exp(z_log_var)
kl_loss = K.sum(kl_loss, axis=-1)
kl_loss *= -0.5
return anneal_coeff * kl_loss
return _z_kl_loss
#Simple Library
sequence_templates = [
'GGCGGCATGGACGAGCTGTACAAGTCTTGATCCCTACACGACGCTCTTCCGATCTNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNCGCCTAACCCTAAGCAGATTCTTCATGCAATTGTCGGTCAAGCCTTGCCTTGTT'
]
#Initialize Encoder and Decoder networks
batch_size = 32
seq_length = 256
n_samples = None
latent_size = 100
#Load Encoder
encoder = load_encoder_network_4_resblocks(batch_size, seq_length=seq_length, latent_size=latent_size, drop_rate=0.)
#Load Decoder
decoder = load_decoder_resnet(seq_length=seq_length, latent_size=latent_size)
#Load Sampler
sampler = build_sampler(batch_size, seq_length, n_classes=1, n_samples=n_samples, validation_sample_mode='sample')
#Build Encoder Model
encoder_input = Input(shape=(1, seq_length, 4), name='encoder_input')
z_mean, z_log_var = encoder(encoder_input)
z_sampling_layer = Lambda(get_z_sample, output_shape=(latent_size,), name='z_sampler')
z = z_sampling_layer([z_mean, z_log_var])
# instantiate encoder model
encoder_model = Model(encoder_input, [z_mean, z_log_var, z])
encoder_model.compile(
optimizer=keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999),
loss=min_pred
)
#Build Decoder Model
decoder_class = Input(shape=(1,), name='decoder_class')
decoder_input = Input(shape=(latent_size,), name='decoder_input')
pwm_logits, pwm, sampled_pwm = sampler(decoder_class, decoder(decoder_input))
decoder_model = Model([decoder_class, decoder_input], [pwm_logits, pwm, sampled_pwm])
#Initialize Sequence Templates and Masks
initialize_sequence_templates(decoder_model, sequence_templates)
decoder_model.compile(
optimizer=keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999),
loss=min_pred
)
#Build VAE Pipeline
vae_decoder_class = Input(shape=(1,), name='vae_decoder_class')
vae_encoder_input = Input(shape=(1, seq_length, 4), name='vae_encoder_input')
encoded_z_mean, encoded_z_log_var = encoder(vae_encoder_input)
encoded_z = z_sampling_layer([encoded_z_mean, encoded_z_log_var])
decoded_logits, decoded_pwm, decoded_sample = sampler(vae_decoder_class, decoder(encoded_z))
reconstruction_loss = Lambda(get_pwm_cross_entropy(), name='reconstruction')([vae_encoder_input, decoded_pwm])
anneal_coeff = K.variable(0.0)
kl_loss = Lambda(get_z_kl_loss(anneal_coeff), name='kl')([encoded_z_mean, encoded_z_log_var])
vae_model = Model(
[vae_decoder_class, vae_encoder_input],
[reconstruction_loss, kl_loss]#, entropy_loss]
)
#Initialize Sequence Templates and Masks
initialize_sequence_templates(vae_model, sequence_templates)
vae_model.compile(
optimizer=keras.optimizers.Adam(lr=0.0001, beta_1=0.5, beta_2=0.9),
#optimizer=keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999),
loss={
'reconstruction' : get_weighted_loss(loss_coeff=1. * (1./147.)),
'kl' : get_weighted_loss(loss_coeff=0.65 * (1./147.))#0.15#0.05#, #0.000001
#'entropy' : get_weighted_loss(loss_coeff=0.0)
}
)
encoder_model.summary()
decoder_model.summary()
n_epochs = 50
def _anneal_func(val, epoch, n_epochs=n_epochs) :
if epoch <= 0 :
return 0.0
elif epoch <= 2 :
return 0.1
elif epoch <= 4 :
return 0.2
elif epoch <= 6 :
return 0.4
elif epoch <= 8 :
return 0.6
elif epoch <= 10 :
return 0.8
elif epoch > 10 :
return 1.0
return 1.0
class EpochVariableCallback(Callback):
def __init__(self, my_variable, my_func):
self.my_variable = my_variable
self.my_func = my_func
def on_epoch_end(self, epoch, logs={}):
K.set_value(self.my_variable, self.my_func(K.get_value(self.my_variable), epoch))
s_train = np.zeros((x_train.shape[0], 1))
s_test = np.zeros((x_test.shape[0], 1))
dummy_target_train = np.zeros((x_train.shape[0], 1))
dummy_target_test = np.zeros((x_test.shape[0], 1))
model_name = "vae_apa_max_isoform_simple_new_resnet_len_256_50_epochs_medium_high_kl_annealed"
callbacks =[
#EarlyStopping(monitor='val_loss', min_delta=0.002, patience=10, verbose=0, mode='auto'),
ModelCheckpoint("model_checkpoints/" + model_name + "_epoch_{epoch:02d}.hdf5", monitor='val_loss', mode='min', save_weights_only=True),
EpochVariableCallback(anneal_coeff, _anneal_func)
]
# train the autoencoder
train_history = vae_model.fit(
[s_train, x_train],
[dummy_target_train, dummy_target_train],#, dummy_target_train],
shuffle=True,
epochs=n_epochs,
batch_size=batch_size,
validation_data=(
[s_test, x_test],
[dummy_target_test, dummy_target_test]#, dummy_target_test]
),
callbacks=callbacks
)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(4.5 * 2, 4))
n_epochs_actual = len(train_history.history['reconstruction_loss'])
ax1.plot(np.arange(1, n_epochs_actual + 1), train_history.history['reconstruction_loss'], linewidth=3, color='green')
ax1.plot(np.arange(1, n_epochs_actual + 1), train_history.history['val_reconstruction_loss'], linewidth=3, color='orange')
plt.sca(ax1)
plt.xlabel("Epochs", fontsize=14)
plt.ylabel("Reconstruction Loss", fontsize=14)
plt.xlim(1, n_epochs_actual)
plt.xticks([1, n_epochs_actual], [1, n_epochs_actual], fontsize=12)
plt.yticks(fontsize=12)
ax2.plot(np.arange(1, n_epochs_actual + 1), train_history.history['kl_loss'], linewidth=3, color='green')
ax2.plot(np.arange(1, n_epochs_actual + 1), train_history.history['val_kl_loss'], linewidth=3, color='orange')
plt.sca(ax2)
plt.xlabel("Epochs", fontsize=14)
plt.ylabel("KL Divergence", fontsize=14)
plt.xlim(1, n_epochs_actual)
plt.xticks([1, n_epochs_actual], [1, n_epochs_actual], fontsize=12)
plt.yticks(fontsize=12)
plt.tight_layout()
plt.show()
# Save model and weights
save_dir = 'saved_models'
if not os.path.isdir(save_dir):
os.makedirs(save_dir)
model_path = os.path.join(save_dir, model_name + '_encoder.h5')
encoder_model.save(model_path)
print('Saved trained model at %s ' % model_path)
model_path = os.path.join(save_dir, model_name + '_decoder.h5')
decoder_model.save(model_path)
print('Saved trained model at %s ' % model_path)
#Load models
save_dir = 'saved_models'
if not os.path.isdir(save_dir):
os.makedirs(save_dir)
model_path = os.path.join(save_dir, model_name + '_encoder.h5')
encoder_model = load_model(model_path, custom_objects={'st_sampled_softmax':st_sampled_softmax, 'st_hardmax_softmax':st_hardmax_softmax, 'min_pred':min_pred})
model_path = os.path.join(save_dir, model_name + '_decoder.h5')
decoder_model = load_model(model_path, custom_objects={'st_sampled_softmax':st_sampled_softmax, 'st_hardmax_softmax':st_hardmax_softmax, 'min_pred':min_pred})
#Visualize a few fake and real sequence patterns
s_test = np.zeros((x_test.shape[0], 1))
z_mean_test, z_log_var_test, z_test = encoder_model.predict([x_test], batch_size=32, verbose=True)
fake_pwm_test_batch = decoder_model.predict([s_test, z_test], batch_size=32, verbose=True)
for plot_i in range(5) :
print("Test sequence " + str(plot_i) + ":")
plot_gan_logo(x_test[plot_i, 0, :, :], 0, sequence_template=('N' * 256), figsize=(12, 0.55), width_ratios=[1, 7], logo_height=1.0, plot_start=50, plot_end=50+147)
plot_gan_logo(fake_pwm_test_batch[1][plot_i, 0, :, :], 0, sequence_template=('N' * 256), figsize=(12, 0.55), width_ratios=[1, 7], logo_height=1.0, plot_start=50, plot_end=50+147)
#Sample new patterns
z_test_new = np.random.normal(loc=0.0, scale=1.0, size=(32, 100))
fake_pwm_test_batch = decoder_model.predict_on_batch([s_test[:32], z_test_new[:32]])
print("- Fake PWMs (Randomly Generated) -")
for plot_i in range(5) :
plot_gan_logo(fake_pwm_test_batch[1][plot_i, 0, :, :], 0, sequence_template=('N' * 256), figsize=(12, 0.55), width_ratios=[1, 7], logo_height=1.0, plot_start=50, plot_end=50+147)
```
|
github_jupyter
|
# Question formulation Notebook
This notebook builds a toy sample of the dataset and runs an initial exploration of it, using the dependencies set up below.
### Notebook Set-up:
```
import re
import ast
import math
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
from os import path
from pyspark.sql import functions as F
from pyspark.sql.types import IntegerType
warnings.filterwarnings('ignore')
# warnings.resetwarnings()
PWD = !pwd
PWD = PWD[0]
from pyspark.sql import SparkSession
app_name = "w261-FinalProject"
master = "local[*]"
spark = SparkSession\
.builder\
.appName(app_name)\
.master(master)\
.getOrCreate()
sc = spark.sparkContext
# read the RDDs to see the form:
trainRDD = sc.textFile("gs://w261_final-project_team13/train.txt")
testRDD = sc.textFile("gs://w261_final-project_team13/test.txt")
trainRDD.take(2)
testRDD.take(2)
```
We see that both are tab-separated files, so we want to sample them into files small enough for single-node computation and bring those samples back to the local machines. For that we need to know how many observations we have:
```
print('Train dataset count:', trainRDD.count(), 'observations.')
print('Test dataset count:', testRDD.count(), 'observations.')
```
Based on that we will take 0.3% of the dataset as a sample, which will be roughly $45{,}840{,}617 \cdot 0.003 \approx 137{,}522$ observations and $10.38E3 \cdot 0.003 \approx 31$ MB, perfectly manageable on a single-node machine and still relevant. The same sample ratio will be kept for the test dataset.
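As a quick sanity check of that arithmetic (a minimal sketch only; the row count is the one reported by `count()` above and the ratio is the same 0.3% used in the sampling call below):

```
# Rough sanity check of the expected toy-sample size
train_rows = 45840617          # trainRDD.count() reported above
sample_ratio = 0.003           # 0.3%, the same ratio passed to .sample() below
expected_rows = int(train_rows * sample_ratio)
print('Expected toy train rows: ~{}'.format(expected_rows))   # ~137,521
```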
Another point is that the text files do not have headers, and since we want to work with dataframes, we need to create a schema. To do that we can take a look at the `readme.txt` file supplied with the data:
```
====================================================
Format:
The columns are tab separeted with the following schema:
<label> <integer feature 1> ... <integer feature 13> <categorical feature 1> ... <categorical feature 26>
When a value is missing, the field is just empty.
There is no label field in the test set.
====================================================
```
Additionally we need to parse the data, going from lines of text to integers and strings. For that we can map the RDD after sampling, converting each field to the desired type:
```
labelsTrain = ['label','I1','I2','I3','I4','I5','I6','I7','I8','I9','I10','I11','I12','I13',
'C1','C2','C3','C4','C5','C6','C7','C8','C9','C10','C11','C12','C13','C14',
'C15','C16','C17','C18','C19','C20','C21','C22','C23','C24','C25','C26']
labelsTest = ['I1','I2','I3','I4','I5','I6','I7','I8','I9','I10','I11','I12','I13',
'C1','C2','C3','C4','C5','C6','C7','C8','C9','C10','C11','C12','C13','C14',
'C15','C16','C17','C18','C19','C20','C21','C22','C23','C24','C25','C26']
toyTrainDF = trainRDD.sample(False, 0.003, 2019).map(lambda line: line.split('\t')).toDF(labelsTrain)
toyTestDF = testRDD.sample(False, 0.003, 2019).map(lambda line: line.split('\t')).toDF(labelsTest)
# verifying the count:
print('Toy train dataframe count:', toyTrainDF.count())
print('Toy test dataframe count:', toyTestDF.count())
# Now writing out toy datasets to be able to work on local machines
toyTrainDF.write.parquet("gs://w261_final-project_team13/toy_train.txt")
toyTestDF.write.parquet("gs://w261_final-project_team13/toy_test.txt")
```
### Now running on the local machine:
```
# copy the files to the local machine:
!gsutil -m cp gs://w261_final-project_team13/toy_test.txt/* ./data/toy_test.txt/
!gsutil -m cp gs://w261_final-project_team13/toy_train.txt/* ./data/toy_train.txt/
!gsutil cp gs://w261_final-project_team13/notebooks/* ./QuestionFormulation.ipynb
# read the parquet files and print the first observations of each:
toyTrainDF = spark.read.parquet("./data/toy_train.txt")
toyTestDF = spark.read.parquet("./data/toy_test.txt")
toyTrainDF.head()
toyTestDF.head()
```
We see that all features are strings. We want to cast the `label` feature to _Boolean_ and the `I1` to `I13` features to _numbers_ (cast to float so that missing values can be represented as `NaN`):
```
toyTrainDF = toyTrainDF.withColumn('label', toyTrainDF.label.cast('Boolean'))
intColumns = ['I1','I2','I3','I4','I5','I6','I7','I8','I9','I10','I11','I12','I13']
# convert to number. Cast to float to be able to use NaN:
for column in intColumns:
toyTrainDF = toyTrainDF.withColumn(column, F.when(toyTrainDF[column] != "", toyTrainDF[column].cast('float')).otherwise(float('NaN')))
toyTestDF = toyTestDF.withColumn(column, F.when(toyTestDF[column] != "", toyTestDF[column].cast('float')).otherwise(float('NaN')))
strColumns = ['C1','C2','C3','C4','C5','C6','C7','C8','C9','C10','C11','C12','C13','C14',
'C15','C16','C17','C18','C19','C20','C21','C22','C23','C24','C25','C26']
for column in strColumns:
toyTrainDF = toyTrainDF.withColumn(column, F.when(toyTrainDF[column] != "", toyTrainDF[column]).otherwise(None))
toyTestDF = toyTestDF.withColumn(column, F.when(toyTestDF[column] != "", toyTestDF[column]).otherwise(None))
toyTrainDF.head()
toyTestDF.head()
```
Now, we can start analyzing the data to understand which features are more likely to contribute to a model and which are more likely to bias it (due to a high number of missing values or a need for normalization, for example).
## EDA
As we are not sure what any of the variables means, we need to take a look at each and every one, trying to understand whether it is valuable for predicting the click-through rate. We can compute the overall rate for the whole dataset as a baseline, and later see which features perform better than that baseline and which perform worse:
```
# check the overall rate:
totalCount = toyTrainDF.count()
truePerc = toyTrainDF.filter(toyTrainDF.label == True).count()/totalCount
print('Overall click-through rate: {0:3.2f}%'.format(truePerc*100))
# count missing values in the label feature:
noneCount = toyTrainDF.filter(toyTrainDF.label.isNull()).count()
print('Missing label count: {0:d}. This represents {1:3.2f}% of the dataset.'.format(noneCount, noneCount/totalCount*100))
```
As we have 39 features, it is a good idea to verify each one for its effectiveness, checking value counts, distributions, etc. For the numeric variables we have the describe function, but we also want to verify the `NaN` count and the counts of the most frequent values:
```
def descNumEDA(dataframe, column, totalCount=0, nanThreshold=0.5):
"""
    Function that prints an analysis of a numeric column from the given dataframe.
Args:
dataframe - input dataframe
column - string with a dataframe column.
totalCount - optional. Number of rows in the dataframe (if defined avoid recalculation).
nanThreshold - optional. Percentage allowed of NaN values in the column.
Returns:
        Column - name of the column if its NaN ratio exceeds nanThreshold, otherwise None.
Output:
NaN Count - number for NaN values in % of the row count of the dataframe.
Mean - mean of the valid entries
StdDev - standard deviation of the valid entries
Max - maximum value of the valid entries
        3rd Quartile - 75% quartile of the valid entries
Median - 50% quartile of the valid entries
1st Quartile - 25% of the valid entries
Min - minimum value of the valid entries
Most Freq - number of values for the 5 most frequent values
"""
if totalCount == 0:
totalCount = dataframe.count()
pandCol = dataframe.select(column).toPandas()[column]
freqNumbers = dict(pandCol.value_counts(normalize=True).head(5))
nanCount = dataframe.filter(F.isnan(dataframe[column])).count()
validCount = totalCount - nanCount
print('+'+13*'-'+'+'+30*'-'+'+')
print('|Feature: {:^4}'.format(column)+'|{:>22}{:>6.2f}% |'.format('Null Count: ', nanCount/totalCount*100))
print('+'+13*'-'+'+'+30*'-'+'+')
print('|{:>12} |{:>29.2f} |'.format('Mean', pandCol.mean()))
print('|{:>12} |{:>29.2f} |'.format('StdDev', pandCol.std()))
print('|{:>12} |{:>29.2f} |'.format('Maximum', pandCol.max()))
print('|{:>12} |{:>29.2f} |'.format('3rd Quartile', pandCol.quantile(q=0.75)))
print('|{:>12} |{:>29.2f} |'.format('Median', pandCol.quantile(q=0.5)))
print('|{:>12} |{:>29.2f} |'.format('1st Quartile', pandCol.quantile(q=0.25)))
print('|{:>12} |{:>29.2f} |'.format('Minimum', pandCol.min()))
print('|{:>12} |{:>29.2f} |'.format('Unique Val.', pandCol.nunique()))
print('+'+13*'-'+'+'+30*'-'+'+')
print('| Most Freq.: {:>30} |'.format(str(list(freqNumbers.keys()))))
print('+'+13*'-'+'+'+30*'-'+'+')
for item in freqNumbers:
print('|{:>12} |{:>28.2f}% |'.format(item, freqNumbers[item]*100))
print('+'+13*'-'+'+'+30*'-'+'+\n')
    if nanCount/totalCount*100 > nanThreshold*100:
return column
else:
return None
badFeatures = []
nanThreshold = 0.4
for item in intColumns:
    badFeatures.append(descNumEDA(toyTrainDF, item, totalCount, nanThreshold))
badFeatures = list(filter(None, badFeatures))
print('List of features with more than {:4.2f}% NaN ratio: {}'.format(nanThreshold*100, badFeatures))
```
Overall, we see that most features are right-skewed, with most of their values near zero. Additionally, we were able to identify features with a lot of `NaN` values. This analysis will be useful later when deciding which algorithm we want to use. Another consideration when picking algorithms is multicollinearity; the scatterplot matrix further below checks for it visually, and a quick numeric check is sketched right after this paragraph.
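As a quick numeric complement to that scatterplot matrix (a sketch only; it assumes the `toyTrainDF` and `intColumns` objects defined above and uses an arbitrary 0.8 threshold):

```
# Quick multicollinearity check: pairwise Spearman correlations of the integer features
num_pdf = toyTrainDF.select(intColumns).toPandas()
spearman_corr = num_pdf.corr(method='spearman')
print(spearman_corr.round(2))
# Flag pairs with |rho| > 0.8 (arbitrary threshold) as candidates to drop or combine
strong_pairs = [(a, b) for a in spearman_corr.columns for b in spearman_corr.columns
                if a < b and abs(spearman_corr.loc[a, b]) > 0.8]
print('Strongly correlated pairs:', strong_pairs)
```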
Now, doing a similar analysis for the categorical features, we have:
```
def descCatEDA(dataframe, column, totalCount=0, nanThreshold=0.5):
"""
    Function that prints an analysis of a categorical column from the given dataframe.
Args:
dataframe - input dataframe
column - string with a dataframe column.
totalCount - optional. Number of rows in the dataframe (if defined avoid recalculation).
nanThreshold - optional. Percentage allowed of NaN values in the column.
Returns:
        Column - name of the column if its NaN ratio exceeds nanThreshold, otherwise None.
Output:
NaN Count - number for NaN values in % of the row count of the dataframe.
Most Freq - number of values for the 5 most frequent values (discarding NaN).
"""
if totalCount == 0:
totalCount = dataframe.count()
pandCol = dataframe.select(column).toPandas()[column]
freqNumbers = dict(pandCol.value_counts(normalize=True).head(5))
nanCount = dataframe.filter(dataframe[column].isNull()).count()
validCount = totalCount - nanCount
print('+'+13*'-'+'+'+22*'-'+'+')
print('|Feature: {:^4}'.format(column)+'|{:>14}{:>6.2f}% |'.format('Null Count: ', nanCount/totalCount*100))
print('+'+13*'-'+'+'+22*'-'+'+')
print('| Unique Values: {:>19} |'.format(pandCol.nunique()))
print('+'+13*'-'+'+'+22*'-'+'+')
for item in freqNumbers:
print('|{:>12} |{:>20.2f}% |'.format(item, freqNumbers[item]*100))
print('+'+13*'-'+'+'+22*'-'+'+\n')
    if nanCount/totalCount*100 > nanThreshold*100:
return column
else:
return None
for item in strColumns:
    badFeatures.append(descCatEDA(toyTrainDF, item, totalCount, nanThreshold))
badFeatures = list(filter(None, badFeatures))
print('List of features with more than {:4.2f}% NaN ratio: {}'.format(nanThreshold*100, badFeatures))
```
Now let's check for correlations between the integer variables, as multicollinearity can be harmful to some algorithms, like logistic regression. We will take a sample of roughly 1,000 observations and normalize the columns to improve visualization.
```
# I'll sample again to improve visualization:
def corrdot(*args, **kwargs):
"""
Helper function to plot correlation indexes in the upper side of the scatterplot matrix.
Reference: https://github.com/mwaskom/seaborn/issues/1444
"""
corr_r = args[0].corr(args[1], 'spearman')
# Axes
ax = plt.gca()
ax.axis('off')
x_min, x_max = ax.get_xlim()
x_centroid = x_min + (x_max - x_min) / 2
y_min, y_max = ax.get_ylim()
y_true = y_max - (y_max - y_min) / 4
y_false = y_min + (y_max - y_min) / 4
# Plot args
if kwargs['label'] == True:
marker_size = abs(corr_r) * 5000
ax.scatter(x_centroid, y_true, marker='o', s=marker_size, alpha=0.6)
corr_text = str(round(corr_r, 2)).replace('-0', '-').lstrip('0')
ax.annotate(corr_text, [x_centroid, y_true,], ha='center', va='center', fontsize=20)
else:
marker_size = abs(corr_r) * 5000
ax.scatter(x_centroid, y_false, marker='o', s=marker_size, alpha=0.6)
corr_text = str(round(corr_r, 2)).replace('-0', '-').lstrip('0')
ax.annotate(corr_text, [x_centroid, y_false,], ha='center', va='center', fontsize=20)
# name of the integer columns for normalization and pairgrid:
int_columns_list = ['I1','I2','I3','I4','I5','I6','I7','I8','I9','I10','I11','I12','I13']
# normalizing the DF, keeping the label column, using (obs - mean)/stddev:
vizDF = toyTrainDF.sample(False, 0.05, 2019).toPandas().iloc[:,0:14]
vizDF_norm = pd.DataFrame(vizDF[int_columns_list].values, columns=int_columns_list, index=vizDF.index)
vizDF_norm = (vizDF_norm - vizDF_norm.mean())/vizDF_norm.std()
vizDF[int_columns_list] = vizDF_norm
# plotting the results split by label:
sns.set_context('notebook', font_scale=1.3)
g = sns.PairGrid(vizDF, vars=int_columns_list, hue='label')
g.map_lower(sns.scatterplot, alpha=0.3)
g.map_diag(sns.distplot, hist=False)
g.map_upper(corrdot)
g.add_legend()
```
Note that most variables are not correlated equally for the False and True labels. This will be important when deciding whether to drop or encode features. Another important point is the presence of outliers, with observations up to 40 standard deviations from the mean (as seen with `I7`). These observations will be taken into consideration when choosing the learning algorithm to train our model; a quick way to quantify the outliers is sketched below.
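One rough way to do that (a sketch only; it reuses the normalized `vizDF` sample built above and an arbitrary |z| > 5 cut-off):

```
# Count observations beyond 5 standard deviations per normalized integer column
outlier_counts = (vizDF[int_columns_list].abs() > 5).sum()
print(outlier_counts.sort_values(ascending=False))
```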
Finally, after analyzing the features of our dataset, we can check whether some of our observations contain bad data. As no information is provided about the features, our only possible approach is to count the missing values within each observation and see their influence on the click prediction:
```
# checking the average NaN count in each row:
toyTrainPandasDF = toyTrainDF.toPandas()
countNullDF = toyTrainPandasDF.isnull().sum(axis=1).rename('CountNaN')
countNullDF.describe()
# concatenate with the label and compare the average NaN count per label
countNullDF = pd.concat([toyTrainPandasDF['label'], countNullDF], axis=1)
countNullDF.groupby('label').mean()
```
We see that the average count of missing values is slightly higher for non-clicks, but the difference is small, so the absence of values alone is not a good indicator of CTR; a slightly finer-grained check is sketched below.
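A sketch of that check (it reuses the `countNullDF` dataframe built above; the bin edges are arbitrary):

```
# Click rate per bin of missing-value count
countNullDF['NaN_bin'] = pd.cut(countNullDF['CountNaN'], bins=[-1, 0, 5, 10, 40])
print(pd.crosstab(countNullDF['NaN_bin'], countNullDF['label'], normalize='index'))
```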
With the information gathered here, we can now move to algorithm selection.
```
# Import the dependencies
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from citipy import citipy
import requests
from config import weather_api_key
import sys
from datetime import datetime
# Create a set of random latitude and longitude combinations.
lats = np.random.uniform(low = -90.000, high= 90.000, size = 1500)
lngs = np.random.uniform(low = -180.000, high= 180.000, size = 1500)
lat_lngs = zip(lats,lngs)
lat_lngs
#Add the latitudes and longitudes to a list.
coordinates = list(lat_lngs)
#Create a list for holding the cities.
cities = []
#Identify the nearest city for each latitude and longitude combination.
for coordinate in coordinates:
city=citipy.nearest_city(coordinate[0],coordinate[1]).city_name
#If the city name is unique, then we will add it to the cities list.
if city not in cities:
cities.append(city)
#Print the city count to confirm sufficient count.
len(cities)
cities
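# Base URL for the OpenWeatherMap current weather endpoint; imperial units return temperatures in Fahrenheit and wind speeds in mph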
url=f"http://api.openweathermap.org/data/2.5/weather?units=Imperial&APPID={weather_api_key}"
# Create an empty list to hold the weather data.
city_data = []
#Print the beginning of the logging.
print("Beginning Data Retrieval ")
print("-----------------------------")
# Create counters.
record_count = 1
set_count = 1
# Loop through all the cities in the list.
for i, city in enumerate(cities):
# Group cities in sets of 50 for logging purposes.
if (i % 50 == 0 and i >= 50):
set_count += 1
record_count = 1
# Create endpoint URL with each city.
city_url = url + "&q=" + city
#city_url = f'{url}&q={city.replace(" ","+")}'
#Log the URL, record, and set numbers and the city.
print(f"Processing Record {record_count} of Set {set_count} | {city}")
# Add 1 to the record count.
record_count += 1
# Run an API request for each of the cities.
try:
# Parse the JSON and retrieve data.
city_weather = requests.get(city_url).json()
# Parse out the needed data.
city_lat = city_weather["coord"]["lat"]
city_lng = city_weather["coord"]["lon"]
city_max_temp = city_weather["main"]["temp_max"]
city_humidity = city_weather["main"]["humidity"]
city_clouds = city_weather["clouds"]["all"]
city_wind = city_weather["wind"]["speed"]
city_country = city_weather["sys"]["country"]
# Convert the date to ISO standard.
city_date = datetime.utcfromtimestamp(city_weather["dt"]).strftime('%Y-%m-%d %H:%M:%S')
# Append the city information into city_data list.
city_data.append({"City": city.title(),
"Lat": city_lat,
"Lng": city_lng,
"Max Temp": city_max_temp,
"Humidity": city_humidity,
"Cloudiness": city_clouds,
"Wind Speed": city_wind,
"Country": city_country,
"Date": city_date})
# If an error is experienced, skip the city.
except:
print("City not found. Skipping...")
pass
# Indicate that Data Loading is complete.
print("-----------------------------")
print("Data Retrieval Complete ")
print("-----------------------------")
len(city_data)
#Convert the array of dictionaries to a Pandas DataFrame.
city_data_df = pd.DataFrame(city_data)
city_data_df.head(10)
new_column_order=["City","Country","Date","Lat","Lng","Max Temp","Humidity","Cloudiness","Wind Speed"]
city_data_df = city_data_df[new_column_order]
city_data_df.head(10)
#Create an output file (CSV).
output_data_file = "weather_data/cities.csv"
#Export the City_Data into a CSV.
city_data_df.to_csv(output_data_file, index_label="City_ID")
#Extract the relevant fields from the DataFrame for plotting.
lats = city_data_df["Lat"]
max_temps = city_data_df["Max Temp"]
humidity = city_data_df["Humidity"]
cloudiness = city_data_df["Cloudiness"]
wind_speed = city_data_df["Wind Speed"]
import time
#Build the scatter plot for latitude vs max temperature.
plt.scatter(lats,
max_temps,
edgecolor="black",linewidths=1,marker="o",
alpha=0.8,label="Cities")
#Incorporate the other graph qualities.
plt.title(f"City Latitude vs. Max Temperature "+time.strftime("%x"))
plt.ylabel("Max Temperature (F)")
plt.xlabel("Latitude")
plt.grid(True)
#Save the figure.
plt.savefig("weather_data/Fig1.png")
#Show plot.
plt.show()
#Build the scatter plot for latitude vs humidity.
plt.scatter(lats,
humidity,
edgecolor="black",linewidths=1,marker="o",
alpha=0.8,label="Cities")
#Incorporate the other graph properties.
plt.title(f"City Latitude vs. Humidity "+time.strftime("%x"))
plt.ylabel("Humidity (%)")
plt.xlabel("Latitude")
plt.grid(True)
#Save the figure
plt.savefig("weather_data/Fig2.png")
#Show the plot
plt.show()
#Build the scatter plots for latitude vs. cloudiness.
plt.scatter(lats,
cloudiness,
edgecolor="black",linewidths=1,marker="o",
alpha=0.8,label="Cities")
#Incorporate the other graph properties.
plt.title(f"City Latitude vs. Cloudiness (%) "+time.strftime("%x"))
plt.ylabel("Cloudiness (%)")
plt.xlabel("Latitude")
plt.grid(True)
#Save the figure
plt.savefig("weather_data/Fig3.png")
#Show plot
plt.show()
#Build the scatter plots for latitude vs. wind speed.
plt.scatter(lats,
wind_speed,
edgecolor="black",linewidths=1,marker="o",
alpha=0.8,label="Cities")
#Incorporate the other graph properties.
plt.title(f"City Latitude vs. Wind Speed "+time.strftime("%x"))
plt.ylabel("Wind Speed (mph)")
plt.xlabel("Latitude")
plt.grid(True)
#Save the figure
plt.savefig("weather_data/Fig4.png")
#Show the plot
plt.show()
from scipy.stats import linregress
# Create a function to create perform linear regression on the weather data
# and plot a regression line and the equation with the data.
def plot_linear_regression(x_values, y_values, title, y_label, text_coordinates):
# Run regression on hemisphere weather data.
(slope, intercept, r_value, p_value, std_err) = linregress(x_values, y_values)
# Calculate the regression line "y values" from the slope and intercept.
regress_values = x_values * slope + intercept
# Get the equation of the line.
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
# Create a scatter plot and plot the regression line.
plt.scatter(x_values,y_values)
plt.plot(x_values,regress_values,"r")
# Annotate the text for the line equation.
plt.annotate(line_eq, text_coordinates, fontsize=15, color="red")
plt.xlabel('Latitude')
plt.ylabel(y_label)
plt.title(title)
plt.show()
index13 = city_data_df.loc[13]
index13
city_data_df["Lat"] >= 0
city_data_df.loc[(city_data_df["Lat"] >= 0)]
#Create Northern and Southern Hemisphere DataFrames.
northern_hemi_df = city_data_df.loc[(city_data_df["Lat"] >= 0)]
southern_hemi_df = city_data_df.loc[(city_data_df["Lat"] < 0)]
# Linear regression on the Northern Hemisphere
x_values = northern_hemi_df["Lat"]
y_values = northern_hemi_df["Max Temp"]
# Call the function.
plot_linear_regression(x_values, y_values,'Linear Regression on the Northern Hemisphere \n for Maximum Temperature', 'Max Temp',(10,40))
# Linear regression on the Southern Hemisphere
x_values = southern_hemi_df["Lat"]
y_values = southern_hemi_df["Max Temp"]
# Call the function.
plot_linear_regression(x_values, y_values,'Linear Regression on the Southern Hemisphere \n for Maximum Temperature', 'Max Temp',(-50,80))
#Create the regression on the Northern Hemisphere
x_values = northern_hemi_df["Lat"]
y_values = northern_hemi_df["Humidity"]
#Call the function.
plot_linear_regression(x_values, y_values, 'Linear Regression on the Northern Hemisphere \n for % Humidity', '%Humidity',(40,10))
#Create the regression on the Southern Hemisphere
x_values = southern_hemi_df["Lat"]
y_values = southern_hemi_df["Humidity"]
#Call the function.
plot_linear_regression(x_values, y_values, 'Linear Regression on the Southern Hemisphere \n for % Humidity', '%Humidity',(-50,15))
city_data_df.head()
#Create the regression on the Northern Hemisphere (Cloudiness)
x_values = northern_hemi_df["Lat"]
y_values = northern_hemi_df["Cloudiness"]
#Call the function.
plot_linear_regression(x_values, y_values, 'Linear Regression on the Northern Hemisphere \n for % Cloudiness', '% Cloudiness',(5,45))
#Create the regression on the Southern Hemisphere (Cloudiness)
x_values = southern_hemi_df["Lat"]
y_values = southern_hemi_df["Cloudiness"]
#Call the function.
plot_linear_regression(x_values, y_values, 'Linear Regression on the Southern Hemisphere \n for % Cloudiness', '% Cloudiness',(-50,50))
#Create the regression on the Northern Hemisphere (Wind)
x_values = northern_hemi_df["Lat"]
y_values = northern_hemi_df["Wind Speed"]
#Call the function.
plot_linear_regression(x_values, y_values, 'Linear Regression on the Northern Hemisphere \n for Wind Speed', 'Wind Speed',(40,35))
#Create the regression on the Southern Hemisphere (Wind)
x_values = southern_hemi_df["Lat"]
y_values = southern_hemi_df["Wind Speed"]
#Call the function.
plot_linear_regression(x_values, y_values, 'Linear Regression on the Southern Hemisphere \n for Wind Speed', 'Wind Speed',(-50,20))
```
```
!pip install -U kaggle-cli
!kg download -u <username> -p <password> -c 'plant-seedlings-classification' -f 'test.zip'
!kg download -u <username> -p <password> -c 'plant-seedlings-classification' -f 'train.zip'
!unzip test.zip -d data
!unzip train.zip -d data
import os
print(os.listdir('data/train/'))
import fnmatch
import os
import numpy as np
import pandas as pd
from keras.applications.vgg16 import preprocess_input
from keras.preprocessing import image
np.random.seed(21)
path = 'data/train/'
train_label = []
train_img = []
label2num = {'Loose Silky-bent':0, 'Charlock':1, 'Sugar beet':2, 'Small-flowered Cranesbill':3,
'Common Chickweed':4, 'Common wheat':5, 'Maize':6, 'Cleavers':7, 'Scentless Mayweed':8,
'Fat Hen':9, 'Black-grass':10, 'Shepherds Purse':11}
for i in os.listdir(path):
label_number = label2num[i]
new_path = path+i+'/'
for j in fnmatch.filter(os.listdir(new_path), '*.png'):
temp_img = image.load_img(new_path+j, target_size=(128,128))
train_label.append(label_number)
temp_img = image.img_to_array(temp_img)
train_img.append(temp_img)
train_img = np.array(train_img)
train_y=pd.get_dummies(train_label)
train_y = np.array(train_y)
train_img=preprocess_input(train_img)
print('Training data shape: ', train_img.shape)
print('Training labels shape: ', train_y.shape)
import keras
from keras.models import Sequential,Model
from keras.layers import Dense, Dropout, Flatten, Activation
from keras.layers import Conv2D, MaxPooling2D
from keras.layers.normalization import BatchNormalization
from keras.applications.vgg16 import VGG16
def vgg16_model(num_classes=None):
model = VGG16(weights='imagenet', include_top=False,input_shape=(128,128,3))
model.layers.pop()
model.layers.pop()
model.layers.pop()
model.outputs = [model.layers[-1].output]
model.layers[-2].outbound_nodes= []
x=Conv2D(256, kernel_size=(2,2),strides=2)(model.output)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x=Conv2D(128, kernel_size=(2,2),strides=1)(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x=Flatten()(x)
x=Dense(num_classes, activation='softmax')(x)
model=Model(model.input,x)
for layer in model.layers[:15]:
layer.trainable = False
return model
def precision(y_true, y_pred):
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
precision = true_positives / (predicted_positives + K.epsilon())
return precision
def recall(y_true, y_pred):
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
recall = true_positives / (possible_positives + K.epsilon())
return recall
def fscore(y_true, y_pred):
if K.sum(K.round(K.clip(y_true, 0, 1))) == 0:
return 0
p = precision(y_true, y_pred)
r = recall(y_true, y_pred)
f_score = 2 * (p * r) / (p + r + K.epsilon())
return f_score
from keras import backend as K
num_classes=12
model = vgg16_model(num_classes)
model.compile(optimizer="adam", loss='categorical_crossentropy', metrics=['accuracy',fscore])
model.summary()
#Split training data into train set and validation set
from sklearn.model_selection import train_test_split
X_train, X_valid, Y_train, Y_valid=train_test_split(train_img,train_y,test_size=0.1, random_state=42)
#Data augmentation
'''from keras.preprocessing.image import ImageDataGenerator
gen_train = ImageDataGenerator(
rotation_range=30,
width_shift_range=0.2,
height_shift_range=0.2,
horizontal_flip=True,
vertical_flip=True
)
gen_train.fit(X_train)
#Train model
from keras.callbacks import ModelCheckpoint
epochs = 10
batch_size = 32
model_checkpoint = ModelCheckpoint('weights.h5', monitor='val_loss', save_best_only=True)
model.fit_generator(gen_train.flow(X_train, Y_train, batch_size=batch_size, shuffle=True),
steps_per_epoch=(X_train.shape[0]//(4*batch_size)),
epochs=epochs,
validation_data=(X_valid,Y_valid),
callbacks=[model_checkpoint],verbose=1)
'''
from keras.callbacks import ModelCheckpoint
epochs = 10
batch_size = 32
model_checkpoint = ModelCheckpoint('weights.h5', monitor='val_loss', save_best_only=True)
model.fit(X_train,Y_train,
batch_size=128,
epochs=20,
verbose=1, shuffle=True, validation_data=(X_valid,Y_valid), callbacks=[model_checkpoint])
import matplotlib.pyplot as plt
def plot_model(model):
plots = [i for i in model.history.history.keys() if i.find('val_') == -1]
plt.figure(figsize=(10,10))
for i, p in enumerate(plots):
plt.subplot(len(plots), 2, i + 1)
plt.title(p)
plt.plot(model.history.history[p], label=p)
plt.plot(model.history.history['val_'+p], label='val_'+p)
plt.legend()
plt.show()
plot_model(model)
model.load_weights('weights.h5')
prob=[]
num=[]
test_img=[]
test_path = 'data/test/'
test_all = fnmatch.filter(os.listdir(test_path), '*.png')
test_img=[]
for i in range(len(test_all)):
path=test_path+'/'+test_all[i]
temp_img=image.load_img(path,target_size=(128,128))
temp_img=image.img_to_array(temp_img)
test_img.append(temp_img)
test_img=np.array(test_img)
test_img=preprocess_input(test_img)
test_labels=[]
pred=model.predict(test_img)
num2label = {0:'Loose Silky-bent', 1:'Charlock',2: 'Sugar beet',3: 'Small-flowered Cranesbill',
4:'Common Chickweed',5: 'Common wheat',6: 'Maize', 7:'Cleavers', 8:'Scentless Mayweed',
9: 'Fat Hen', 10:'Black-grass', 11:'Shepherds Purse'}
for i in range(len(test_all)):
max_score =0
lab=-1
for j in range(12):
if pred[i][j]>max_score:
max_score=pred[i][j]
lab=j
test_labels.append(num2label[lab])
d = {'file': test_all, 'species': test_labels}
df = pd.DataFrame(data=d)
print(df.head(5))
#Convert dataframe to csv
df.to_csv("/output/submit.csv",index=False)
#Submit the csv
print('Submitting csv')
!kg submit submit.csv -u <username> -p <password> -c plant-seedlings-classification
```
## Description:
As a data scientist working for an investment firm, you will extract the revenue data for Tesla and GameStop and build a dashboard to compare the price of the stock vs the revenue.
## Tasks:
- Question 1 - Extracting Tesla Stock Data Using yfinance - 2 Points
- Question 2 - Extracting Tesla Revenue Data Using Webscraping - 1 Points
- Question 3 - Extracting GameStop Stock Data Using yfinance - 2 Points
- Question 4 - Extracting GameStop Revenue Data Using Webscraping - 1 Points
- Question 5 - Tesla Stock and Revenue Dashboard - 2 Points
- Question 6 - GameStop Stock and Revenue Dashboard- 2 Points
- Question 7 - Sharing your Assignment Notebook - 2 Points
```
# installing dependencies
!pip install yfinance pandas bs4
# importing modules
import yfinance as yf
import pandas as pd
import requests
from bs4 import BeautifulSoup
# needed to cast datetime values
from datetime import datetime
```
### Question 1 - Extracting Tesla Stock Data Using yfinance
```
# making Ticker object
tesla_ticker = yf.Ticker("TSLA")
# creating a dataframe with history values
tesla_data = tesla_ticker.history(period="max")
# now we need to reset dataframe index and show first values
tesla_data.reset_index(inplace=True)
tesla_data.head()
```
### Question 2 - Extracting Tesla Revenue Data Using Webscraping
```
# getting TSLA revenue data from https://www.macrotrends.net/stocks/charts/TSLA/tesla/revenue
tesla_revenue_url="https://www.macrotrends.net/stocks/charts/TSLA/tesla/revenue"
html_data=requests.get(tesla_revenue_url).text
# parsing html data
soup = BeautifulSoup(html_data,"html.parser")
# now we need to create a dataframe with columns "date" and "revenue", using data scraped from soup
tesla_revenue_table = soup.find("table", class_="historical_data_table")
# now we will create an empty dataframe
tesla_revenue_data = pd.DataFrame(columns=["Date", "Revenue"])
# now we will loop through table and populate the dataframe
for row in tesla_revenue_table.tbody.find_all("tr"):
col = row.find_all("td")
if (col != []):
row_date = col[0].text
row_date = datetime.strptime(row_date, '%Y')
row_revenue = col[1].text
# we need to strip "," and "$" chars from revenue value
row_revenue = row_revenue.replace(",","").replace("$","")
row_revenue = int(row_revenue)
# printing var types
# print(type(row_date), type(row_revenue) )
tesla_revenue_data = tesla_revenue_data.append(
{
'Date': row_date,
'Revenue': row_revenue
}, ignore_index=True)
tesla_revenue_data.head()
```
### Question 3 - Extracting GameStop Stock Data Using yfinance
```
# making Ticker object
gme_ticker = yf.Ticker("GME")
# creating a dataframe with history values
gme_data = gme_ticker.history(period="max")
# now we need to reset dataframe index and show first values
gme_data.reset_index(inplace=True)
gme_data.head()
```
### Question 4 - Extracting GameStop Revenue Data Using Webscraping
```
# getting GME revenue data from https://www.macrotrends.net/stocks/charts/GME/gamestop/revenue
gme_revenue_url="https://www.macrotrends.net/stocks/charts/GME/gamestop/revenue"
html_data=requests.get(gme_revenue_url).text
# parsing html data
soup = BeautifulSoup(html_data,"html.parser")
# now we need to create a dataframe with columns "date" and "revenue", using data scraped from soup
gme_revenue_table = soup.find("table", class_="historical_data_table")
# now we will create an empty dataframe
gme_revenue_data = pd.DataFrame(columns=["Date", "Revenue"])
# now we will loop through table and populate the dataframe
for row in gme_revenue_table.tbody.find_all("tr"):
col = row.find_all("td")
if (col != []):
row_date = col[0].text
row_date = datetime.strptime(row_date, '%Y')
row_revenue = col[1].text
# we need to strip "," and "$" chars from revenue value
row_revenue = row_revenue.replace(",","").replace("$","")
row_revenue = int(row_revenue)
# printing var types
# print(type(row_date), type(row_revenue) )
gme_revenue_data = gme_revenue_data.append(
{
'Date': row_date,
'Revenue': row_revenue
}, ignore_index=True)
gme_revenue_data.head()
```
### Question 5 - Tesla Stock and Revenue Dashboard
```
# plotting tesla stock data
tesla_data.plot(x="Date", y="Open")
# plotting tesla revenue data
tesla_revenue_data.plot(x="Date", y="Revenue")
```
### Question 6 - GameStop Stock and Revenue Dashboard
```
# plotting gamestop stock data
gme_data.plot(x="Date", y="Open")
# plotting gamestop revenue data
gme_revenue_data.plot(x="Date", y="Revenue")
```
# Info
Name: Seyed Ali Mirferdos
Student ID: 99201465
# 0. Importing the necessary modules
```
import pandas as pd
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.svm import SVC
```
# 1. Part 1:
```
!gdown --id 1oRmkmOMD5t_vA35N8IBmFcQM-WfE0_7x
```
## 1.1. Loading the dataset
```
df = pd.read_csv('bill_authentication.csv')
df.head()
```
* There are 4 feature columns with float data type and a target column with data type of int.
* There are 1372 entries.
* We have no missing data.
```
df.info()
df.describe()
```
There are only 2 classes indicating whether the input is fake or not.
```
df['Class'].unique()
```
## 1.2. Plotting the data
The following graph shows the Variance vs. Skewness. We can observe the following characteristics:
* Most of True banknotes have a negative Variance but the Skewness is spread along the true banknotes.
* Most of Fake banknotes have a positive Variance and also a positive Skewness.
* There's a small overlap between the two classes, but it isn't large. We could use a soft-margin SVM with a polynomial kernel to get a smooth classification boundary; however, a linear kernel can still produce a reasonable answer based on these two features (a short sketch of such a polynomial-kernel classifier follows the plot below).
```
sns.scatterplot(data=df, x="Variance", y="Skewness", hue="Class",
palette=['red', 'green'])
```
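For reference, a soft-margin polynomial-kernel SVM fitted on just these two features could be set up as sketched below. This is illustrative only: the `degree` and `C` values are assumed, not tuned, and this classifier is not used elsewhere in the notebook.

```
from sklearn.svm import SVC

# Illustrative sketch: soft-margin SVM with a polynomial kernel on the
# Variance/Skewness pair discussed above (degree and C are assumed values).
poly_clf = SVC(kernel='poly', degree=3, C=1.0)
poly_clf.fit(df[['Variance', 'Skewness']], df['Class'])
```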
The following graph shows the Variance vs. Curtosis. We can observe the following characteristics:
* As before, most of True banknotes have a negative Variance but the Curtosis is spread along the true banknotes.
* Also most of Fake banknotes have a positive Variance but we can't conclude about the positivity of Curtosis among them.
* We can also see an overlap between the two classes. The overlap seems larger than in the previous graph, but it still isn't substantial; a linear kernel would still separate these features, although with a larger margin of error.
```
sns.scatterplot(data=df, x="Variance", y="Curtosis", hue="Class",
palette=['red', 'green'])
```
The following graph shows the Variance vs. Entropy. We can observe the following characteristics:
* The observation about Variance still stays as before.
* Entropy is spread along the true and fake banknotes.
* The overlap isn't large, but the wide spread between the lowest and highest Entropy values of the True banknotes makes it hard for a linear SVM classifier to separate the classes using these features.
```
sns.scatterplot(data=df, x="Variance", y="Entropy", hue="Class",
palette=['red', 'green'])
```
The following graph shows the Skewness vs. Curtosis. We can observe the following characteristics:
* Both the Skewness and the Curtosis are spread along the two classes.
* The two classes are very interwoven and it'll be almost impossible for the linear classifier to predict them.
```
sns.scatterplot(data=df, x="Skewness", y="Curtosis", hue="Class",
palette=['red', 'green'])
```
The following graph shows the Skewness vs. Entropy. We can observe the following characteristics:
* Both the Skewness and the Entropy are spread along the two classes.
* The two graphs have a similar shape. Like the previous graph, the two classes are very interwoven and it'll be almost impossible for the linear classifier to predict them.
```
sns.scatterplot(data=df, x="Skewness", y="Entropy", hue="Class",
palette=['red', 'green'])
```
The following graph shows the Curtosis vs. Entropy. We can observe the following characteristics:
* Both the Entropy and the Curtosis are spread along the two classes.
* The two classes in this graph have the most overlap between all of the 6 graphs. It'll be impossible for the linear classifier to predict them.
```
sns.scatterplot(data=df, x="Curtosis", y="Entropy", hue="Class",
palette=['red', 'green'])
```
## 1.3. Creating the x and y datasets
```
X = df.drop(['Class'], axis=1)
y = df[['Class']]
X.head()
y.head()
```
## 1.4. Splitting the data
```
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42)
```
# 2. Part 2:
## 2.1. Creating a Decision Tree Classifier
```
clf = DecisionTreeClassifier(random_state=0)
clf.fit(X_train, y_train)
```
## 2.2. Predicting the result for the test set
```
y_pred = clf.predict(X_test)
```
## 2.3. Evaluating the model
* The precision for the Fake class is higher, and correspondingly so is the recall for the Real class. The percentages are almost the same for the other pairs.
* The f1-score, accuracy, and the averages are all the same.
* The number of Fake data points is more than the Real ones.
* There's a higher ratio for TP than TN.
* Also there's a higher ratio for FN than FP. As we saw in the Variance vs. Skewness graph, there were more real banknotes inside the fake banknotes area.
```
print(classification_report(y_test, y_pred, target_names=['Fake', 'Real']))
print(confusion_matrix(y_test, y_pred, normalize='true'))
```
# 3. Part 3:
## 3.1. Creating a SVC Classifier
```
clf2 = SVC(kernel='linear')
clf2.fit(X_train, y_train['Class'])
```
## 3.2. Predicting the result for the test set
```
y_pred2 = clf2.predict(X_test)
```
## 3.3. Evaluating the model
* The precision, recall, and f1-score for the Fake class are higher.
* The accuracy and the averages are all the same.
* The number of Fake data points is more than the Real ones.
* The ratios for TP and TN are almost the same.
* There's also a higher ratio for FN than FP, but the difference isn't large.
```
print(classification_report(y_test, y_pred2, target_names=['Fake', 'Real']))
print(confusion_matrix(y_test, y_pred2, normalize='true'))
# Printed output (normalized confusion matrix):
# [[0.99324324 0.00675676]
#  [0.03937008 0.96062992]]
```
## 3.4. Comparison
* For the Fake class, decision tree has a lower precision and f1-score than the linear SVM.
* For the Real class, decision tree has a lower recall and f1-score than the linear SVM but a higher precision.
* The SVM has yielded a greater amount of accuracy.
* The decision tree has a higher TP but a lower TN ratio than the SVM. However, the difference between the TNs is more than the TPs.
* The decision tree has a very smaller FP but a higher FN ratio than the SVM. However, the difference between the FPs is significant.
---
For this dataset, the linear SVM achieves better performance in terms of f1-score and accuracy. The two models reach almost the same TP and TN ratios, although the SVM's ratios are more balanced. The decision tree, however, has a much smaller FP ratio, which is an important consideration when dealing with banknotes.
Overall, although the SVM delivers better classification results, false positives matter much more in this specific task, so going with the decision tree would lower the risk and increase user satisfaction.
```
```
```
import youtube_dl
import re
import os
from tqdm import tqdm
import pandas as pd
import numpy as np
WAV_DIR = 'wav_files/'
genre_dict = {
'/m/064t9': 'Pop_music',
'/m/0glt670': 'Hip_hop_music',
'/m/0y4f8': 'Vocal',
'/m/06cqb': 'Reggae',
}
genre_set = set(genre_dict.keys())
genre_labels=list(genre_dict.values())
temp_str = []
with open('data-files/csv_files/unbalanced_train_segments.csv', 'r') as f:
temp_str = f.readlines()
data = np.ones(shape=(1,4))
for line in tqdm(temp_str):
line = re.sub('\s?"', '', line.strip())
elements = line.split(',')
common_elements = list(genre_set.intersection(elements[3:]))
if common_elements != []:
data = np.vstack([data, np.array(elements[:3]
+ [genre_dict[common_elements[0]]]).reshape(1, 4)])
df = pd.DataFrame(data[1:], columns=['url', 'start_time', 'end_time', 'class_label'])
df.head()
df['class_label'].value_counts()
# Persist a list of row indices to a text file ('indice' is assumed to be defined elsewhere)
with open('your_file.txt', 'w') as f:
    for item in indice:
        f.write("%s\n" % item)
# take only 1k audio clips - to make the data more balanced
np.random.seed(10)
drop_values=list(df['class_label'].value_counts())
x=[1000,1000,1000,1000]
drop_values=[a_i - b_i for a_i, b_i in zip(drop_values, x)]
for value,label in zip(drop_values,genre_labels) :
drop_indices = np.random.choice(df[df['class_label'] == label].index, size=value, replace=False)
df.drop(labels=drop_indices, axis=0, inplace=True)
df.reset_index(drop=True, inplace=False)
# Time to INT
df['start_time'] = df['start_time'].map(lambda x: np.int32(float(x)))
df['end_time'] = df['end_time'].map(lambda x: np.int32(float(x)))
df['class_label'].value_counts()
```
Example:<br>
Step 1:<br>
`ffmpeg -ss 5 -i $(youtube-dl -f 140 --get-url 'https://www.youtube.com/embed/---1_cCGK4M') -t 10 -c:v copy -c:a copy test.mp4`<br>
Starting time is 5 seconds, duration is 10s.
Refer: https://github.com/rg3/youtube-dl/issues/622
Step 2:<br>
`ffmpeg -i test.mp4 -vn -acodec pcm_s16le -ar 44100 -ac 1 output.wav` <br>
PCM-16, 44k sampling, 1-channel (Mono)
<br>
Refer: https://superuser.com/questions/609740/extracting-wav-from-mp4-while-preserving-the-highest-possible-quality
```
for i, row in tqdm(df.iterrows()):
url = "'https://www.youtube.com/embed/" + row['url'] + "'"
file_name = str(i)+"_"+row['class_label']
try:
command_1 = "ffmpeg -ss " + str(row['start_time']) + " -i $(youtube-dl -f 140 --get-url " +\
url + ") -t 15 -c:v copy -c:a copy " + file_name + ".mp4"
command_2 = "ffmpeg -i "+ file_name +".mp4 -vn -acodec pcm_s16le -ar 44100 -ac 1 " + WAV_DIR + file_name + ".wav"
command_3 = 'rm ' + file_name + '.mp4'
# Run the 3 commands
os.system(command_1 + ';' + command_2 + ';' + command_3 + ';')
except:
print(i, url)
pass
```
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/JavaScripts/Image/Hillshade.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/JavaScripts/Image/Hillshade.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=JavaScripts/Image/Hillshade.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/JavaScripts/Image/Hillshade.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
```
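The cell above is only a placeholder in this copy of the notebook. A minimal hillshade sketch, assuming the public SRTM DEM asset `USGS/SRTMGL1_003` and the `Map` object created above, could look like the following; the asset ID, center point, and visualization range are illustrative assumptions, not taken from the original script.

```
# Minimal hillshade sketch (assumptions: SRTM asset and example center point).
dem = ee.Image('USGS/SRTMGL1_003')            # global ~30 m digital elevation model
hillshade = ee.Terrain.hillshade(dem)         # default illumination azimuth/altitude
Map.setCenter(-121.767, 46.852, 11)           # e.g. around Mount Rainier
Map.addLayer(hillshade, {'min': 0, 'max': 255}, 'Hillshade')
```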
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
<img src='./img/egu_2020.png' alt='Logo EU Copernicus EUMETSAT' align='left' width='30%'></img><img src='./img/atmos_logos.png' alt='Logo EU Copernicus EUMETSAT' align='right' width='60%'></img></span>
<br>
<a href="./12_AC_SAF_GOME-2_L2_preprocess.ipynb"><< 12 - AC SAF GOME-2 Level 2 - preprocess </a><span style="float:right;"><a href="./31_case_study_covid-19_GOME2_anomaly_map.ipynb"> 31 - Covid-19 case study - GOME-2 anomaly map >></a></span>
# Copernicus Sentinel-5P TROPOMI Carbon Monoxide (CO)
A precursor satellite mission, Sentinel-5P aims to fill in the data gap and provide data continuity between the retirement of the Envisat satellite and NASA's Aura mission and the launch of Sentinel-5. The Copernicus Sentinel-5P mission is being used to closely monitor the changes in air quality and was launched in October 2017.
Sentinel-5p Pre-Ops data are disseminated in the `netCDF` format and can be downloaded via the [Copernicus Open Access Hub](https://scihub.copernicus.eu/).
Sentinel-5p carries the `TROPOMI` instrument, which is a spectrometer in the UV-VIS-NIR-SWIR spectral range. `TROPOMI` provides measurements on:
* `Ozone`
* `NO`<sub>`2`</sub>
* `SO`<sub>`2`</sub>
* `Formaldehyde`
* `Aerosol`
* `Carbon monoxide`
* `Methane`
* `Clouds`
#### Module outline:
* [1 - Load and browse Sentinel-5P data](#load_s5p)
* [2 - Plotting example - Sentinel-5P data](#plotting_s5p)
#### Load required libraries
```
%matplotlib inline
import os
import xarray as xr
import numpy as np
import netCDF4 as nc
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
import cartopy.crs as ccrs
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
from matplotlib.axes import Axes
from cartopy.mpl.geoaxes import GeoAxes
GeoAxes._pcolormesh_patched = Axes.pcolormesh
import cartopy.feature as cfeature
import geopandas as gpd
import warnings
warnings.simplefilter(action = "ignore", category = RuntimeWarning)
```
<hr>
## <a id="load_s5p"></a>Load and browse Sentinel-5P data
### Open one individual Sentinel-5P netCDF file with `NetCDF4`
The dataset object contains information about the general data structure of the dataset. You can see that the variables of `Sentinel-5P` data are organised in groups, which is analogous to directories in a filesystem.
```
s5p_file = nc.Dataset('./eodata/sentinel5p/co/2019/08/19/S5P_OFFL_L2__CO_____20190819T164807_20190819T182937_09581_01_010302_20190825T161022.nc', 'r')
s5p_file.groups
```
<br>
If you select the `/PRODUCT` group, you get more information on what variables the dataset object contain.
```
s5p_file.groups['PRODUCT']
```
<br>
You see that the object contains the following variables:
* `scanline`
* `ground_pixel`
* `time`
* `corner`
* `delta_time`
* `time_utc`
* `ga_value`
* `latitude`
* `longitude`
* `carbonmonoxide_total_column`
* `carbonmonoxide_total_column_precision`
You can specify one variable of interest and get more detailed information about the variable. E.g. `carbonmonoxide_total_column` is the atmosphere mole content of carbon monoxide, has the unit mol m<sup>-2</sup>, and is a 3D variable.
You can do this for the available variables, but also for the dimensions latitude and longitude.
You can see e.g. that the `latitude` coordinates range between -85.9 S and 61.9 S and the `longitude` coordinates range between -124.3 W to 101.9 E.
```
co = s5p_file.groups['PRODUCT'].variables['carbonmonoxide_total_column']
lon = s5p_file.groups['PRODUCT'].variables['longitude'][:][0,:,:]
lat = s5p_file.groups['PRODUCT'].variables['latitude'][:][0,:,:]
co, lon, lat
```
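To reproduce the coordinate ranges quoted above, you can simply inspect the minima and maxima of the extracted arrays (a quick check, assuming the cell above has been executed):

```
# Quick check of the geographic extent covered by this orbit
print('Latitude range: ', lat.min(), 'to', lat.max())
print('Longitude range:', lon.min(), 'to', lon.max())
```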
<br>
You can retrieve the array values of the variable object by selecting the `time` dimension and `data`. Looking at the `minimum` and `maximum` values gives an idea of the data range. You see that the data contain negative values. Let's mask the negative values and all values equal to the `_FillValue` and set them to `NaN`. `_FillValue` marks data that are not significant, so you want to mask those values as well.
```
co_data = co[0,:,:].data
print(co_data.min(), co_data.max())
co_data[co_data <= 0.] = co._FillValue
co_data[co_data == co._FillValue] = np.nan
```
<br>
## <a id="plotting_s5p"></a>Plotting example - Sentinel-5P data
### Plot `Dataset` NetCDF library object with `matplotlib` and `cartopy`
The retrieved data array from the Dataset NetCDF object is of type `numpy array` and you can plot it with matplotlib's `pcolormesh` function. Due to the nature of the `CO` data values, we apply a logarithmic scale to the color bar with `LogNorm` from `matplotlib.colors`, which facilitates the visualisation of the data.
Let's create a function [visualize_pcolormesh](./functions.ipynb#visualize_pcolormesh), where we can specify the projection, extent, color scale, unit, title, value range and whether the plot shall have a global extent.
```
def visualize_pcolormesh(data_array, longitude, latitude, projection, color_scale, unit, long_name, vmin, vmax, lonmin, lonmax, latmin, latmax, log=True, set_global=True):
"""
Visualizes a numpy array with matplotlib's 'pcolormesh' function.
Parameters:
data_array: any numpy MaskedArray, e.g. loaded with the NetCDF library and the Dataset function
longitude: numpy Array holding longitude information
latitude: numpy Array holding latitude information
projection: a projection provided by the cartopy library, e.g. ccrs.PlateCarree()
color_scale (str): string taken from matplotlib's color ramp reference
unit (str): the unit of the parameter, taken from the NetCDF file if possible
long_name (str): long name of the parameter, taken from the NetCDF file if possible
vmin (int): minimum number on visualisation legend
vmax (int): maximum number on visualisation legend
lonmin,lonmax,latmin,latmax: geographic extent of the plot
log (logical): set True, if the values shall be represented in a logarithmic scale
set_global (logical): set True, if the plot shall have a global coverage
"""
fig=plt.figure(figsize=(20, 10))
ax = plt.axes(projection=projection)
# define the coordinate system that the grid lons and grid lats are on
if(log):
img = plt.pcolormesh(longitude, latitude, np.squeeze(data_array), norm=LogNorm(),
cmap=plt.get_cmap(color_scale), transform=ccrs.PlateCarree(),
vmin=vmin,
vmax=vmax)
else:
img = plt.pcolormesh(longitude, latitude, data_array,
cmap=plt.get_cmap(color_scale), transform=ccrs.PlateCarree(),
vmin=vmin,
vmax=vmax)
ax.add_feature(cfeature.BORDERS, edgecolor='black', linewidth=1)
ax.add_feature(cfeature.COASTLINE, edgecolor='black', linewidth=1)
if (projection==ccrs.PlateCarree()):
ax.set_extent([lonmin, lonmax, latmin, latmax], projection)
gl = ax.gridlines(draw_labels=True, linestyle='--')
gl.xlabels_top=False
gl.ylabels_right=False
gl.xformatter=LONGITUDE_FORMATTER
gl.yformatter=LATITUDE_FORMATTER
gl.xlabel_style={'size':14}
gl.ylabel_style={'size':14}
if(set_global):
ax.set_global()
ax.gridlines()
cbar = fig.colorbar(img, ax=ax, orientation='horizontal', fraction=0.04, pad=0.1)
cbar.set_label(unit, fontsize=16)
cbar.ax.tick_params(labelsize=14)
ax.set_title(long_name, fontsize=20, pad=20.0)
return fig, ax
```
You can retrieve unit and title information from the loaded `Dataset`, where the information is stored as attributes. You can now plot the data.
```
unit = co.units
long_name = co.long_name
visualize_pcolormesh(co_data, lon, lat, ccrs.Mollweide(), 'viridis', unit, long_name, 0.01,1, lon.min(), lon.max(), lat.min(), lat.max(), log=True, set_global=True)
```
<br>
You can zoom into a region by specifying a `bounding box` of interest. Let's set the extent to South America, with: `[-100, 0, -80, 40]`. The above plotting function [visualize_pcolormesh](./functions.ipynb#visualize_pcolormesh) allows for setting a specific bounding box. You simply have to set the `set_global` key to False. It is best to adjust the projection to `PlateCarree()`, as this will be more appropriate for a regional subset.
```
lonmin=-100
lonmax=0
latmin=-80
latmax=40
visualize_pcolormesh(co_data, lon, lat, ccrs.PlateCarree(), 'viridis', unit, long_name, 0.01,1, lonmin, lonmax, latmin, latmax, log=True, set_global=False)
```
<br>
### Load multiple Sentinel-5p data files with `xarray` and `open_mfdataset`
The plots above showed the extent of one Sentinel-5P ground track. You can load multiple ground tracks into a single `xarray` object, and the `DataArrays` will be concatenated along the `scanline` dimension. This allows you to cover a larger region of interest (ROI).
```
s5p_mf_19 = xr.open_mfdataset('./eodata/sentinel5p/co/2019/08/19/*.nc', concat_dim='scanline', combine='nested', group='PRODUCT')
s5p_mf_19
```
<br>
From the `Dataset` object `s5p_mf_19`, you can choose the data variable of interest, e.g. `carbonmonoxide_total_column`. It has three dimensions (`3D`), but the time dimension consists of only one entry. To drop the time dimension, you can simply select its first entry, which reduces the variable to a `2D` object. You can again use the function [visualize_pcolormesh](./functions.ipynb#visualize_pcolormesh) to visualize the data.
```
co_19 = s5p_mf_19.carbonmonoxide_total_column[0,:,:]
lat_19 = co_19.latitude
lon_19 = co_19.longitude
unit = co_19.units
long_name = co_19.long_name
visualize_pcolormesh(co_19, lon_19, lat_19, ccrs.PlateCarree(), 'viridis', unit, long_name, 0.01, 1.0, lonmin, lonmax, latmin, latmax, log=True, set_global=False)
```
<br>
<a href="./12_AC_SAF_GOME-2_L2_preprocess.ipynb"><< 12 - AC SAF GOME-2 Level 2 - preprocess </a><span style="float:right;"><a href="./31_case_study_covid-19_GOME2_anomaly_map.ipynb"> 31 - Covid-19 case study - GOME-2 anomaly map >></a></span>
<hr>
<img src='./img/copernicus_logo.png' alt='Logo EU Copernicus' align='right' width='20%'><br><br><br>
<p style="text-align:right;">This project is licensed under the <a href="./LICENSE">MIT License</a> and is developed under a Copernicus contract.
[](https://colab.research.google.com/github/pronobis/libspn-keras/blob/master/examples/notebooks/Sampling%20with%20conv%20SPNs.ipynb)
# **Image Sampling**: Sampling MNIST images
In this notebook, we'll set up an SPN and generate new MNIST images by sampling from it.
First let's set up the dependencies:
```
!pip install libspn-keras matplotlib
```
## Convolutional SPN
A convolutional SPN consists of convolutional product and convolutional sum nodes. For the sake of
demonstration, we'll use a structure that trains relatively quickly, without worrying too much about the final performance of the model.
```
import libspn_keras as spnk
```
### Setting the Default Sum Accumulator Initializer
In `libspn-keras`, we refer to the unnormalized weights as _accumulators_. These can be represented in linear space or log space. Setting the `SumOp` also configures the default choice of representation space for these accumulators. For example, gradients should be used in the case of _discriminative_ learning, and accumulators are then preferably represented in log space. This overcomes the need to project the accumulators back onto $\mathbb R^+$ after gradient updates, since log accumulators can take any value in $\mathbb R$ (whereas linear accumulators are limited to $\mathbb R^+$).
In this case however, we'll do generative learning so we can set our `SumOp` to `SumOpEMBackprop`.
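To build some intuition for why the log-space representation avoids that projection step, here is a minimal numpy sketch (purely illustrative; it does not use any libspn-keras internals, and the variable names are made up for the example):
```
import numpy as np

w = 0.05                 # a linear-space accumulator must stay in R+
log_w = np.log(w)        # its log-space counterpart may be any real number
step = 0.2               # some gradient step

w_linear_new = w - step      # -0.15: left R+, would require a projection back
log_w_new = log_w - step     # still a valid log-accumulator, no projection needed
print(w_linear_new, np.exp(log_w_new))   # exp(log_w_new) is guaranteed positive
```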
To set the default initial value (which will be transformed to logspace internally if needed), one can use `spnk.set_default_accumulator_initializer`:
```
from tensorflow import keras
spnk.set_default_accumulator_initializer(
spnk.initializers.Dirichlet()
)
import numpy as np
import tensorflow_datasets as tfds
from libspn_keras.layers import NormalizeAxes
import tensorflow as tf
def take_first(a, b):
return tf.reshape(tf.cast(a, tf.float32), (-1, 28, 28, 1))
normalize = spnk.layers.NormalizeStandardScore(
input_shape=(28, 28, 1), axes=NormalizeAxes.GLOBAL,
normalization_epsilon=1e-3
)
mnist_images = tfds.load(name="mnist", batch_size=32, split="train", as_supervised=True).map(take_first)
normalize.adapt(mnist_images)
mnist_normalized = mnist_images.map(normalize)
location_initializer = spnk.initializers.PoonDomingosMeanOfQuantileSplit(
mnist_normalized
)
```
### Defining the Architecture
We'll go for a relatively simple convolutional SPN architecture. We use solely non-overlapping patches. After 5 convolutions, the nodes' scopes cover all variables. We then add a layer with 10 mixtures, one for each class. We can do this to optimize the joint probability of $P(X,Y)$ instead of just $P(X)$.
```
def build_spn(sum_op, return_logits, infer_no_evidence=False):
spnk.set_default_sum_op(sum_op)
return spnk.models.SequentialSumProductNetwork([
normalize,
spnk.layers.NormalLeaf(
num_components=4,
location_trainable=True,
location_initializer=location_initializer,
scale_trainable=True
),
spnk.layers.Conv2DProduct(
depthwise=False,
strides=[2, 2],
dilations=[1, 1],
kernel_size=[2, 2],
padding='valid'
),
spnk.layers.Local2DSum(num_sums=256),
spnk.layers.Conv2DProduct(
depthwise=True,
strides=[2, 2],
dilations=[1, 1],
kernel_size=[2, 2],
padding='valid'
),
spnk.layers.Local2DSum(num_sums=512),
# Pad to go from 7x7 to 8x8, so that we can apply 3 more Conv2DProducts
tf.keras.layers.ZeroPadding2D(((0, 1), (0, 1))),
spnk.layers.Conv2DProduct(
depthwise=True,
strides=[2, 2],
dilations=[1, 1],
kernel_size=[2, 2],
padding='valid'
),
spnk.layers.Local2DSum(num_sums=512),
spnk.layers.Conv2DProduct(
depthwise=True,
strides=[2, 2],
dilations=[1, 1],
kernel_size=[2, 2],
padding='valid'
),
spnk.layers.Local2DSum(num_sums=1024),
spnk.layers.Conv2DProduct(
depthwise=True,
strides=[2, 2],
dilations=[1, 1],
kernel_size=[2, 2],
padding='valid'
),
spnk.layers.LogDropout(rate=0.5),
spnk.layers.DenseSum(num_sums=10),
spnk.layers.RootSum(return_weighted_child_logits=return_logits)
], infer_no_evidence=infer_no_evidence, unsupervised=False)
sum_product_network = build_spn(spnk.SumOpEMBackprop(), return_logits=True)
sum_product_network.summary()
```
### Setting up a `tf.Dataset` with `tensorflow_datasets`
Then, we'll configure a train set and a test set using `tensorflow_datasets`.
```
import tensorflow_datasets as tfds
batch_size = 128
mnist_train = (
tfds.load(name="mnist", split="train", as_supervised=True)
.shuffle(1024)
.batch(batch_size)
)
mnist_test = (
tfds.load(name="mnist", split="test", as_supervised=True)
.batch(100)
)
```
### Configuring the remaining training components
Note that our SPN spits out the joint probabilities for each $y\in\{Y_i\}_{i=1}^{10}$, so there are 10 outputs per sample. We can optimize the probability $P(X,Y)$ by using `spnk.losses.NegativeLogJoint` as the loss.
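Written out, if the root's weighted child logits provide $\log P(X, Y=c)$ for each class $c$, then, as the name of the loss suggests, the per-sample loss for a sample with true label $y$ presumably corresponds to

$$\mathcal{L}(X, y) = -\log P(X, Y=y).$$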
```
optimizer = spnk.optimizers.OnlineExpectationMaximization(learning_rate=0.05, accumulate_batches=1)
metrics = []
loss = spnk.losses.NegativeLogJoint()
sum_product_network.compile(loss=loss, metrics=metrics, optimizer=optimizer)
```
### Training the SPN
We can simply use the `.fit` function that comes with Keras and pass our `tf.data.Dataset` to it to train!
```
import tensorflow as tf
sum_product_network.fit(mnist_train, epochs=20, callbacks=[tf.keras.callbacks.ReduceLROnPlateau(monitor="loss", min_delta=0.1, patience=2, factor=0.5)])
sum_product_network.evaluate(mnist_test)
```
## Building an SPN to sample
For sampling, we require our sum nodes to backpropagate discrete signals that correspond to the sampled paths. Each
path originates at the root and eventually ends up at the leaves. We can set the backprop op to
`spnk.SumOpSampleBackprop` to ensure all sum layers propagate the discrete sample signal.
We build using the same function as before and copy the weights from the already trained SPN.
```
sum_product_network_sample = build_spn(spnk.SumOpSampleBackprop(), return_logits=False, infer_no_evidence=True)
sum_product_network_sample.set_weights(sum_product_network.get_weights())
```
## Drawing samples
Sampling from SPNs comes down to determining values for variables that are outside of the evidence. When images are
sampled as a whole, all variables are omitted from the evidence. For this special case of inference,
the `SequentialSumProductNetwork` class defines a `zero_evidence_inference` method that takes a size parameter.
Below, we sample 100 images (a 10×10 grid) and voilà!
```
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import ImageGrid
fig = plt.figure(figsize=(12., 12.))
grid = ImageGrid(
fig, 111,
nrows_ncols=(10, 10),
axes_pad=0.1,
)
sample = sum_product_network_sample.zero_evidence_inference(100)
print("Sampling done... Now ploting results")
for ax, im in zip(grid, sample):
ax.imshow(np.squeeze(im), cmap="gray")
plt.show()
```
```
import pandas as pd
import numpy as np
import requests
import bs4 as bs
import urllib.request
```
## Extracting features of 2020 movies from Wikipedia
```
link = "https://en.wikipedia.org/wiki/List_of_American_films_of_2020"
source = urllib.request.urlopen(link).read()
soup = bs.BeautifulSoup(source,'lxml')
tables = soup.find_all('table',class_='wikitable sortable')
len(tables)
type(tables[0])
df1 = pd.read_html(str(tables[0]))[0]
df2 = pd.read_html(str(tables[1]))[0]
df3 = pd.read_html(str(tables[2]))[0]
df4 = pd.read_html(str(tables[3]).replace("'1\"\'",'"1"'))[0]
df = df1.append(df2.append(df3.append(df4,ignore_index=True),ignore_index=True),ignore_index=True)
df
df_2020 = df[['Title','Cast and crew']]
df_2020
!pip install tmdbv3api
from tmdbv3api import TMDb
import json
import requests
tmdb = TMDb()
tmdb.api_key = 'ee638a2269f21a2c3648d8079d3ff77f'
from tmdbv3api import Movie
tmdb_movie = Movie()
def get_genre(x):
genres = []
result = tmdb_movie.search(x)
if not result:
return np.NaN
else:
movie_id = result[0].id
response = requests.get('https://api.themoviedb.org/3/movie/{}?api_key={}'.format(movie_id,tmdb.api_key))
data_json = response.json()
if data_json['genres']:
genre_str = " "
for i in range(0,len(data_json['genres'])):
genres.append(data_json['genres'][i]['name'])
return genre_str.join(genres)
else:
return np.NaN
df_2020['genres'] = df_2020['Title'].map(lambda x: get_genre(str(x)))
df_2020
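# Note on the parsing helpers below (explanatory comment): the Wikipedia "Cast and crew"
# column is a single string of the form
# "Director Name (director/screenplay); Actor 1, Actor 2, Actor 3, ...",
# so get_director() splits on the "(director...)" markers, while get_actor1/2/3() split
# the remainder on "screenplay); " and then on ", " to pick the first three cast members.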
def get_director(x):
if " (director)" in x:
return x.split(" (director)")[0]
elif " (directors)" in x:
return x.split(" (directors)")[0]
else:
return x.split(" (director/screenplay)")[0]
df_2020['director_name'] = df_2020['Cast and crew'].map(lambda x: get_director(str(x)))
def get_actor1(x):
return ((x.split("screenplay); ")[-1]).split(", ")[0])
df_2020['actor_1_name'] = df_2020['Cast and crew'].map(lambda x: get_actor1(str(x)))
def get_actor2(x):
if len((x.split("screenplay); ")[-1]).split(", ")) < 2:
return np.NaN
else:
return ((x.split("screenplay); ")[-1]).split(", ")[1])
df_2020['actor_2_name'] = df_2020['Cast and crew'].map(lambda x: get_actor2(str(x)))
def get_actor3(x):
if len((x.split("screenplay); ")[-1]).split(", ")) < 3:
return np.NaN
else:
return ((x.split("screenplay); ")[-1]).split(", ")[2])
df_2020['actor_3_name'] = df_2020['Cast and crew'].map(lambda x: get_actor3(str(x)))
df_2020
df_2020 = df_2020.rename(columns={'Title':'movie_title'})
new_df20 = df_2020.loc[:,['director_name','actor_1_name','actor_2_name','actor_3_name','genres','movie_title']]
new_df20
new_df20['comb'] = new_df20['actor_1_name'] + ' ' + new_df20['actor_2_name'] + ' '+ new_df20['actor_3_name'] + ' '+ new_df20['director_name'] +' ' + new_df20['genres']
new_df20.isna().sum()
new_df20 = new_df20.dropna(how='any')
new_df20.isna().sum()
new_df20['movie_title'] = new_df20['movie_title'].str.lower()
new_df20
old_df = pd.read_csv('../modified dataset/final_data.csv')
old_df
final_df = old_df.append(new_df20,ignore_index=True)
final_df
final_df.to_csv('../modified dataset/main_data.csv',index=False)
```
## 2.6 Solving the Maze with Q-Learning
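For reference (this formula is added here as a summary of what the `Q_learning` function in the code below implements), the action values are updated with the standard Q-learning rule

$$Q(s, a) \leftarrow Q(s, a) + \eta\,\bigl(r + \gamma \max_{a'} Q(s_{\mathrm{next}}, a') - Q(s, a)\bigr),$$

where $\eta$ is the learning rate (`eta`) and $\gamma$ the discount factor (`gamma`). When the goal is reached there is no next state, so the $\gamma \max_{a'} Q(\cdot)$ term is dropped, which matches the `s_next == 8` branch in the code.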
```
# Declare the packages to use
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# The maze at its initial state
# Declare the figure size and the figure variable
fig = plt.figure(figsize=(5, 5))
ax = plt.gca()
# Draw the red walls
plt.plot([1, 1], [0, 1], color='red', linewidth=2)
plt.plot([1, 2], [2, 2], color='red', linewidth=2)
plt.plot([2, 2], [2, 1], color='red', linewidth=2)
plt.plot([2, 3], [1, 1], color='red', linewidth=2)
# Draw the labels S0-S8 that indicate the states
plt.text(0.5, 2.5, 'S0', size=14, ha='center')
plt.text(1.5, 2.5, 'S1', size=14, ha='center')
plt.text(2.5, 2.5, 'S2', size=14, ha='center')
plt.text(0.5, 1.5, 'S3', size=14, ha='center')
plt.text(1.5, 1.5, 'S4', size=14, ha='center')
plt.text(2.5, 1.5, 'S5', size=14, ha='center')
plt.text(0.5, 0.5, 'S6', size=14, ha='center')
plt.text(1.5, 0.5, 'S7', size=14, ha='center')
plt.text(2.5, 0.5, 'S8', size=14, ha='center')
plt.text(0.5, 2.3, 'START', ha='center')
plt.text(2.5, 0.3, 'GOAL', ha='center')
# Set the drawing range and remove the tick marks
ax.set_xlim(0, 3)
ax.set_ylim(0, 3)
plt.tick_params(axis='both', which='both', bottom='off', top='off',
labelbottom='off', right='off', left='off', labelleft='off')
# Draw a green circle at the current position S0
line, = ax.plot([0.5], [2.5], marker="o", color='g', markersize=60)
# Set the parameter theta_0 that determines the initial policy
# Rows are states 0-7; columns are the movement directions up, right, down, left
theta_0 = np.array([[np.nan, 1, 1, np.nan], # s0
[np.nan, 1, np.nan, 1], # s1
[np.nan, np.nan, 1, 1], # s2
[1, 1, 1, np.nan], # s3
[np.nan, np.nan, 1, 1], # s4
[1, np.nan, np.nan, np.nan], # s5
[1, np.nan, np.nan, np.nan], # s6
                    [1, 1, np.nan, np.nan],  # s7  (s8 is the goal, so it needs no policy)
])
# Define a function that converts the policy parameter theta_0 into a random policy pi
def simple_convert_into_pi_from_theta(theta):
    '''Simply compute the ratios'''
    [m, n] = theta.shape  # get the matrix size of theta
pi = np.zeros((m, n))
for i in range(0, m):
        pi[i, :] = theta[i, :] / np.nansum(theta[i, :])  # compute the ratios
    pi = np.nan_to_num(pi)  # convert nan to 0
return pi
# Compute the random action policy pi_0
pi_0 = simple_convert_into_pi_from_theta(theta_0)
# Set the initial action-value function Q
[a, b] = theta_0.shape  # store the number of rows and columns in a and b
Q = np.random.rand(a, b) * theta_0 * 0.1
# multiplying element-wise by theta_0 makes the Q values in wall directions nan
# Implement the epsilon-greedy method
def get_action(s, Q, epsilon, pi_0):
direction = ["up", "right", "down", "left"]
    # decide the action
    if np.random.rand() < epsilon:
        # move randomly with probability epsilon
        next_direction = np.random.choice(direction, p=pi_0[s, :])
    else:
        # take the action with the maximum Q value
        next_direction = direction[np.nanargmax(Q[s, :])]
    # convert the action to an index
if next_direction == "up":
action = 0
elif next_direction == "right":
action = 1
elif next_direction == "down":
action = 2
elif next_direction == "left":
action = 3
return action
def get_s_next(s, a, Q, epsilon, pi_0):
direction = ["up", "right", "down", "left"]
    next_direction = direction[a]  # direction of action a
    # determine the next state from the action
    if next_direction == "up":
        s_next = s - 3  # moving up decreases the state number by 3
    elif next_direction == "right":
        s_next = s + 1  # moving right increases the state number by 1
    elif next_direction == "down":
        s_next = s + 3  # moving down increases the state number by 3
    elif next_direction == "left":
        s_next = s - 1  # moving left decreases the state number by 1
    return s_next
# Update the action-value function Q with Q-learning
def Q_learning(s, a, r, s_next, Q, eta, gamma):
    if s_next == 8:  # when the goal has been reached
Q[s, a] = Q[s, a] + eta * (r - Q[s, a])
else:
Q[s, a] = Q[s, a] + eta * (r + gamma * np.nanmax(Q[s_next,: ]) - Q[s, a])
return Q
# Define a function that solves the maze with Q-learning and returns the state/action history and the updated Q
def goal_maze_ret_s_a_Q(Q, epsilon, eta, gamma, pi):
    s = 0  # start position
    a = a_next = get_action(s, Q, epsilon, pi)  # initial action
    s_a_history = [[0, np.nan]]  # list that records the agent's moves
    while (1):  # loop until the goal is reached
        a = a_next  # update the action
        s_a_history[-1][1] = a
        # assign the action to the current state (the last entry, hence index=-1)
        s_next = get_s_next(s, a, Q, epsilon, pi)
        # get the next state
        s_a_history.append([s_next, np.nan])
        # append the next state; its action is not known yet, so keep it as nan
        # give the reward and determine the next action
        if s_next == 8:
            r = 1  # give a reward when the goal is reached
a_next = np.nan
else:
r = 0
a_next = get_action(s_next, Q, epsilon, pi)
            # determine the next action a_next
        # update the value function
        Q = Q_learning(s, a, r, s_next, Q, eta, gamma)
        # termination check
        if s_next == 8:  # stop when the goal is reached
break
else:
s = s_next
return [s_a_history, Q]
# Solve the maze with Q-learning
eta = 0.1  # learning rate
gamma = 0.9  # discount factor
epsilon = 0.5  # initial value for the epsilon-greedy method
v = np.nanmax(Q, axis=1)  # maximum value for each state
is_continue = True
episode = 1
V = []  # stores the state values of each episode
V.append(np.nanmax(Q, axis=1))  # maximum action value for each state
while is_continue:  # repeat until is_continue becomes False
    print("Episode: " + str(episode))
    # gradually decrease the epsilon-greedy value
    epsilon = epsilon / 2
    # solve the maze with Q-learning and obtain the movement history and the updated Q
    [s_a_history, Q] = goal_maze_ret_s_a_Q(Q, epsilon, eta, gamma, pi_0)
    # change in the state values
    new_v = np.nanmax(Q, axis=1)  # maximum action value for each state
    print(np.sum(np.abs(new_v - v)))  # print the change of the state-value function
    v = new_v
    V.append(v)  # append the state-value function at the end of this episode
    print("Solving the maze took " + str(len(s_a_history) - 1) + " steps")
    # repeat for 100 episodes
episode = episode + 1
if episode > 100:
break
# Visualize how the state values change
# Reference URL http://louistiao.me/posts/notebooks/embedding-matplotlib-animations-in-jupyter-notebooks/
from matplotlib import animation
from IPython.display import HTML
import matplotlib.cm as cm # color map
def init():
    # initialize the background image
line.set_data([], [])
return (line,)
def animate(i):
    # drawing content for each frame
    # draw a colored square in each cell based on the magnitude of its state value
line, = ax.plot([0.5], [2.5], marker="s",
color=cm.jet(V[i][0]), markersize=85) # S0
line, = ax.plot([1.5], [2.5], marker="s",
color=cm.jet(V[i][1]), markersize=85) # S1
line, = ax.plot([2.5], [2.5], marker="s",
color=cm.jet(V[i][2]), markersize=85) # S2
line, = ax.plot([0.5], [1.5], marker="s",
color=cm.jet(V[i][3]), markersize=85) # S3
line, = ax.plot([1.5], [1.5], marker="s",
color=cm.jet(V[i][4]), markersize=85) # S4
line, = ax.plot([2.5], [1.5], marker="s",
color=cm.jet(V[i][5]), markersize=85) # S5
line, = ax.plot([0.5], [0.5], marker="s",
color=cm.jet(V[i][6]), markersize=85) # S6
line, = ax.plot([1.5], [0.5], marker="s",
color=cm.jet(V[i][7]), markersize=85) # S7
line, = ax.plot([2.5], [0.5], marker="s",
color=cm.jet(1.0), markersize=85) # S8
return (line,)
# Create the animation using the init function and the per-frame drawing function
anim = animation.FuncAnimation(
fig, animate, init_func=init, frames=len(V), interval=200, repeat=False)
HTML(anim.to_jshtml())
```
```
from datetime import datetime
import logging
logging.basicConfig(filename='train_initialization.log', filemode='w', format='%(asctime)s - %(levelname)s - %(message)s', datefmt='%d-%b-%y %H:%M:%S', level=logging.INFO)
logging.info('SCRIPT INICIADO')
import os
from keras.preprocessing.image import ImageDataGenerator
from keras.backend import clear_session
from keras.optimizers import SGD
from pathlib import Path
from keras.applications.mobilenet_v2 import MobileNetV2
from keras.models import Sequential, Model, load_model
from keras.layers import Dense, Dropout, Flatten, AveragePooling2D
from keras import initializers, regularizers
logging.info('BIBLIOTECAS IMPORTADAS')
# reusable stuff
import constants
import callbacks
import generators
logging.info('CONFIGURAÇÕES IMPORTADAS')
# No kruft plz
clear_session()
logging.info('SESSÃO REINICIALIZADA COM SUCESSO')
import tensorflow as tf
from tensorflow.compat.v1.keras.backend import set_session
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True # dynamically grow the memory used on the GPU
sess = tf.compat.v1.Session(config=config)
set_session(sess) # set this TensorFlow session as the default session for Keras
logging.info('AJUSTES DE USO DE GPU FINALIZADO COM SUCESSO')
# Config
height = constants.SIZES['basic']
width = height
weights_file = "weights.best_mobilenet" + str(height) + ".hdf5"
logging.info('PESOS DO MODELO IMPORTADOS COM SUCESSO')
conv_base = MobileNetV2(
weights='imagenet',
include_top=False,
input_shape=(height, width, constants.NUM_CHANNELS)
)
logging.info('MODELO MobileNetV2 IMPORTADO COM SUCESSO')
# First time run, no unlocking
conv_base.trainable = False
logging.info('AJUSTE DE TREINAMENTO DO MODELO REALIZADO COM SUCESSO')
# Let's see it
print('Summary')
print(conv_base.summary())
logging.info('SUMARIO DO MODELO')
logging.info(conv_base.summary())
# Let's construct that top layer replacement
x = conv_base.output
x = AveragePooling2D(pool_size=(7, 7))(x)
x = Flatten()(x)
x = Dense(256, activation='relu', kernel_initializer=initializers.he_normal(seed=None), kernel_regularizer=regularizers.l2(.0005))(x)
x = Dropout(0.5)(x)
# Essential to have another layer for better accuracy
x = Dense(128,activation='relu', kernel_initializer=initializers.he_normal(seed=None))(x)
x = Dropout(0.25)(x)
predictions = Dense(constants.NUM_CLASSES, kernel_initializer="glorot_uniform", activation='softmax')(x)
logging.info('NOVAS CAMADAS CONFIGURADAS COM SUCESSO')
print('Stacking New Layers')
model = Model(inputs = conv_base.input, outputs=predictions)
logging.info('NOVAS CAMADAS ADICIONADAS AO MODELO COM SUCESSO')
# Load checkpoint if one is found
if os.path.exists(weights_file):
print ("loading ", weights_file)
model.load_weights(weights_file)
logging.info('VERIFICAÇÃO DE CHECKPOINT')
# Get all model callbacks
callbacks_list = callbacks.make_callbacks(weights_file)
logging.info('INICIALIZACAO DO CALLBACK REALIZADO')
print('Compile model')
# originally adam, but research says SGD with scheduler
# opt = Adam(lr=0.001, amsgrad=True)
opt = SGD(momentum=.9)
model.compile(
loss='categorical_crossentropy',
optimizer=opt,
metrics=['accuracy']
)
logging.info('MODELO COMPILADO COM SUCESSO')
# Get training/validation data via generators
train_generator, validation_generator = generators.create_generators(height, width)
logging.info('BASES DE TREINAMENTO E TESTES IMPORTADAS COM SUCESSO')
print('Start training!')
logging.info('TREINAMENTO DO MODELO INICIADO')
history = model.fit_generator(
train_generator,
callbacks=callbacks_list,
epochs=constants.TOTAL_EPOCHS,
steps_per_epoch=constants.STEPS_PER_EPOCH,
shuffle=True,
workers=4,
use_multiprocessing=False,
validation_data=validation_generator,
validation_steps=constants.VALIDATION_STEPS
)
logging.info('TREINAMENTO DO MODELO FINALIZADO')
# Save it for later
print('Saving Model')
model.save("nude_mobilenet2." + str(width) + "x" + str(height) + ".h5")
logging.info('MODELO EXPORTADO COM SUCESSO')
final_loss, final_accuracy = model.evaluate(validation_generator, steps = constants.VALIDATION_STEPS)
logging.info('AVALIACAO DO MODELO REALIZADO COM SUCESSO')
print("Final loss: {:.2f}".format(final_loss))
print("Final accuracy: {:.2f}%".format(final_accuracy * 100))
logging.info('FINAL LOSS: {:.2f}'.format(final_loss))
logging.info('FINAL ACCURACY: {:.2f}%'.format(final_accuracy * 100))
import matplotlib.pyplot as plt
# summarize history for accuracy
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
saved_model_dir = 'nude_mobilenetv2_train'
Path(saved_model_dir).mkdir(parents=True, exist_ok=True)
tf.saved_model.save(model, saved_model_dir)
keras_model_path = os.path.join(saved_model_dir, "saved_model.h5")
weights_path = os.path.join(saved_model_dir, "saved_model_weights.h5")
model.save(keras_model_path)
model.save_weights(weights_path)
print("SavedModel model exported to", saved_model_dir)
logging.info('MODELO SALVO NA PASTA %s', saved_model_dir)
indexed_labels = [(index, label) for label, index in train_generator.class_indices.items()]
print(indexed_labels)
logging.info('LABELS: %s', indexed_labels)
sorted_indices, sorted_labels = zip(*sorted(indexed_labels))
print(sorted_indices)
print(sorted_labels)
logging.info('SORTED INDICES: %s', sorted_indices)
logging.info('SORTED LABELS: %s', sorted_labels)
labels_dir_path = os.path.dirname(saved_model_dir)
# Ensure dir structure exists
Path(saved_model_dir).mkdir(parents=True, exist_ok=True)
with tf.io.gfile.GFile('labels.txt', "w") as f:
f.write("\n".join(sorted_labels + ("",)))
print("Labels written to", saved_model_dir)
logging.info('ARQUIVO DE LABELS SALVO COM SUCESSO')
model.summary()
```
# VacationPy
----
#### Note
* Keep an eye on your API usage. Use https://developers.google.com/maps/reporting/gmp-reporting as reference for how to monitor your usage and billing.
* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
```
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import gmaps
import os
import json
# Import API key
from config import g_key
gmaps.configure(api_key=g_key)
```
### Store Part I results into DataFrame
* Load the csv exported in Part I to a DataFrame
```
city_data = "..\\WeatherPy\\weather_py_city_data.csv"
city_data_df = pd.read_csv(city_data)
city_data_df.head()
```
### Humidity Heatmap
* Configure gmaps.
* Use the Lat and Lng as locations and Humidity as the weight.
* Add Heatmap layer to map.
```
# Use the Lat and Lng as locations and Humidity as the weight.
locations = city_data_df[['Latitude', 'Longitude']]
humidity = city_data_df['Humidity']
# Add Heatmap layer to map
figure_layout = {'width': '1200px',
'border': '1px solid black',
'margin': '0 auto 0 auto'}
fig = gmaps.figure(layout=figure_layout)
heatmap_layer = gmaps.heatmap_layer(locations, weights=humidity, max_intensity=100, dissipating=False, point_radius=1)
fig.add_layer(heatmap_layer)
#fig.savefig("/Images/heat_map.png")
fig
```
### Create new DataFrame fitting weather criteria
* Narrow down the cities to fit weather conditions.
* Drop any rows with null values.
* Narrow down the DataFrame to find your ideal weather condition. For example:
* A max temperature lower than 85 degrees but higher than 70.
* Wind speed less than 15 mph.
* 30% cloudiness.
```
new_city_data = city_data_df.loc[(city_data_df['Max Temp'] >= 70) & (city_data_df['Max Temp'] <= 85) & (city_data_df['Wind Speed'] < 15) & (city_data_df['Cloudiness'] <= 30)]
new_city_data_df = new_city_data.reset_index()
del new_city_data_df['index']
new_city_data_df
```
### Hotel Map
* Store into variable named `hotel_df`.
* Add a "Hotel Name" column to the DataFrame.
* Set parameters to search for hotels within 5000 meters.
* Hit the Google Places API for each city's coordinates.
* Store the first Hotel result into the DataFrame.
* Plot markers on top of the heatmap.
```
# Store into variable named hotel_df and add a "Hotel Name" column
hotel_df = new_city_data_df.copy()
hotel_df = hotel_df.rename(columns={'Latitude': 'Lat', 'Longitude': 'Lng'})
hotel_df['Hotel Name'] = ''
hotel_df.head()

base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"

# Hit the Google Places API for each city's coordinates and keep the first hotel returned
for index, row in hotel_df.iterrows():
    coordinates = f"{row['Lat']},{row['Lng']}"
    params = {"location": coordinates, "type": "lodging", "radius": 5000, "key": g_key}
    try:
        response_data = requests.get(base_url, params=params).json()
        hotel_df.loc[index, 'Hotel Name'] = response_data["results"][0]["name"]
        print(f"Processing record for {row['City']}")
    except (KeyError, IndexError):
        print(f"No hotel near {row['City']}. Skipped.")
# NOTE: Do not change any of the code in this cell
# Using the template add the hotel marks to the heatmap
info_box_template = """
<dl>
<dt>Name</dt><dd>{Hotel Name}</dd>
<dt>City</dt><dd>{City}</dd>
<dt>Country</dt><dd>{Country}</dd>
</dl>
"""
# Store the DataFrame Row
# NOTE: be sure to update with your DataFrame name
hotel_info = [info_box_template.format(**row) for index, row in hotel_df.iterrows()]
locations = hotel_df[["Lat", "Lng"]]
# Add marker layer on top of the heat map
marker_layer = gmaps.marker_layer(locations, info_box_content=hotel_info)
fig.add_layer(marker_layer)

# Display figure
fig
```
```
#Dependencies
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
import cv2
from scipy.ndimage.filters import gaussian_filter
#load image and smooth it
img = cv2.imread("simulated_rails.png", cv2.IMREAD_GRAYSCALE)
img = gaussian_filter(img, sigma=0.8)
print(img.shape)
plt.imshow(img, cmap ="gray",vmin=0,vmax=255)
#img = img[170:180,170:180]
def addGaussianNoise(img, std, mean =0.0):
img = np.clip(img, 3*std, 255-(3*std))#to not cut off noise
img = (img + np.random.normal(mean, std, img.shape)).astype(np.uint8)
img = np.clip(img, 0, 255) # prevent getting out of bounds due to noise
return img
img_noisy = addGaussianNoise(img, std= 5.0)
plt.imshow(img_noisy, cmap ="gray",vmin=0,vmax=255)
h_plane,w_plane = 3,3
delta_xi_min = - (h_plane // 2) # -1
delta_xi_max = (h_plane // 2) # 1 #EDIT
delta_yi_min = - (w_plane // 2) # -1
delta_yi_max = (w_plane // 2) # 1 #EDIT
def approximate_plane(img):
"""
approximates gradient for each position of an array by a plane. dimensions of the plane are given by
self.h_plane, self.w_plane
:param img: source image
:return: alpha: array of slopes of planes in x-direction
:return: beta: array of slopes of planes in y-direction
"""
alpha = np.zeros(img.shape)
beta = np.zeros(img.shape)
gamma = np.zeros(img.shape)
sum_x_squared = np.zeros(img.shape)
sum_y_squared = np.zeros(img.shape)
sum_xy = np.zeros(img.shape)
for hi in range(img.shape[0]):
for wi in range(img.shape[1]):
for delta_x in range(delta_xi_min, delta_xi_max+1): # deltax: local position {-1, 0, 1}
xi = max(min(hi + delta_x, img.shape[0] - 1), 0) # xi: global position e.g. {19, 20, 21}
for delta_y in range(delta_yi_min, delta_yi_max+1):
yi = max(min(wi + delta_y, img.shape[1] - 1), 0)
alpha[hi, wi] += delta_x * img[xi, yi]
sum_x_squared[hi, wi] += delta_x ** 2
beta[hi, wi] += delta_y * img[xi, yi]
sum_y_squared[hi, wi] += delta_y ** 2
gamma[hi, wi] += img[xi, yi]
sum_xy[hi, wi] += delta_x * delta_y
    alpha = alpha / (sum_x_squared + 1e-6)  # small epsilon in the denominator to avoid division by zero
    beta = beta / (sum_y_squared + 1e-6)
gamma = gamma / (h_plane * w_plane)
return alpha, beta, gamma
alpha,beta, gamma = approximate_plane(img_noisy)
#RECONSTRUCT IMAGE based on facet approximation(estimated alphas, betas, gammas):
reconstruct_img = np.zeros(img.shape)
for hi in range(img.shape[0]):
for wi in range(img.shape[1]):
for delta_x in range(delta_xi_min, delta_xi_max+1): # deltax: local position {-1, 0, 1}
xi = max(min(hi + delta_x, img.shape[0] - 1), 0) # xi: global position e.g. {19, 20, 21}
for delta_y in range(delta_yi_min, delta_yi_max+1):
yi = max(min(wi + delta_y, img.shape[1] - 1), 0)
                reconstruct_img[xi, yi] += (alpha[hi, wi]*delta_x + beta[hi, wi]*delta_y + gamma[hi, wi]) / (h_plane * w_plane)  # alpha pairs with delta_x (rows), beta with delta_y (columns)
figure = plt.figure(figsize=(10, 4))
#Original Image
subplot1 = figure.add_subplot(1, 4, 1)
subplot1.imshow(img_noisy, cmap="gray",vmin=0, vmax = 255)
subplot1.title.set_text("Original Image with Noise")
#Gamma
subplot2 = figure.add_subplot(1, 4, 2)
subplot2.imshow(gamma, cmap="gray",vmin=0, vmax = 255)
subplot2.title.set_text("Gamma")
#Facet approximated
subplot3 = figure.add_subplot(1, 4, 3)
subplot3.imshow(reconstruct_img, cmap="gray", vmin=0, vmax = 255)
subplot3.title.set_text("Facet approximated image:")
#Difference
subplot4 = figure.add_subplot(1, 4, 4)
subplot4.imshow(np.abs(img_noisy-reconstruct_img).clip(0,255),cmap ="gray", vmin=0, vmax = 255)
subplot4.title.set_text("Difference")
print("difference between images:")
print("std:",np.sqrt(np.sum((reconstruct_img -img_noisy)**2)/(img.shape[0]*img.shape[1])))
def approximate_plane(img):
"""
approximates gradient for each position of an array by a plane. dimensions of the plane are given by
    h_plane, w_plane (defined locally below)
:param img: source image
:return: alpha: array of slopes of planes in x-direction
:return: beta: array of slopes of planes in y-direction
:return: var_alpha: array of variances of alpha (uncertainty)
:return: var_beta: array of variances of beta (uncertainty)
:return: covar_alpha_beta: array of covariances of alpha and beta (joint uncertainty)
"""
alpha = np.zeros(img.shape)
beta = np.zeros(img.shape)
gamma = np.zeros(img.shape)
sum_x_squared = np.zeros(img.shape)
sum_y_squared = np.zeros(img.shape)
sum_xy = np.zeros(img.shape)
h_plane,w_plane = 3,3
delta_xi_min = - (h_plane // 2) # -1
delta_xi_max = (h_plane // 2) # 1 #EDIT
delta_yi_min = - (w_plane // 2) # -1
delta_yi_max = (w_plane // 2) # 1 #EDIT
for hi in range(img.shape[0]):
for wi in range(img.shape[1]):
for delta_x in range(delta_xi_min, delta_xi_max+1): # deltax: local position {-1, 0, 1}
xi = max(min(hi + delta_x, img.shape[0] - 1), 0) # xi: global position e.g. {19, 20, 21}
for delta_y in range(delta_yi_min, delta_yi_max+1):
yi = max(min(wi + delta_y, img.shape[1] - 1), 0)
alpha[hi, wi] += delta_x * img[xi, yi]
sum_x_squared[hi, wi] += delta_x ** 2
beta[hi, wi] += delta_y * img[xi, yi]
sum_y_squared[hi, wi] += delta_y ** 2
gamma[hi, wi] += img[xi, yi]
sum_xy[hi, wi] += delta_x * delta_y
    alpha = alpha / (sum_x_squared + 1e-6)  # small epsilon in the denominator to avoid division by zero
    beta = beta / (sum_y_squared + 1e-6)
gamma = gamma / (h_plane * w_plane)
"""
Additionally estimates the uncertainty of the approximated plane by calculating variances for the parameters
"""
local_noise_var = np.zeros(img.shape) # first calculate local var for each position
epsilon_squared = np.zeros(img.shape) # required to get variance
for hi in range(img.shape[0]):
for wi in range(img.shape[1]):
for delta_x in range(delta_xi_min, delta_xi_max+1): # deltax: local position {-1, 0, 1}
xi = max(min(hi + delta_x, img.shape[0] - 1), 0) # xi: global position e.g. {19, 20, 21}
for delta_y in range(delta_yi_min, delta_yi_max+1):
yi = max(min(wi + delta_y, img.shape[1]- 1), 0)
                    epsilon_squared[xi, yi] += (img[xi, yi] - (alpha[hi, wi] * delta_x + beta[hi, wi] * delta_y + gamma[hi, wi])) ** 2
local_noise_var = epsilon_squared / (h_plane * w_plane)
local_noise_var = np.sort(local_noise_var, axis = None)
local_noise_var = local_noise_var[int(0.1*len(local_noise_var)):int(0.9*len(local_noise_var))] #exclude outliers
noise_var = np.sum(local_noise_var) / len(local_noise_var)
var_alpha = noise_var / sum_x_squared
var_beta = noise_var / sum_y_squared
covar_alpha_beta = noise_var * sum_xy / (sum_x_squared * sum_y_squared)
return alpha, beta, gamma, var_alpha, var_beta, covar_alpha_beta, noise_var
for sigma in [0,2,5,10]:
print("Add Gaussian Noise with sigma = %.2f "%(sigma))
img_noisy = addGaussianNoise(img, sigma)
print("estimated sigma: %.2f \n"%(np.sqrt(approximate_plane(img_noisy)[6])))
```
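The triple loops above recompute the same small window sums pixel by pixel, which gets slow for large images. As an aside (not part of the original notebook), the same sums can be written as correlations with small kernels; the sketch below assumes `scipy.ndimage` is available, and `approximate_plane_fast` is just an illustrative name. `mode='nearest'` reproduces the border clamping done with `max(min(...))` above, so the results match the loop version up to the tiny epsilon term.
```
import numpy as np
from scipy.ndimage import correlate

def approximate_plane_fast(img, size=3):
    """Vectorized version of the per-pixel plane fit above (illustrative sketch)."""
    img = img.astype(float)                          # avoid uint8 overflow in the sums
    offsets = np.arange(size) - size // 2            # e.g. [-1, 0, 1]
    kx = np.repeat(offsets[:, None], size, axis=1)   # weight = row offset delta_x
    ky = kx.T                                        # weight = column offset delta_y
    sum_sq = (offsets ** 2).sum() * size             # sum of delta^2 over the window, e.g. 6
    alpha = correlate(img, kx, mode='nearest') / sum_sq   # slope along rows
    beta = correlate(img, ky, mode='nearest') / sum_sq    # slope along columns
    gamma = correlate(img, np.full((size, size), 1.0 / size**2), mode='nearest')  # window mean
    return alpha, beta, gamma
```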
<a href="https://colab.research.google.com/github/angeruzzi/RegressionModel_ProdutividadeAgricola/blob/main/RegressionModel_ProdutividadeAgricola_Amendoin.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
#Predicting Agricultural Productivity: Peanut Production
---------------------------------
**Author**: [Alessandro S. Angeruzzi](https://www.linkedin.com/in/alessandroangeruzzi/)
**License**: Creative Commons
---------------------------------
##**Contents**
1. [Introduction](#intro)
2. [Libraries](#bibliotecas)
3. [Data](#dados)
4. [Analysis](#analise)
5. [Modeling](#modelagem)
  * [Functions](#funcoes)
  * [Models](#modelos)
  * [Initial Test](#teste)
  * [Feature Selection](#selecao)
  * [Highest Correlation](#correlacao)
  * [Important Features](#fimportantes)
  * [Hyperparameter Tuning](#hipertunagem)
  * [Final Model](#final)
6. [Conclusion](#conclusao)
7. [Sources](#fontes)
##**1 - Introduction** <a name="intro"></a>
Peanuts are a crop that has always been present in Brazilian culture, not only in sweets but also in some savory dishes, and they are widely eaten as a snack.
The plant is native to South America, but today it is found almost everywhere in the world, especially in Africa and Asia, and the seed is consumed worldwide, mainly whole and roasted but also as oil, paste, and flour in baking.
Brazil ranks only 14th in world production, accounting for little more than 1% of it, although it ranks 5th among exporters. Peanuts are grown in practically the whole country, but the state of SP stands out, concentrating about 90% of national production. It is also where productivity is highest, close to 3,500 kg/ha, among the highest in the world and well above that achieved in the Northeast, about 1,164 kg/ha. This difference may be due to the production tradition in the SP countryside, but it may also be an indication that climate is a very important factor.
In this work I explore the relationship between productivity and meteorological data, trying not only to find that relationship but also to build a model that predicts productivity from these data.
##**2 - Libraries** <a name="bibliotecas"></a>
Import of the Python libraries used
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Ridge
from sklearn.linear_model import LassoLars
from sklearn.linear_model import BayesianRidge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import GradientBoostingRegressor
from lightgbm import LGBMRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import QuantileTransformer
from sklearn.preprocessing import Normalizer
import sklearn.metrics as me
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_validate
from sklearn.model_selection import RepeatedKFold
from sklearn.model_selection import GridSearchCV
import warnings
warnings.filterwarnings('ignore')
```
##**3 - Data** <a name="dados"></a>
The data refer to 6 cities in the countryside of the state of SP: Jaboticabal, Tupã, Herculândia, Pompéia, Iacri, and Marília.
They contain:
* Local : city where the data were collected
* ano : year of the record
* area : cultivated area
* produtividade : peanut productivity, t/ha
The remaining features are meteorological data for these cities;
they are all numeric and each one has 5 columns, 4 for measurements in different months plus the mean or sum over the period:
_SEP (September), _OCT (October), _NOV (November), _DEC (December) and _ANN (mean or sum)
* temp : air temperature (°C)
* ue : specific humidity (g vapor/kg air)
* ur : relative humidity (%)
* vv : wind speed (m/s)
* tmax : absolute maximum air temperature (°C)
* tmin : absolute minimum air temperature (°C)
* umidsolos : soil moisture (0-100%)
* tamplitude : air temperature amplitude (°C)
* Qo : solar irradiance at the top of the atmosphere (MJ/m2day)
* Prec : rainfall (mm)
* Qg : global solar irradiance (MJ/m2day)
* ETP : potential evapotranspiration (mm)
The months used, September through December, correspond to the main growing cycle in the SP countryside:
* September: planting and vegetative growth
* October: flowering
* November: pod development
* December: maturation and beginning of harvest
```
fonte = 'https://github.com/angeruzzi/Datasource/blob/main/dados_producao_amendoim.xlsx?raw=true'
dados = pd.read_excel(fonte)
#DataFrame size: 64 features and 222 records
dados.shape
dados.info()
dados.head()
pd.set_option('display.max_rows', None, 'display.max_columns', None)
dados.describe().T
```
##**4 - Analysis** <a name="analise"></a>
The data refer to peanut production in 6 cities in the countryside of SP: Jaboticabal, Tupã, Herculândia, Pompéia, Iacri, and Marília, from 1984 to 2020.
```
display(dados['Local'].unique())
display(dados['ano'].unique())
```
Among the locations, Jaboticabal had on average the largest planted area and Marília the smallest.
```
sns.boxplot(x='Local', y='area', data=dados, showmeans=True)
```
For productivity, Marília stood out clearly from the others, whose means were close to each other, and Iacri showed the largest variation.
```
sns.boxplot(x='Local', y='produtividade', data=dados, showmeans=True)
```
Productivity seems to vary more when the planted area is smaller; as the area increases, this variation starts to shrink, staying in the 4-5 thousand range, which is also close to the overall mean productivity. Overall, however, there does not seem to be a relationship between area and productivity.
```
sns.scatterplot(x='area', y='produtividade', data=dados)
```
The year of the record also does not appear to be related to the measured productivity.
```
sns.scatterplot(x='ano', y='produtividade', data=dados)
```
And the presence of outliers in the target variable of the sample is low:
```
dados['produtividade'].plot(kind = 'box');
```
Looking at the correlation using only the features consolidated over the months (_ANN):
```
fig, ax = plt.subplots()
fig.set_size_inches(11.7, 8.27)
sns.heatmap(dados[["produtividade"
,"temp_ANN"
,"ue_ANN"
,"ur_ANN"
,"vv_ANN"
,"tmax_ANN"
,"tmin_ANN"
,"umidsolos_ANN"
,"tamplitude_ANN"
,"Qo_ANN"
,"Prec_ANN"
,"Qg_ANN"
,"ETP_ANN"]].corr(), annot = True, ax = ax)
```
The correlation table makes clear the direct relationship between ETP (potential evapotranspiration) and temperature, since the former is computed from the latter, so I chose to remove the ETP data.
Also, since the intention is to build a prediction model that is independent of location, the Local, ano, and area features will be removed as well.
```
dados = dados.loc[:,'produtividade':'Qg_ANN']
dados.head()
```
##**5 - Modeling** <a name="modelagem"></a>
###**5.1 - Functions** <a name="funcoes"></a>
For data preprocessing I will apply 3 different techniques and evaluate which one gives the best results in the models:
* Standardization (Standard)
* Quantile transformation (Quantile)
* Normalization (Normalizer)
The function below receives the data and returns 3 distinct datasets, one for each of the transformations above.
```
#Data preprocessing
lista_transf = ['Standard', 'Quantile', 'Normalizer']
def transformacao(dadosX):
dadosTX = []
#Standard
escalaS = StandardScaler().fit(dadosX)
transfX = escalaS.transform(dadosX)
    df = pd.DataFrame(transfX, columns=dadosX.columns)  # convert the numpy.ndarray return value into a DataFrame
dadosTX.append(df)
#Quantile
escalaQ = QuantileTransformer().fit(dadosX)
transfX = escalaQ.transform(dadosX)
df = pd.DataFrame(transfX, columns=dadosX.columns)
dadosTX.append(df)
#Normalizer
escalaN = Normalizer().fit(dadosX)
transfX = escalaN.transform(dadosX)
df = pd.DataFrame(transfX, columns=dadosX.columns)
dadosTX.append(df)
return dadosTX
```
To validate the models I will use cross-validation and report 3 different metrics:
* R squared (R2)
* Mean squared error (EQM)
* Mean absolute error (EMA)
This function receives the data and a list of models, runs k-fold cross-validation for each of them, and returns the metrics above.
```
#Model testing
def CompareML_kfold(treinoX, treinoy, lista_de_modelos, nome_dos_modelos, validacao):
lista_medidas = ['r2','neg_mean_squared_error', 'neg_mean_absolute_error']
nome_medidas = ['R2','EQM','EMA']
resultados0 = {}
for i in range(len(lista_de_modelos)):
modelo = lista_de_modelos[i]
testes = cross_validate(modelo, treinoX, treinoy, cv=validacao, scoring=lista_medidas)
r2 = testes['test_r2'].mean()
eqm = testes['test_neg_mean_squared_error'].mean()
ema = testes['test_neg_mean_absolute_error'].mean()
resultados0[nome_dos_modelos[i]] = [r2, eqm, ema]
resultados = pd.DataFrame(resultados0, index = nome_medidas).T
return resultados
```
Given the number of features in the dataset, it is important to check whether there is a subset of variables on which a model can be trained that performs better than, or at least on par with, the full feature set.
To make this evaluation I created the function below; it receives a model, the full test data, and the list of features in the specific order to be checked, runs a series of simulations adding one variable at a time, and returns the results obtained for each subset.
```
#Feature selection
def SelectFeatures(modelo, dadosX, dadosy, validacaoS, features):
i = 0
resultados = pd.DataFrame(columns=['Qtd','Performance'])
for n in np.arange(2,features.count()+1):
featuresS = features.iloc[0:n].index.tolist()
dadosSX = dadosX[featuresS]
testes = cross_validate(modelo, dadosSX, dadosy, cv=validacaoS, scoring='r2')
r2 = testes['test_score'].mean()
resultados.loc[i] = [n, r2]
i += 1
return resultados
```
As a final step, after selecting the model and the feature subset, I will apply hyperparameter tuning to the models to try to improve the results obtained.
The function below makes this check easier: it receives a list of models and a list of parameter grids whose ranges of values are tested with the GridSearchCV function from the scikit-learn library.
```
#Hyperparameter tuning
def Tunagem(modelo, treino, targets, parametros, validacao, score):
search = GridSearchCV(modelo, param_grid = parametros,
scoring = score, cv = validacao,
verbose = 1, n_jobs = -1)
search.fit(treino, targets)
bestModel = search.best_estimator_
bestScore = search.best_score_
bestParam = search.best_params_
return {
'bestModel': bestModel,
'bestScore': bestScore,
'bestParam': bestParam
}
```
###**5.2 - Models** <a name="modelos"></a>
For the tests, 9 regression models will be used, 4 of them
linear models:
* LinearRegression
* LassoLars
* Ridge
* BayesianRidge
and 5 non-linear models:
* KNeighborsRegressor
* DecisionTreeRegressor
* RandomForestRegressor
* GradientBoostingRegressor
* LGBMRegressor
```
#Declaration of the models used in the tests
lista_de_modelos = [
LinearRegression(),
LassoLars(),
Ridge(),
BayesianRidge(),
KNeighborsRegressor(n_neighbors = 5),
DecisionTreeRegressor(max_depth = 7, min_samples_split=3),
RandomForestRegressor(),
GradientBoostingRegressor(),
LGBMRegressor()
]
nome_dos_modelos = [
'LinearReg',
'LassoLars',
'Ridge',
'BayesianRidge',
'Knn',
'DTree',
'RanFor',
'GradBoost',
'LGBM'
]
```
###**5.3 - Initial Test** <a name="teste"></a>
I start by running a test with all the features and the 3 transformations described above.
```
dadosX = dados.loc[:,'temp_SEP':]
dadosy = dados['produtividade']
dadosTX = transformacao(dadosX)
validacao = RepeatedKFold(n_splits = 10, n_repeats = 30)
resultado1 = []
for i in np.arange(len(lista_transf)):
res = CompareML_kfold(dadosTX[i], dadosy, lista_de_modelos, nome_dos_modelos, validacao)
resultado1.append(res)
for i in np.arange(len(resultado1)):
print("Transformação : "+lista_transf[i])
print(resultado1[i])
print()
```
In the test with all the fields, the best results on average came from the *Quantile* transformation, and regardless of the transformation the non-linear methods achieved the best R2, notably *Random Forest* and *LGBM*.
An important observation is that some methods had a negative R2, which means they performed worse than simply predicting the mean of the target on the held-out folds.
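(A tiny illustration of that last point, added as an aside: R2 compares a model's predictions with always predicting the mean, so a model that does worse than the mean gets a negative score.)
```
from sklearn.metrics import r2_score

y_true = [1.0, 2.0, 3.0]
r2_score(y_true, [3.0, 3.0, 3.0])  # worse than predicting the mean (2.0), so R2 < 0
```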
###**5.4 - Feature Selection** <a name="selecao"></a>
Now, to try to obtain a better result, I will apply 2 feature selection techniques:
* highest correlation;
* most important features according to the Random Forest ranking.
####**5.4.1 - Highest Correlation** <a name="correlacao"></a>
In this method I compute the correlation of every independent feature with the target variable, take the absolute value, and sort by the largest values regardless of whether the correlation is positive or negative.
```
df_corProdut = dados.corr()['produtividade']
df_corProdut.drop(labels=['produtividade'],inplace=True)
df_corProdut = df_corProdut.abs()
df_corProdut.sort_values(inplace=True, ascending=False)
df_corProdut
```
With the features ordered by highest correlation I can use the selection test function explained above in "Functions"; for this test I used the LassoLars model, since it was the best linear model in the previous test with all the variables.
```
modelo = LassoLars()
validacao = RepeatedKFold(n_splits = 10, n_repeats = 30)
resultados2 = []
dadosX = dados.loc[:,'temp_SEP':]
dadosy = dados['produtividade']
dadosTX = transformacao(dadosX)
for t in [0,1,2]:
r = SelectFeatures(modelo, dadosTX[t], dadosy, validacao, df_corProdut)
resultados2.append(r)
best_p1 = 0
best_t1 = 0
best_q1 = 0
for t in [0,1,2]:
for q in np.arange(len(resultados2[t])):
if resultados2[t]['Performance'][q] > best_p1:
best_p1 = resultados2[t]['Performance'][q]
best_t1 = t
best_q1 = int(resultados2[t]['Qtd'][q])
```
After running the selection test, the following result was obtained:
```
print("Melhor Performance:", best_p1)
print("com a Qtd de features:", best_q1)
print("na Transformação:", lista_transf[best_t1])
```
Below we can see all the results graphically:
```
fig, axs = plt.subplots(3, figsize = [14,20])
for n in [0,1,2]:
axs[n].plot(resultados2[n]['Qtd'], resultados2[n]['Performance'])
axs[n].set_title(lista_transf[n])
axs[n].grid()
axs[n].set_xticks(resultados2[0]['Qtd'])
axs[n].set_yticks(np.arange(0.00, 0.30,0.01))
axs[0].set(xlabel='Qtd', ylabel='Performance')
plt.show()
```
With the number of features defined, I now select the fields that will be used to test the models.
```
features_correl = df_corProdut.iloc[0:best_q1].index.tolist()
features_correl
```
And I run the models again with the selected features to observe the results.
```
dadosCorX = dados[features_correl]
dadosy = dados['produtividade']
dadosCorTX = transformacao(dadosCorX)
validacao = RepeatedKFold(n_splits = 10, n_repeats = 30)
resultado2 = []
for i in np.arange(len(lista_transf)):
res = CompareML_kfold(dadosCorTX[i], dadosy, lista_de_modelos, nome_dos_modelos, validacao)
resultado2.append(res)
for i in np.arange(len(resultado2)):
print("Transformação : "+lista_transf[i])
print(resultado2[i])
print()
```
The expectation was that at the end of the test it would be possible to determine which combination of features gives the best result for a linear model, since we selected the features with the highest correlation, and there was indeed an average improvement in the results of these models.
However, I also found that the best results obtained with the linear models did not beat those of the non-linear models, which in turn should improve even further when selection techniques specific to those models are used.
So this feature selection methodology is probably not the best option for this problem.
####**5.4.2 - Important Features** <a name="fimportantes"></a>
Using Random Forest we can take advantage of an attribute of the model itself to identify which features it finds most important.
First I fit the model with all the variables and then store the list of variables sorted by the "feature importances" information produced by the model.
```
modelo = RandomForestRegressor()
dadosX = dados.loc[:,'temp_SEP':]
dadosy = dados['produtividade']
dadosTX = transformacao(dadosX)
modelo.fit(dadosTX[0], dadosy)
variaveis = pd.DataFrame()
variaveis['variavel'] = dadosTX[0].columns
variaveis['importância'] = modelo.feature_importances_
variaveis.sort_values(by = 'importância', ascending = False, inplace = True)
features_import = pd.Series(data=variaveis['importância'].array, index=variaveis['variavel'].array)
features_import
```
With the ordered list of features I can run the simulations again using the feature selection function.
```
validacao = RepeatedKFold(n_splits = 10, n_repeats = 30)
resultados3 = []
for t in [0,1,2]:
r = SelectFeatures(modelo, dadosTX[t], dadosy, validacao, features_import)
resultados3.append(r)
best_p2 = 0
best_t2 = 0
best_q2 = 0
for t in [0,1,2]:
for q in np.arange(len(resultados3[t])):
if resultados3[t]['Performance'][q] > best_p2:
best_p2 = resultados3[t]['Performance'][q]
best_t2 = t
best_q2 = int(resultados3[t]['Qtd'][q])
```
And now we look at the best result obtained:
```
print("Melhor Performance:", best_p2)
print("com a Qtd de features:", best_q2)
print("na Transformação:", lista_transf[best_t2])
fig, axs = plt.subplots(3, figsize = [14,40])
for n in [0,1,2]:
axs[n].plot(resultados3[n]['Qtd'], resultados3[n]['Performance'])
axs[n].set_title(lista_transf[n])
axs[n].grid()
axs[n].set_xticks(resultados3[0]['Qtd'])
axs[n].set_yticks(np.arange(0.0, 0.40,0.01))
axs[0].set(xlabel='Qtd', ylabel='Performance')
plt.show()
```
With the selection data I define the subset of features that obtained the best score and run the model tests again.
```
features_import_selec = variaveis.iloc[0:best_q2,:1]
features_import_selec = features_import_selec['variavel'].values.tolist()
features_import_selec
dadosFeatImpX = dados[features_import_selec]
dadosy = dados['produtividade']
dadosFeatImpTX = transformacao(dadosFeatImpX)
resultado4 = []
for i in np.arange(len(lista_transf)):
res = CompareML_kfold(dadosFeatImpTX[i], dadosy, lista_de_modelos, nome_dos_modelos, validacao)
resultado4.append(res)
for i in np.arange(len(resultado4)):
print("Transformação : "+lista_transf[i])
print(resultado4[i])
print()
```
With the new feature selection the best results were again *Random Forest* and *Gradient Boosting*, again with the *Standard* transformation.
###**5.5 - Hyperparameter Tuning** <a name="hipertunagem"></a>
Having identified the model, the feature subset, and the transformation that gave the best result, I will try to optimize the model with hyperparameter tuning, testing a few combinations to see whether they improve the results obtained.
```
validacao = RepeatedKFold(n_splits = 10, n_repeats = 30)
testeTunModels = [RandomForestRegressor()]
testeTunParams = [
{
'min_samples_split': [2, 5, 10],
'min_samples_leaf': [1, 3, 5],
'max_depth' : [2, 4, 5, 6, 7],
'n_estimators': [50, 100, 125, 150, 175]
}
]
dadosHipX = dados[features_import_selec]
dadosy = dados['produtividade']
#Standard
escalaS = StandardScaler().fit(dadosHipX)
transfX = escalaS.transform(dadosHipX)
df = pd.DataFrame(transfX, columns=dadosHipX.columns)  # convert the numpy.ndarray return value into a DataFrame
dadosHipTX = df
for i in range(len(testeTunModels)):
ret = Tunagem(testeTunModels[i], dadosHipTX, dadosy, testeTunParams[i], validacao, 'r2')
print(ret)
```
In the *Random Forest* tuning I was able to improve the result with a few changes to the parameters used.
```
#Best Selection
#'bestParam': {'max_depth': 6, 'min_samples_leaf': 1, 'min_samples_split': 2, 'n_estimators': 150}
#'bestScore': 0.3512870654721305
```
###**5.6 - Final Model** <a name="final"></a>
With all the results and parameters obtained in the previous steps, the final model can now be configured.
```
dadosFX = dados[features_import_selec]
dadosy = dados['produtividade']
#Transformation
escalaS = StandardScaler().fit(dadosFX)
transfX = escalaS.transform(dadosFX)
df = pd.DataFrame(transfX, columns=dadosFX.columns)
dadosFTX = df
modelo = RandomForestRegressor(max_depth=6, min_samples_leaf=1, min_samples_split=2, n_estimators=150)
modelo.fit(dadosFTX, dadosy)
```
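As a quick sanity check (an aside, not part of the original analysis): the in-sample R2 below is computed on the training data, so it will look better than the cross-validated ~0.35 reported earlier. The commented lines sketch how new data could be scored, assuming a hypothetical `novos_dados` DataFrame with the same raw columns.
```
from sklearn.metrics import r2_score

# In-sample fit of the final model (optimistic compared with cross-validation)
pred = modelo.predict(dadosFTX)
print("In-sample R2:", r2_score(dadosy, pred))

# Scoring new weather data would reuse the scaler fitted above, e.g.:
# novos_X = pd.DataFrame(escalaS.transform(novos_dados[features_import_selec]),
#                        columns=features_import_selec)
# modelo.predict(novos_X)
```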
##**6 - Conclusion** <a name="conclusao"></a>
In the tests it was possible to obtain a mean R2 of about 0.35. This metric indicates that meteorological data alone can explain up to 35% of the variability of peanut productivity, from which we can conclude that:
* environmental factors are indeed significant for the harvest outcome;
* but these factors alone are insufficient for a more precise prediction.
It is quite likely that including other factors in the analysis, such as soil conditions, seed quality, and crop management characteristics, would improve the model's productivity predictions.
In addition, a larger number of records, preferably covering more regions, could make the model more reliable.
An interesting follow-up analysis is to look at the main features selected both by the final Random Forest method and by the highest-correlation selection; crossing the two selections (see the short check after this list), the highlights are:
* rainfall at the start of the cycle (September);
* temperature, mainly in October;
* solar irradiance in September and November;
* and wind speed over practically the whole cycle, which may be related both to climate regulation and to some physical impact on the plants.
A better understanding of the impact of these variables on the crop could lead to techniques or practices that give more control over harvest outcomes.
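The overlap mentioned above can be checked directly in code (an aside, using the two feature lists built earlier in the notebook):
```
# Features selected by both the correlation ranking and the Random Forest importances
sorted(set(features_correl) & set(features_import_selec))
```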
##**7 - Sources** <a name="fontes"></a>
**Peanut production data**:
provided by Prof. Dr. Glauco de Souza Rolim - Unesp / Peanut Tech mini-course
**Other information**:
* Embrapa - Peanut Production System: https://www.spo.cnptia.embrapa.br/conteudo?p_p_id=conteudoportlet_WAR_sistemasdeproducaolf6_1ga1ceportlet - Accessed on 01/11/2021
* Revista Globo Rural: https://revistagloborural.globo.com/Noticias/Empresas-e-Negocios/noticia/2021/06/exportacao-de-amendoim-natura-brasileiro-cresce-12-em-meio-pandemia.html - Accessed on 01/11/2021
# Odds and Addends
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
```
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py and create directories
import os
if not os.path.exists('utils.py'):
!wget https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py
from utils import set_pyplot_params
set_pyplot_params()
```
This chapter presents a new way to represent a degree of certainty, **odds**, and a new form of Bayes's Theorem, called **Bayes's Rule**.
Bayes's Rule is convenient if you want to do a Bayesian update on paper or in your head.
It also sheds light on the important idea of **evidence** and how we can quantify the strength of evidence.
The second part of the chapter is about "addends", that is, quantities being added, and how we can compute their distributions.
We'll define functions that compute the distribution of sums, differences, products, and other operations.
Then we'll use those distributions as part of a Bayesian update.
## Odds
One way to represent a probability is with a number between 0 and 1, but that's not the only way.
If you have ever bet on a football game or a horse race, you have probably encountered another representation of probability, called **odds**.
You might have heard expressions like "the odds are three to one", but you might not know what that means.
The **odds in favor** of an event are the ratio of the probability
it will occur to the probability that it will not.
The following function does this calculation.
```
def odds(p):
return p / (1-p)
```
For example, if my team has a 75% chance of winning, the odds in their favor are three to one, because the chance of winning is three times the chance of losing.
```
odds(0.75)
```
You can write odds in decimal form, but it is also common to
write them as a ratio of integers.
So "three to one" is sometimes written $3:1$.
When probabilities are low, it is more common to report the
**odds against** rather than the odds in favor.
For example, if my horse has a 10% chance of winning, the odds in favor are $1:9$.
```
odds(0.1)
```
But in that case it would be more common to say that the odds against are $9:1$.
```
odds(0.9)
```
Given the odds in favor, in decimal form, you can convert to probability like this:
```
def prob(o):
return o / (o+1)
```
For example, if the odds are $3/2$, the corresponding probability is $3/5$:
```
prob(3/2)
```
Or if you represent odds with a numerator and denominator, you can convert to probability like this:
```
def prob2(yes, no):
return yes / (yes + no)
prob2(3, 2)
```
Probabilities and odds are different representations of the
same information.
Given either one, you can compute the other.
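As a quick round-trip check (an aside using the two functions above):
```
p = 0.75
prob(odds(p))  # returns 0.75 again
```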
## Bayes's Rule
So far we have worked with Bayes's theorem in the "probability form":
$$P(H|D) = \frac{P(H)~P(D|H)}{P(D)}$$
Writing $\mathrm{odds}(A)$ for odds in favor of $A$, we can express Bayes's Theorem in "odds form":
$$\mathrm{odds}(A|D) = \mathrm{odds}(A)~\frac{P(D|A)}{P(D|B)}$$
This is Bayes's Rule, which says that the posterior odds are the prior odds times the likelihood ratio.
Bayes's Rule is convenient for computing a Bayesian update on paper or in your head. For example, let's go back to the cookie problem:
> Suppose there are two bowls of cookies. Bowl 1 contains 30 vanilla cookies and 10 chocolate cookies. Bowl 2 contains 20 of each. Now suppose you choose one of the bowls at random and, without looking, select a cookie at random. The cookie is vanilla. What is the probability that it came from Bowl 1?
The prior probability is 50%, so the prior odds are 1. The likelihood ratio is $\frac{3}{4} / \frac{1}{2}$, or $3/2$. So the posterior odds are $3/2$, which corresponds to probability $3/5$.
```
prior_odds = 1
likelihood_ratio = (3/4) / (1/2)
post_odds = prior_odds * likelihood_ratio
post_odds
post_prob = prob(post_odds)
post_prob
```
If we draw another cookie and it's chocolate, we can do another update:
```
likelihood_ratio = (1/4) / (1/2)
post_odds *= likelihood_ratio
post_odds
```
And convert back to probability.
```
post_prob = prob(post_odds)
post_prob
```
## Oliver's blood
I’ll use Bayes’s Rule to solve another problem from MacKay’s
[*Information Theory, Inference, and Learning Algorithms*](https://www.inference.org.uk/mackay/itila/):
> Two people have left traces of their own blood at the scene of a crime. A suspect, Oliver, is tested and found to have type ‘O’ blood. The blood groups of the two traces are found to be of type ‘O’ (a common type in the local population, having frequency 60%) and of type ‘AB’ (a rare type, with frequency 1%). Do these data \[the traces found at the scene\] give evidence in favor of the proposition that Oliver was one of the people \[who left blood at the scene\]?
To answer this question, we need to think about what it means for data
to give evidence in favor of (or against) a hypothesis. Intuitively, we might say that data favor a hypothesis if the hypothesis is more likely in light of the data than it was before.
In the cookie problem, the prior odds are $1$, or probability 50%. The
posterior odds are $3/2$, or probability 60%. So the vanilla cookie is
evidence in favor of Bowl 1.
Bayes's Rule provides a way to make this intuition more precise. Again
$$\mathrm{odds}(A|D) = \mathrm{odds}(A)~\frac{P(D|A)}{P(D|B)}$$
Dividing through by $\mathrm{odds}(A)$, we get:
$$\frac{\mathrm{odds}(A|D)}{\mathrm{odds}(A)} = \frac{P(D|A)}{P(D|B)}$$
The term on the left is the ratio of the posterior and prior odds. The term on the right is the likelihood ratio, also called the **Bayes
factor**.
If the Bayes factor is greater than 1, that means that the data were
more likely under $A$ than under $B$. And that means that the odds are
greater, in light of the data, than they were before.
If the Bayes factor is less than 1, that means the data were less likely under $A$ than under $B$, so the odds in favor of $A$ go down.
Finally, if the Bayes factor is exactly 1, the data are equally likely
under either hypothesis, so the odds do not change.
Let's apply that to the problem at hand. If Oliver is one of the people who left blood at the crime scene, he accounts for the ‘O’ sample; in that case, the probability of the data is the probability that a random member of the population has type ‘AB’ blood, which is 1%.
If Oliver did not leave blood at the scene, we have two samples to
account for.
If we choose two random people from the population, what is the chance of finding one with type ‘O’ and one with type ‘AB’?
Well, there are two ways it might happen:
* The first person might have ‘O’ and the second ‘AB’,
* Or the first person might have ‘AB’ and the second ‘O’.
The probability of either combination is $(0.6) (0.01)$, which is 0.6%, so the total probability is twice that, or 1.2%.
So the data are a little more likely if Oliver is *not* one of the people who left blood at the scene.
We can use these probabilities to compute the likelihood ratio:
```
like1 = 0.01
like2 = 2 * 0.6 * 0.01
likelihood_ratio = like1 / like2
likelihood_ratio
```
Since the likelihood ratio is less than 1, the blood tests are evidence *against* the hypothesis that Oliver left blood at the scene.
But it is weak evidence. For example, if the prior odds were 1 (that is, 50% probability), the posterior odds would be 0.83, which corresponds to a probability of 45%:
```
post_odds = 1 * like1 / like2
prob(post_odds)
```
So this evidence doesn't "move the needle" very much.
This example is a little contrived, but it demonstrates the
counterintuitive result that data *consistent* with a hypothesis are
not necessarily *in favor of* the hypothesis.
If this result still bothers you, this way of thinking might help: the
data consist of a common event, type ‘O’ blood, and a rare event, type
‘AB’ blood. If Oliver accounts for the common event, that leaves the
rare event unexplained. If Oliver doesn’t account for the ‘O’ blood, we
have two chances to find someone in the population with ‘AB’ blood. And
that factor of two makes the difference.
**Exercise:** Suppose other evidence made you 90% confident of Oliver's guilt. How much would this exculpatory evidence change your beliefs? What if you initially thought there was only a 10% chance of his guilt?
```
# Solution
post_odds = odds(0.9) * like1 / like2
prob(post_odds)
# Solution
post_odds = odds(0.1) * like1 / like2
prob(post_odds)
```
## Addends
The second half of this chapter is about distributions of sums and results of other operations.
We'll start with a forward problem, where we are given the inputs and compute the distribution of the output.
Then we'll work on inverse problems, where we are given the outputs and we compute the distribution of the inputs.
As a first example, suppose you roll two dice and add them up. What is the distribution of the sum?
I’ll use the following function to create a `Pmf` that represents the
possible outcomes of a die:
```
import numpy as np
from empiricaldist import Pmf
def make_die(sides):
outcomes = np.arange(1, sides+1)
die = Pmf(1/sides, outcomes)
return die
```
On a six-sided die, the outcomes are 1 through 6, all
equally likely.
```
die = make_die(6)
from utils import decorate
die.bar(alpha=0.4)
decorate(xlabel='Outcome',
ylabel='PMF')
```
If we roll two dice and add them up, there are 11 possible outcomes, 2
through 12, but they are not equally likely. To compute the distribution
of the sum, we have to enumerate the possible outcomes.
And that's how this function works:
```
def add_dist(pmf1, pmf2):
"""Compute the distribution of a sum."""
res = Pmf()
for q1, p1 in pmf1.items():
for q2, p2 in pmf2.items():
q = q1 + q2
p = p1 * p2
res[q] = res(q) + p
return res
```
The parameters are `Pmf` objects representing distributions.
The loops iterate through the quantities and probabilities in the `Pmf` objects.
Each time through the loop `q` gets the sum of a pair of quantities, and `p` gets the probability of the pair.
Because the same sum might appear more than once, we have to add up the total probability for each sum.
Notice a subtle element of this line:
```
res[q] = res(q) + p
```
I use parentheses on the right side of the assignment, which returns 0 if `q` does not appear yet in `res`.
I use brackets on the left side of the assignment to create or update an element in `res`; using parentheses on the left side would not work.
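For example (a small illustration I've added, using an empty `Pmf`):
```
pmf = Pmf()
print(pmf(3))   # parentheses: returns 0 because 3 is not in pmf yet
pmf[3] = 0.5    # brackets: create (or update) the element
print(pmf(3))   # now returns 0.5
```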
`Pmf` provides `add_dist`, which does the same thing.
You can call it as a method, like this:
```
twice = die.add_dist(die)
```
Or as a function, like this:
```
twice = Pmf.add_dist(die, die)
```
And here's what the result looks like:
```
from utils import decorate
def decorate_dice(title=''):
decorate(xlabel='Outcome',
ylabel='PMF',
title=title)
twice = add_dist(die, die)
twice.bar(color='C1', alpha=0.5)
decorate_dice()
```
If we have a sequence of `Pmf` objects that represent dice, we can compute the distribution of the sum like this:
```
def add_dist_seq(seq):
"""Compute Pmf of the sum of values from seq."""
total = seq[0]
for other in seq[1:]:
total = total.add_dist(other)
return total
```
As an example, we can make a list of three dice like this:
```
dice = [die] * 3
```
And we can compute the distribution of their sum like this.
```
thrice = add_dist_seq(dice)
```
The following figure shows what these three distributions look like:
- The distribution of a single die is uniform from 1 to 6.
- The sum of two dice has a triangle distribution between 2 and 12.
- The sum of three dice has a bell-shaped distribution between 3
and 18.
```
import matplotlib.pyplot as plt
die.plot(label='once')
twice.plot(label='twice')
thrice.plot(label='thrice')
plt.xticks([0,3,6,9,12,15,18])
decorate_dice(title='Distributions of sums')
```
As an aside, this example demonstrates the Central Limit Theorem, which says that the distribution of a sum converges on a bell-shaped normal distribution, at least under some conditions.
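As a rough check of that claim (an aside, assuming SciPy is available): a single die has mean 3.5 and variance 35/12, so the sum of ten dice should be close to a normal distribution with mean 35 and standard deviation $\sqrt{10 \cdot 35/12}$.
```
from scipy.stats import norm

n_dice = 10
tens = add_dist_seq([die] * n_dice)

mu = n_dice * 3.5                   # mean of the sum of n_dice dice
sigma = np.sqrt(n_dice * 35 / 12)   # standard deviation of the sum

# with unit spacing between outcomes, PMF values are comparable to PDF values
tens.plot(label='sum of 10 dice')
plt.plot(tens.qs, norm.pdf(tens.qs, mu, sigma), ':', label='normal approximation')
decorate_dice(title='Sum of 10 dice vs. normal approximation')
```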
## Gluten sensitivity
In 2015 I read a paper that tested whether people diagnosed with gluten sensitivity (but not celiac disease) were able to distinguish gluten flour from non-gluten flour in a blind challenge
([you can read the paper here](https://onlinelibrary.wiley.com/doi/full/10.1111/apt.13372)).
Out of 35 subjects, 12 correctly identified the gluten flour based on
resumption of symptoms while they were eating it. Another 17 wrongly
identified the gluten-free flour based on their symptoms, and 6 were
unable to distinguish.
The authors conclude, "Double-blind gluten challenge induces symptom
recurrence in just one-third of patients."
This conclusion seems odd to me, because if none of the patients were
sensitive to gluten, we would expect some of them to identify the gluten flour by chance.
So here's the question: based on this data, how many of the subjects are sensitive to gluten and how many are guessing?
We can use Bayes's Theorem to answer this question, but first we have to make some modeling decisions. I'll assume:
- People who are sensitive to gluten have a 95% chance of correctly
identifying gluten flour under the challenge conditions, and
- People who are not sensitive have a 40% chance of identifying the
gluten flour by chance (and a 60% chance of either choosing the
other flour or failing to distinguish).
These particular values are arbitrary, but the results are not sensitive to these choices.
I will solve this problem in two steps. First, assuming that we know how many subjects are sensitive, I will compute the distribution of the data.
Then, using the likelihood of the data, I will compute the posterior distribution of the number of sensitive patients.
The first is the **forward problem**; the second is the **inverse
problem**.
## The forward problem
Suppose we know that 10 of the 35 subjects are sensitive to gluten. That
means that 25 are not:
```
n = 35
n_sensitive = 10
n_insensitive = n - n_sensitive
```
Each sensitive subject has a 95% chance of identifying the gluten flour,
so the number of correct identifications follows a binomial distribution.
I'll use `make_binomial`, which we defined in Section xx, to make a `Pmf` that represents the binomial distribution.
```
from utils import make_binomial
dist_sensitive = make_binomial(n_sensitive, 0.95)
dist_insensitive = make_binomial(n_insensitive, 0.40)
```
The results are the distributions for the number of correct identifications in each group.
Now we can use `add_dist` to compute the total number of correct identifications:
```
dist_total = Pmf.add_dist(dist_sensitive, dist_insensitive)
```
Here are the results:
```
dist_sensitive.plot(label='sensitive', linestyle='dashed')
dist_insensitive.plot(label='insensitive', linestyle='dashed')
dist_total.plot(label='total')
decorate(xlabel='Number of correct identifications',
ylabel='PMF',
title='Gluten sensitivity')
```
We expect most of the sensitive subjects to identify the gluten flour correctly.
Of the 25 insensitive subjects, we expect about 10 to identify the gluten flour by chance.
So we expect about 20 correct identifications in total.
This is the answer to the forward problem: given the number of sensitive subjects, we can compute the distribution of the data.
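As a quick check on that arithmetic, the means of the three distributions (assuming they are still in scope from above) should be about 9.5, 10, and 19.5:
```
dist_sensitive.mean(), dist_insensitive.mean(), dist_total.mean()
```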
## The inverse problem
Now let's solve the inverse problem: given the data, we'll compute the posterior distribution of the number of sensitive subjects.
Here's how. I'll loop through the possible values of `n_sensitive` and compute the distribution of the data for each:
```
import pandas as pd
table = pd.DataFrame()
for n_sensitive in range(0, n+1):
n_insensitive = n - n_sensitive
dist_sensitive = make_binomial(n_sensitive, 0.95)
dist_insensitive = make_binomial(n_insensitive, 0.4)
dist_total = Pmf.add_dist(dist_sensitive, dist_insensitive)
table[n_sensitive] = dist_total
```
The loop enumerates the possible values of `n_sensitive`.
For each value, it computes the distribution of the total number of correct identifications, and stores the result as a column in a Pandas `DataFrame`.
```
table.head(3)
```
The following figure shows selected columns from the `DataFrame`, corresponding to different hypothetical values of `n_sensitive`:
```
table[0].plot(label='n_sensitive = 0')
table[10].plot(label='n_sensitive = 10')
table[20].plot(label='n_sensitive = 20', linestyle='dashed')
table[30].plot(label='n_sensitive = 30', linestyle='dotted')
decorate(xlabel='Number of correct identifications',
ylabel='PMF',
title='Gluten sensitivity')
```
Now we can use this table to compute the likelihood of the data:
```
likelihood1 = table.loc[12]
```
`loc` selects a row from the `DataFrame`.
The row with index 12 contains the probability of 12 correct identifications for each hypothetical value of `n_sensitive`.
And that's exactly the likelihood we need to do a Bayesian update.
I'll use a uniform prior, which implies that I would be equally surprised by any value of `n_sensitive`:
```
hypos = np.arange(n+1)
prior = Pmf(1, hypos)
```
And here's the update:
```
posterior1 = prior * likelihood1
posterior1.normalize()
```
For comparison, I also compute the posterior for another possible outcome, 20 correct identifications.
```
likelihood2 = table.loc[20]
posterior2 = prior * likelihood2
posterior2.normalize()
```
The following figure shows posterior distributions of `n_sensitive` based on the actual data, 12 correct identifications, and the other possible outcome, 20 correct identifications.
```
posterior1.plot(label='posterior with 12 correct')
posterior2.plot(label='posterior with 20 correct')
decorate(xlabel='Number of sensitive subjects',
ylabel='PMF',
title='Posterior distributions')
```
With 12 correct identifications, the most likely conclusion is that none of the subjects are sensitive to gluten.
If there had been 20 correct identifications, the most likely conclusion would be that 11-12 of the subjects were sensitive.
```
posterior1.max_prob()
posterior2.max_prob()
```
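Beyond the most probable value, we can summarize the spread of each posterior. Here is a sketch using `credible_interval`, which also appears in the exercises below:
```
posterior1.credible_interval(0.9), posterior2.credible_interval(0.9)
```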
## Summary
This chapter presents two topics that are almost unrelated except that they make the title of the chapter catchy.
The first part of the chapter is about Bayes's Rule, evidence, and how we can quantify the strength of evidence using a likelihood ratio or Bayes factor.
The second part is about functions that compute the distribution of a sum, product, or the result of another binary operation.
We can use these functions to solve forward problems and inverse problems; that is, given the parameters of a system, we can compute the distribution of the data or, given the data, we can compute the distribution of the parameters.
In the next chapter, we'll compute distributions for minimums and maximums, and use them to solve more Bayesian problems.
But first you might want to work on these exercises.
## Exercises
**Exercise:** Let's use Bayes's Rule to solve the Elvis problem from Chapter xxx:
> Elvis Presley had a twin brother who died at birth. What is the probability that Elvis was an identical twin?
In 1935, about 2/3 of twins were fraternal and 1/3 were identical.
The question contains two pieces of information we can use to update this prior.
* First, Elvis's twin was also male, which is more likely if they were identical twins, with a likelihood ratio of 2.
* Also, Elvis's twin died at birth, which is more likely if they were identical twins, with a likelihood ratio of 1.25.
If you are curious about where those numbers come from, I wrote [a blog post about it](https://www.allendowney.com/blog/2020/01/28/the-elvis-problem-revisited).
```
# Solution
prior_odds = odds(1/3)
# Solution
post_odds = prior_odds * 2 * 1.25
# Solution
prob(post_odds)
```
**Exercise:** The following is an [interview question that appeared on glassdoor.com](https://www.glassdoor.com/Interview/You-re-about-to-get-on-a-plane-to-Seattle-You-want-to-know-if-you-should-bring-an-umbrella-You-call-3-random-friends-of-y-QTN_519262.htm), attributed to Facebook:
> You're about to get on a plane to Seattle. You want to know if you should bring an umbrella. You call 3 random friends of yours who live there and ask each independently if it's raining. Each of your friends has a 2/3 chance of telling you the truth and a 1/3 chance of messing with you by lying. All 3 friends tell you that "Yes" it is raining. What is the probability that it's actually raining in Seattle?
Use Bayes's Rule to solve this problem. As a prior you can assume that it rains in Seattle about 10% of the time.
This question causes some confusion about the differences between Bayesian and frequentist interpretations of probability; if you are curious about this point, [I wrote a blog article about it](http://allendowney.blogspot.com/2016/09/bayess-theorem-is-not-optional.html).
```
# Solution
prior_odds = odds(0.1)
# Solution
post_odds = prior_odds * 2 * 2 * 2
# Solution
prob(post_odds)
```
**Exercise:** [According to the CDC](https://www.cdc.gov/tobacco/data_statistics/fact_sheets/health_effects/effects_cig_smoking), people who smoke are about 25 times more likely to develop lung cancer than nonsmokers.
[Also according to the CDC](https://www.cdc.gov/tobacco/data_statistics/fact_sheets/adult_data/cig_smoking/index.htm), about 14\% of adults in the U.S. are smokers.
If you learn that someone has lung cancer, what is the probability they are a smoker?
```
# Solution
prior_odds = odds(0.14)
# Solution
post_odds = prior_odds * 25
# Solution
prob(post_odds)
```
**Exercise:** In *Dungeons & Dragons*, the amount of damage a goblin can withstand is the sum of two six-sided dice. The amount of damage you inflict with a short sword is determined by rolling one six-sided die.
A goblin is defeated if the total damage you inflict is greater than or equal to the amount it can withstand.
Suppose you are fighting a goblin and you have already inflicted 3 points of damage. What is your probability of defeating the goblin with your next successful attack?
Hint: You can use `Pmf.add_dist` to add a constant amount, like 3, to a `Pmf` and `Pmf.sub_dist` to compute the distribution of remaining points.
```
# Solution
d6 = make_die(6)
# Solution
# The amount the goblin can withstand is the sum of two d6
hit_points = Pmf.add_dist(d6, d6)
# Solution
# The total damage after a second attack is one d6 + 3
damage = Pmf.add_dist(d6, 3)
# Solution
# Here's what the distributions look like
hit_points.plot(label='Hit points')
damage.plot(label='Total damage')
decorate_dice('The Goblin Problem')
# Solution
# Here's the distribution of points the goblin has left
points_left = Pmf.sub_dist(hit_points, damage)
# Solution
# And here's the probability the goblin is dead
points_left.prob_le(0)
```
**Exercise:** Suppose I have a box with a 6-sided die, an 8-sided die, and a 12-sided die.
I choose one of the dice at random, roll it twice, multiply the outcomes, and report that the product is 12.
What is the probability that I chose the 8-sided die?
Hint: `Pmf` provides a function called `mul_dist` that takes two `Pmf` objects and returns a `Pmf` that represents the distribution of the product.
```
# Solution
hypos = [6, 8, 12]
prior = Pmf(1, hypos)
# Solution
# Here's the distribution of the product for the 4-sided die
d4 = make_die(4)
Pmf.mul_dist(d4, d4)
# Solution
# Here's the likelihood of getting a 12 for each die
likelihood = []
for sides in hypos:
die = make_die(sides)
pmf = Pmf.mul_dist(die, die)
likelihood.append(pmf[12])
likelihood
# Solution
# And here's the update
posterior = prior * likelihood
posterior.normalize()
posterior
```
**Exercise:** *Betrayal at House on the Hill* is a strategy game in which characters with different attributes explore a haunted house. Depending on their attributes, the characters roll different numbers of dice. For example, if attempting a task that depends on knowledge, Professor Longfellow rolls 5 dice, Madame Zostra rolls 4, and Ox Bellows rolls 3. Each die yields 0, 1, or 2 with equal probability.
If a randomly chosen character attempts a task three times and rolls a total of 3 on the first attempt, 4 on the second, and 5 on the third, which character do you think it was?
```
# Solution
die = Pmf(1/3, [0,1,2])
die
# Solution
pmfs = {}
pmfs['Bellows'] = add_dist_seq([die]*3)
pmfs['Zostra'] = add_dist_seq([die]*4)
pmfs['Longfellow'] = add_dist_seq([die]*5)
# Solution
pmfs['Zostra'](4)
# Solution
pmfs['Zostra']([3,4,5]).prod()
# Solution
hypos = pmfs.keys()
prior = Pmf(1/3, hypos)
prior
# Solution
likelihood = prior.copy()
for hypo in hypos:
likelihood[hypo] = pmfs[hypo]([3,4,5]).prod()
likelihood
# Solution
posterior = (prior * likelihood)
posterior.normalize()
posterior
```
**Exercise:** There are 538 members of the United States Congress.
Suppose we audit their investment portfolios and find that 312 of them out-perform the market.
Let's assume that an honest member of Congress has only a 50% chance of out-performing the market, but a dishonest member who trades on inside information has a 90% chance. How many members of Congress are honest?
```
# Solution
n = 538
table = pd.DataFrame()
for n_honest in range(0, n+1):
n_dishonest = n - n_honest
dist_honest = make_binomial(n_honest, 0.5)
dist_dishonest = make_binomial(n_dishonest, 0.9)
dist_total = Pmf.add_dist(dist_honest, dist_dishonest)
table[n_honest] = dist_total
table.shape
# Solution
data = 312
likelihood = table.loc[312]
len(likelihood)
# Solution
hypos = np.arange(n+1)
prior = Pmf(1, hypos)
len(prior)
# Solution
posterior = prior * likelihood
posterior.normalize()
posterior.mean()
# Solution
posterior.plot(label='posterior')
decorate(xlabel='Number of honest members of Congress',
ylabel='PMF')
# Solution
posterior.max_prob()
# Solution
posterior.credible_interval(0.9)
```
```
import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler, RobustScaler
from sklearn.model_selection import train_test_split
import torch
import torch.nn as nn
import torch.nn.functional as F
import matplotlib.pyplot as plt
train_data = pd.read_csv('data/heart.csv')
train_data.head()
## removing the outliers
#df1 = train_data[train_data['trestbps'] < (train_data['trestbps'].mean() + train_data['trestbps'].std()*3)]
#df2 = df1[df1['chol'] < (df1['chol'].mean() + df1['chol'].std()*3)]
#df3 = df2[df2['thalach'] < (df2['thalach'].mean() + df2['thalach'].std()*3)]
#df4 = df3[df3['oldpeak'] < (df3['oldpeak'].mean() + df3['oldpeak'].std()*3)]
#train_data = df4.copy()
#train_data.head()
def onehot_encode(df, column_dict):
df = df.copy()
for column, prefix in column_dict.items():
dummies = pd.get_dummies(df[column], prefix=prefix)
df = pd.concat([df, dummies] , axis=1)
df = df.drop(column, axis=1)
return df
def preprocess_inputs(df, scaler):
df = df.copy()
# One-hot encode the nominal features
nominal_features = ['cp', 'slope', 'thal']
df = onehot_encode(df, dict(zip(nominal_features, ['cp','slope','thal'])))
X = df.drop('target', axis=1).copy()
y = df['target'].copy()
#Scale X
X = pd.DataFrame(scaler.fit_transform(X), columns=X.columns)
return X, y
X, y = preprocess_inputs(train_data, StandardScaler())
X.head()
_X = torch.FloatTensor(X.values)
_y = torch.LongTensor(y.values)
X_train, X_test, y_train, y_test = train_test_split(_X,_y,test_size=0.20)
X_train[0]
y_train[0]
class Model(nn.Module):
def __init__(self, inp=21, h1=42, h2=20, out=2):
super().__init__()
self.f1 = nn.Linear(inp, h1)
self.f2 = nn.Linear(h1, h2)
#self.f3 = nn.Linear(h2, h3)
self.out = nn.Linear(h2, out)
#self.dropout1 = nn.Dropout(0.2)
#self.dropout2 = nn.Dropout(0.5)
def forward(self, x):
x = F.relu(self.f1(x))
#x = self.dropout1(x)
x = F.relu(self.f2(x))
#x = self.dropout2(x)
#x = F.relu(self.f3(x))
        x = torch.sigmoid(self.out(x))  # note: nn.CrossEntropyLoss (used below) expects raw logits, so this sigmoid is not strictly needed and can limit learning
return x
model = Model()
model
loss_function = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
epochs = 300
losses = []
for i in range(epochs):
i += 1
y_predict = model.forward(X_train) ## predict using the model
loss = loss_function(y_predict, y_train) ## calculate the loss
    losses.append(loss.item())  ## store the scalar loss so it can be plotted later
if i%10==0:
print(f'Epochs {i} ; Loss : {loss.item()}')
    optimizer.zero_grad()  ## zero the gradients so they don't accumulate across iterations
    loss.backward()  ## backpropagate to compute the gradients
    optimizer.step()  ## update the weights
plt.plot(range(epochs), losses)
plt.title("Loss Graph")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.show()
correct = 0
with torch.no_grad():
for i, data in enumerate(X_test):
y_val = model.forward(data)
#print(f'Row:{i} \t {y_val} \t {y_val.argmax()} \t {y_test[i]}')
if y_val.argmax() == y_test[i]:
correct += 1
print(f'{correct} out of {len(y_test)} = {100*correct/len(y_test):.2f}% correct')
with torch.no_grad():
y_test_val = model.forward(X_test)
loss = loss_function(y_test_val, y_test)
print(f'loss: {loss:.8f}')
torch.save(model.state_dict(),'model/medical-heart-pred-nn-20210717-2.pt')
```
## Predicting Test data
```
test_data = pd.read_excel('data/MedicalData_cts.xlsx', sheet_name='test', engine='openpyxl')
test_data.head()
test_data.info()
X_pred, _ = preprocess_inputs(test_data, StandardScaler())
X_pred.head()
new_model = Model()
new_model.load_state_dict(torch.load('model/medical-heart-pred-nn-20210717-2.pt'))
new_model.eval()
X_pred_tensor = torch.FloatTensor(X_pred.values)
preds = []
with torch.no_grad():
for i, data in enumerate(X_pred_tensor):
predictions = new_model.forward(data).argmax()
preds.append(predictions.detach().numpy())
result = pd.DataFrame({
'age':test_data['age'],
'sex': test_data['sex'],
'cp': test_data['cp'],
'trestbps': test_data['trestbps'],
'chol': test_data['chol'],
'fbs': test_data['fbs'],
'restecg': test_data['restecg'],
'thalach': test_data['thalach'],
'exang': test_data['exang'],
'oldpeak': test_data['oldpeak'],
'slope': test_data['slope'],
'ca': test_data['ca'],
'thal': test_data['thal'],
'target':preds
})
print(result[['age','target']])
#result.to_csv('data/result.csv', index=False)
result['target'].sum()
```
# Training Neural Networks
The network we built in the previous part isn't so smart; it doesn't know anything about our handwritten digits. Neural networks with non-linear activations work like universal function approximators. There is some function that maps your input to the output, for example, images of handwritten digits to class probabilities. The power of neural networks is that we can train them to approximate this function, and basically any function, given enough data and compute time.
<img src="assets/function_approx.png" width=500px>
At first the network is naive; it doesn't know the function mapping the inputs to the outputs. We train the network by showing it examples of real data, then adjusting the network parameters such that it approximates this function.
To find these parameters, we need to know how poorly the network is predicting the real outputs. For this we calculate a **loss function** (also called the cost), a measure of our prediction error. For example, the mean squared loss is often used in regression and binary classification problems
$$
\large \ell = \frac{1}{2n}\sum_i^n{\left(y_i - \hat{y}_i\right)^2}
$$
where $n$ is the number of training examples, $y_i$ are the true labels, and $\hat{y}_i$ are the predicted labels.
By minimizing this loss with respect to the network parameters, we can find configurations where the loss is at a minimum and the network is able to predict the correct labels with high accuracy. We find this minimum using a process called **gradient descent**. The gradient is the slope of the loss function and points in the direction of fastest change. To get to the minimum in the least amount of time, we then want to follow the gradient (downwards). You can think of this like descending a mountain by following the steepest slope to the base.
<img src='assets/gradient_descent.png' width=350px>
## Backpropagation
For single layer networks, gradient descent is straightforward to implement. However, it's more complicated for deeper, multilayer neural networks like the one we've built. Complicated enough that it took about 30 years before researchers figured out how to train multilayer networks.
Training multilayer networks is done through **backpropagation** which is really just an application of the chain rule from calculus. It's easiest to understand if we convert a two layer network into a graph representation.
<img src='assets/backprop_diagram.png' width=550px>
In the forward pass through the network, our data and operations go from bottom to top here. We pass the input $x$ through a linear transformation $L_1$ with weights $W_1$ and biases $b_1$. The output then goes through the sigmoid operation $S$ and another linear transformation $L_2$. Finally we calculate the loss $\ell$. We use the loss as a measure of how bad the network's predictions are. The goal then is to adjust the weights and biases to minimize the loss.
To train the weights with gradient descent, we propagate the gradient of the loss backwards through the network. Each operation has some gradient between the inputs and outputs. As we send the gradients backwards, we multiply the incoming gradient with the gradient for the operation. Mathematically, this is really just calculating the gradient of the loss with respect to the weights using the chain rule.
$$
\large \frac{\partial \ell}{\partial W_1} = \frac{\partial L_1}{\partial W_1} \frac{\partial S}{\partial L_1} \frac{\partial L_2}{\partial S} \frac{\partial \ell}{\partial L_2}
$$
**Note:** I'm glossing over a few details here that require some knowledge of vector calculus, but they aren't necessary to understand what's going on.
We update our weights using this gradient with some learning rate $\alpha$.
$$
\large W^\prime_1 = W_1 - \alpha \frac{\partial \ell}{\partial W_1}
$$
The learning rate $\alpha$ is set such that the weight update steps are small enough that the iterative method settles in a minimum.
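Here is a minimal sketch of this update rule on a toy problem, using plain tensors rather than the network we'll build below; the data, learning rate, and number of steps are made up for illustration (`torch.no_grad` is explained in the Autograd section later in this notebook).
```
import torch

# Toy data: y is roughly 3 * x
x = torch.linspace(0, 1, 10)
y = 3 * x + 0.1 * torch.randn(10)

W = torch.randn(1, requires_grad=True)   # a single weight to learn
alpha = 0.1                              # learning rate

for _ in range(100):
    loss = ((y - W * x) ** 2).mean()     # mean squared loss
    loss.backward()                      # compute d(loss)/dW
    with torch.no_grad():
        W -= alpha * W.grad              # the update: W' = W - alpha * d(loss)/dW
        W.grad.zero_()                   # clear the gradient for the next step

print(W)                                 # should end up close to 3
```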
## Losses in PyTorch
Let's start by seeing how we calculate the loss with PyTorch. Through the `nn` module, PyTorch provides losses such as the cross-entropy loss (`nn.CrossEntropyLoss`). You'll usually see the loss assigned to `criterion`. As noted in the last part, with a classification problem such as MNIST, we're using the softmax function to predict class probabilities. With a softmax output, you want to use cross-entropy as the loss. To actually calculate the loss, you first define the criterion then pass in the output of your network and the correct labels.
Something really important to note here. Looking at [the documentation for `nn.CrossEntropyLoss`](https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss),
> This criterion combines `nn.LogSoftmax()` and `nn.NLLLoss()` in one single class.
>
> The input is expected to contain scores for each class.
This means we need to pass in the raw output of our network into the loss, not the output of the softmax function. This raw output is usually called the *logits* or *scores*. We use the logits because softmax gives you probabilities which will often be very close to zero or one but floating-point numbers can't accurately represent values near zero or one ([read more here](https://docs.python.org/3/tutorial/floatingpoint.html)). It's usually best to avoid doing calculations with probabilities, typically we use log-probabilities.
```
import torch
from torch import nn
import torch.nn.functional as F
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5,), (0.5,)),  # MNIST images have a single channel
])
# Download and load the training data
trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
```
### Note
If you haven't seen `nn.Sequential` yet, please finish the end of the Part 2 notebook.
```
# Build a feed-forward network
model = nn.Sequential(nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10))
# Define the loss
criterion = nn.CrossEntropyLoss()
# Get our data
images, labels = next(iter(trainloader))
# Flatten images
images = images.view(images.shape[0], -1)
# Forward pass, get our logits
logits = model(images)
# Calculate the loss with the logits and the labels
loss = criterion(logits, labels)
print(loss)
```
In my experience it's more convenient to build the model with a log-softmax output using `nn.LogSoftmax` or `F.log_softmax` ([documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.LogSoftmax)). Then you can get the actual probabilities by taking the exponential `torch.exp(output)`. With a log-softmax output, you want to use the negative log likelihood loss, `nn.NLLLoss` ([documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.NLLLoss)).
>**Exercise:** Build a model that returns the log-softmax as the output and calculate the loss using the negative log likelihood loss. Note that for `nn.LogSoftmax` and `F.log_softmax` you'll need to set the `dim` keyword argument appropriately. `dim=0` calculates softmax across the rows, so each column sums to 1, while `dim=1` calculates across the columns so each row sums to 1. Think about what you want the output to be and choose `dim` appropriately.
```
# TODO: Build a feed-forward network
model = nn.Sequential(nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10),
nn.LogSoftmax(dim=1))
# TODO: Define the loss
criterion = nn.NLLLoss()
### Run this to check your work
# Get our data
images, labels = next(iter(trainloader))
# Flatten images
images = images.view(images.shape[0], -1)
# Forward pass, get our logits
logits = model(images)
# Calculate the loss with the logits and the labels
loss = criterion(logits, labels)
print(loss)
```
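As a quick sanity check on the claim that `nn.CrossEntropyLoss` combines `nn.LogSoftmax` and `nn.NLLLoss`, here is a sketch with made-up scores and labels; the two losses should agree up to floating-point error:
```
scores = torch.randn(4, 10)              # hypothetical raw outputs (logits): 4 examples, 10 classes
targets = torch.tensor([1, 5, 3, 7])     # hypothetical labels

loss_ce = nn.CrossEntropyLoss()(scores, targets)
loss_nll = nn.NLLLoss()(nn.LogSoftmax(dim=1)(scores), targets)
print(loss_ce, loss_nll)
```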
## Autograd
Now that we know how to calculate a loss, how do we use it to perform backpropagation? Torch provides a module, `autograd`, for automatically calculating the gradients of tensors. We can use it to calculate the gradients of all our parameters with respect to the loss. Autograd works by keeping track of operations performed on tensors, then going backwards through those operations, calculating gradients along the way. To make sure PyTorch keeps track of operations on a tensor and calculates the gradients, you need to set `requires_grad = True` on a tensor. You can do this at creation with the `requires_grad` keyword, or at any time with `x.requires_grad_(True)`.
You can turn off gradients for a block of code with the `torch.no_grad()` context:
```python
>>> x = torch.zeros(1, requires_grad=True)
>>> with torch.no_grad():
... y = x * 2
>>> y.requires_grad
False
```
Also, you can turn on or off gradients altogether with `torch.set_grad_enabled(True|False)`.
The gradients are computed with respect to some variable `z` with `z.backward()`. This does a backward pass through the operations that created `z`.
```
x = torch.randn(2,2, requires_grad=True)
print(x)
y = x**2
print(y)
```
Below we can see the operation that created `y`, a power operation `PowBackward0`.
```
## grad_fn shows the function that generated this variable
print(y.grad_fn)
```
The autograd module keeps track of these operations and knows how to calculate the gradient for each one. In this way, it's able to calculate the gradients for a chain of operations, with respect to any one tensor. Let's reduce the tensor `y` to a scalar value, the mean.
```
z = y.mean()
print(z)
```
You can check the gradients for `x` and `y` but they are empty currently.
```
print(x.grad, y.grad)
```
To calculate the gradients, you need to run the `.backward` method on a tensor, `z` for example. This will calculate the gradient of `z` with respect to `x`
$$
\frac{\partial z}{\partial x} = \frac{\partial}{\partial x}\left[\frac{1}{n}\sum_i^n x_i^2\right] = \frac{x}{2}
$$
```
z.backward()
print(x.grad)
print(x/2)
```
These gradient calculations are particularly useful for neural networks. For training we need the gradients of the cost with respect to the weights. With PyTorch, we run data forward through the network to calculate the loss, then go backwards to calculate the gradients of the loss with respect to the parameters. Once we have the gradients we can make a gradient descent step.
## Loss and Autograd together
When we create a network with PyTorch, all of the parameters are initialized with `requires_grad = True`. This means that when we calculate the loss and call `loss.backward()`, the gradients for the parameters are calculated. These gradients are used to update the weights with gradient descent. Below you can see an example of calculating the gradients using a backwards pass.
```
# Build a feed-forward network
model = nn.Sequential(nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10),
nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()
images, labels = next(iter(trainloader))
images = images.view(images.shape[0], -1)
logits = model(images)
loss = criterion(logits, labels)
print('Before backward pass: \n', model[0].weight.grad)
loss.backward()
print('After backward pass: \n', model[0].weight.grad)
```
## Training the network!
There's one last piece we need to start training, an optimizer that we'll use to update the weights with the gradients. We get these from PyTorch's [`optim` package](https://pytorch.org/docs/stable/optim.html). For example we can use stochastic gradient descent with `optim.SGD`. You can see how to define an optimizer below.
```
from torch import optim
# Optimizers require the parameters to optimize and a learning rate
optimizer = optim.SGD(model.parameters(), lr=0.01)
```
Now we know how to use all the individual parts so it's time to see how they work together. Let's consider just one learning step before looping through all the data. The general process with PyTorch:
* Make a forward pass through the network
* Use the network output to calculate the loss
* Perform a backward pass through the network with `loss.backward()` to calculate the gradients
* Take a step with the optimizer to update the weights
Below I'll go through one training step and print out the weights and gradients so you can see how it changes. Note that I have a line of code `optimizer.zero_grad()`. When you do multiple backwards passes with the same parameters, the gradients are accumulated. This means that you need to zero the gradients on each training pass or you'll retain gradients from previous training batches.
```
print('Initial weights - ', model[0].weight)
images, labels = next(iter(trainloader))
images.resize_(64, 784)
# Clear the gradients, do this because gradients are accumulated
optimizer.zero_grad()
# Forward pass, then backward pass, then update weights
output = model(images)
loss = criterion(output, labels)
loss.backward()
print('Gradient -', model[0].weight.grad)
# Take an update step and view the new weights
optimizer.step()
print('Updated weights - ', model[0].weight)
```
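To see the accumulation behavior in isolation, here is a small sketch with a single tensor; each call to `backward()` adds to `.grad` until it is cleared, which is exactly what `optimizer.zero_grad()` does for every model parameter:
```
w = torch.ones(3, requires_grad=True)

(w * 3).sum().backward()
print(w.grad)        # tensor([3., 3., 3.])

(w * 3).sum().backward()
print(w.grad)        # tensor([6., 6., 6.]) -- the gradients accumulated

w.grad.zero_()
print(w.grad)        # tensor([0., 0., 0.])
```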
### Training for real
Now we'll put this algorithm into a loop so we can go through all the images. Some nomenclature: one pass through the entire dataset is called an *epoch*. So here we're going to loop through `trainloader` to get our training batches. For each batch, we'll do a training pass where we calculate the loss, do a backwards pass, and update the weights.
>**Exercise:** Implement the training pass for our network. If you implemented it correctly, you should see the training loss drop with each epoch.
```
## Your solution here
model = nn.Sequential(nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10),
nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.003)
epochs = 5
for e in range(epochs):
running_loss = 0
for images, labels in trainloader:
# Flatten MNIST images into a 784 long vector
images = images.view(images.shape[0], -1)
# TODO: Training pass
optimizer.zero_grad()
output = model(images)
loss = criterion(output, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
else:
print("Training loss: %.3f"%(running_loss/len(trainloader)))
```
With the network trained, we can check out its predictions.
```
%matplotlib inline
import helper
images, labels = next(iter(trainloader))
img = images[0].view(1, 784)
# Turn off gradients to speed up this part
with torch.no_grad():
logps = model(img)
# Output of the network are log-probabilities, need to take exponential for probabilities
ps = torch.exp(logps)
helper.view_classify(img.view(1, 28, 28), ps)
```
Now our network is brilliant. It can accurately predict the digits in our images. Next up you'll write the code for training a neural network on a more complex dataset.
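Before moving on, here is a rough sketch of checking accuracy on a single batch, assuming `model` and `trainloader` from above are still in scope; a proper evaluation would use a held-out test set rather than training data:
```
images, labels = next(iter(trainloader))
images = images.view(images.shape[0], -1)

with torch.no_grad():
    ps = torch.exp(model(images))        # convert log-probabilities to probabilities

predictions = ps.argmax(dim=1)           # most likely class for each image
accuracy = (predictions == labels).float().mean()
print(f'Accuracy on one batch: {accuracy.item():.3f}')
```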
### Geek and the equation
Given a number N, find the value of below equation for the given number.
Input:
First line of input contains testcase T. For each testcase, there will be a single line containing a number N as input.
Output:
For each testcase, print the resultant of the equation.
Constraints:
1 <= T <= 100
1 <= N <= 10^5
Example:
Input:
4
1
2
121
99
Output:
1
5
597861
328350
Explanation:
For testcase 2, the resultant of the equation for N = 2 comes out as 5.
```
t = int(input())
for i in range(t):
n = int(input())
res = 0
for m in range(1, n+1):
res += ((m+1)**2) - ((3*m)+1) + m
print(res)
```
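The summand used in the solution above simplifies algebraically: (m+1)^2 - (3m+1) + m = m^2 + 2m + 1 - 3m - 1 + m = m^2, so the whole expression is just the sum of the first N squares. Here is a sketch of the O(1) closed form, which reproduces the sample outputs above (1, 5, 597861, 328350):
```
t = int(input())
for _ in range(t):
    n = int(input())
    # sum of the first n squares: n(n+1)(2n+1)/6
    print(n * (n + 1) * (2 * n + 1) // 6)
```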
---
### Geek and the base
The task is to convert decimal numbers (base 10) to ternary numbers (base 3).
Input:
First line of input contains testcase T. For each testcase there will be a single line containing a decimal number N as input.
Output:
For each testcase, print the ternary(base 3) equivalent of the given decimal number.
Constraints:
1 <= T <= 120
0 <= N <= 10^7
Example:
Input:
5
1
2
3
4
5
Output:
1
2
10
11
12
Explanation:
For testcase 1, we have input 1: The ternary equivalent of 1 is 1.
For testcase 5, we have input 5: The ternary equivalent of 5 is 12.
```
# Input:
# 5
# 1
# 2
# 3
# 4
# 5
# Output:
# 1
# 2
# 10
# 11
# 12
def ternary (n):
if n == 0:
return '0'
nums = []
while n:
n, r = divmod(n, 3)
nums.append(str(r))
return ''.join(reversed(nums))
t = int(input())
for i in range(t):
n = int(input())
print(ternary(n))
```
---
### Geeks and the test
Given a number, find whether it is a Palindromic Prime or not. A Palindromic Prime is any number that is both a palindrome and a prime.
Input:
First line of input contains testcase T. For each testcase, there will be a single line containing a number N as input.
Output:
For each testcase, print 1 if N is palindromic prime, else print 0.
Constraints:
1 <= T <= 200
0 <= N <= 10^7
Example:
Input:
4
1
11
121
99
Output:
0
1
0
0
Explanation:
For testcase 1, we have input 1: We know 1 is not a prime so we print 0.
For testcase 2, we have input 11: 11 is both a prime and palindrome so we print 1.
```
def reverse(s):
return s[::-1]
def isPalindrome(s):
# Calling reverse function
rev = reverse(s)
# Checking if both string are equal or not
if (s == rev):
return True
return False
def create_sieve(n):
prime = [True for i in range(n+1)]
prime[0] = prime[1] = False
p = 2
while (p * p <= n):
# If prime[p] is not changed, then it is a prime
if (prime[p] == True):
# Update all multiples of p
for i in range(p * 2, n+1, p):
prime[i] = False
p += 1
return prime
def prime(n):
return
lst = create_sieve(10000000)
t = int(input())
for i in range(t):
n = int(input())
if isPalindrome(str(n)) and lst[n]:
print('1')
else:
print('0')
```
---
### Geek and the lockers
Geek's high school has N lockers. On a particular day, Geek decides to play a game and open only those lockers that are multiples of M. Initially all lockers are closed. Lockers are numbered from 1 to N. Find the number of lockers that remain closed.
Input:
First line of input contains testcase T. For each testcase, there will be two lines of input:
First line contains N, the number of lockers.
Second line contains M. Geeks will open lockers that are multiple of M.
Output:
For each testcase, print the number of lockers that remain closed.
Constraints:
1 <= T <= 100
1 <= N <= 10000000
0 <= M <= N
Example:
Input:
2
10
2
12
3
Output:
5
8
Explanation:
For testcase 1:
N = 10
M = 2
We have 10 lockers and Geek opens lockers numbered 2, 4, 6, 8, 10. So the lockers that remain closed are 1, 3, 5, 7, 9, which is a total of 5 lockers.
```
t = int(input())
for i in range(t):
n = int(input())
m = int(input())
if m == 0:
print(str(n))
else:
print(n - (n//m))
```
```
import os
os.environ["GOOGLE_APPLICATION_CREDENTIALS"]="robotic-tract-334610-6605cbffd65c.json"
import numpy as np
import pandas as pd
from google.cloud import bigquery
client = bigquery.Client()
import random
random.seed(38)
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
import pylab
# Count 2021 token transfers per sender address on the public Ethereum dataset
count_token_transactions = """
SELECT
from_address,
COUNT(*) as count
FROM
`bigquery-public-data.crypto_ethereum.token_transfers` AS token_transfers
WHERE TRUE
AND DATE(block_timestamp) >= '2021-01-01' AND DATE(block_timestamp) < '2022-01-01'
GROUP BY from_address
ORDER BY count DESC
"""
count_token_transactions = client.query(count_token_transactions).to_dataframe()
count_token_transactions.head()
count_token_transactions.to_csv("count_token_transactions.csv")
randomNum = random.randint(30,100)
count_token_transactions.loc[randomNum]
sent_from_selected_address = """
SELECT
token_address,
symbol,
name,
from_address,
to_address,
value,
token_transfers.block_timestamp as date
FROM
`bigquery-public-data.crypto_ethereum.token_transfers` AS token_transfers,
`bigquery-public-data.crypto_ethereum.tokens` AS tokens
WHERE TRUE
AND token_transfers.token_address = tokens.address
AND from_address = '0xfbb1b73c4f0bda4f67dca266ce6ef42f520fbb98'
AND DATE(token_transfers.block_timestamp) >= '2021-01-01'
AND DATE(token_transfers.block_timestamp) < '2022-01-01'
"""
sent_from_selected_address = client.query(sent_from_selected_address).to_dataframe()
sent_from_selected_address.head()
sent_from_selected_address.to_csv("sent_from_0xfbb1b73c4f0bda4f67dca266ce6ef42f520fbb98.csv")
sent_from_selected_address["token_address"].nunique()
sent_from_selected_address["name"].value_counts()[:20]
sent_to_selected_address = """
SELECT
token_address,
symbol,
name,
from_address,
to_address,
value,
token_transfers.block_timestamp as date
FROM
`bigquery-public-data.crypto_ethereum.token_transfers` AS token_transfers,
`bigquery-public-data.crypto_ethereum.tokens` AS tokens
WHERE TRUE
AND token_transfers.token_address = tokens.address
AND to_address = '0xfbb1b73c4f0bda4f67dca266ce6ef42f520fbb98'
AND DATE(token_transfers.block_timestamp) >= '2021-01-01'
AND DATE(token_transfers.block_timestamp) < '2022-01-01'
"""
sent_to_selected_address = client.query(sent_to_selected_address).to_dataframe()
sent_to_selected_address.head()
sent_to_selected_address["token_address"].nunique()
sent_to_selected_address.to_csv("sent_to_0xfbb1b73c4f0bda4f67dca266ce6ef42f520fbb98.csv")
```
**tokens that she bought**
```
tokens_bought = sent_to_selected_address["name"].value_counts()
```
**tokens that she sold**
```
tokens_sold = sent_from_selected_address["name"].value_counts()
```
**Tokens she made more transactions for buying than selling**
```
(sent_to_selected_address["name"].value_counts()-sent_from_selected_address["name"].value_counts()).sort_values(ascending=False)
for token in tokens_bought.keys():
    # convert the value column to numeric before summing (it may be returned as strings; see the pd.to_numeric call below)
    buying_value = pd.to_numeric(sent_to_selected_address.loc[sent_to_selected_address.name==token]["value"]).sum()
    selling_value = pd.to_numeric(sent_from_selected_address.loc[sent_from_selected_address.name==token]["value"]).sum()
    still_exist = int(buying_value) - int(selling_value)
    print("EXISTS: " + token if still_exist > 0 else "NOT EXISTS: " + token)
pd.set_option('display.max_rows', 500)
selected_token_sent_to = sent_to_selected_address.loc[sent_to_selected_address.name=="Universal Euro"].sort_values("date")[["date", "value"]]
selected_token_sent_from = sent_from_selected_address.loc[sent_from_selected_address.name=="Universal Euro"].copy()
selected_token_sent_from["value"] = pd.to_numeric(selected_token_sent_from["value"]).apply(lambda x: x*-1)
selected_token_sent_from = selected_token_sent_from.sort_values("date")[["date", "value"]]
selected_token_sent_to.append(selected_token_sent_from, ignore_index=True).sort_values("date").to_csv("0xfbb1b73c4f0bda4f67dca266ce6ef42f520fbb98_UPEUR_transactions.csv")
tokens_bought_symbol = sent_to_selected_address["symbol"].value_counts()[:10]
G = nx.DiGraph()
for token in tokens_bought_symbol.keys():
    # convert the value column to numeric before summing, as above
    buying_value = pd.to_numeric(sent_to_selected_address.loc[sent_to_selected_address.symbol==token]["value"]).sum()
    selling_value = pd.to_numeric(sent_from_selected_address.loc[sent_from_selected_address.symbol==token]["value"]).sum()
    still_exist = int(buying_value) - int(selling_value)
if still_exist>0:
G.add_edges_from([(token, 'Account')])
else:
G.add_edges_from([('Account', token)])
pos=nx.spring_layout(G, seed = 44, k = 0.8)
nx.draw(G,pos, node_size=2500, with_labels=True, node_color='#0B7C32')
pylab.show()
balance_selected_address = """
SELECT
eth_balance
FROM
`bigquery-public-data.crypto_ethereum.balances` AS balances
WHERE TRUE
AND address = '0xfbb1b73c4f0bda4f67dca266ce6ef42f520fbb98'
"""
balance_selected_address = client.query(balance_selected_address).to_dataframe()
balance_selected_address
count_token_transactions_high_balance = """
SELECT
token_transfers.from_address,
COUNT(*) as count
FROM
`bigquery-public-data.crypto_ethereum.token_transfers` AS token_transfers,
`bigquery-public-data.crypto_ethereum.balances` AS balances,
(
SELECT CAST(AVG(balances1.eth_balance/POWER(10,15)) AS INT64) as avg_balance
FROM
`bigquery-public-data.crypto_ethereum.balances` as balances1,
`bigquery-public-data.crypto_ethereum.token_transfers` AS token_transfers1
WHERE TRUE
AND token_transfers1.from_address = balances1.address
AND DATE(token_transfers1.block_timestamp) >= '2021-12-01' AND DATE(token_transfers1.block_timestamp) <= '2022-01-15'
)
WHERE TRUE
AND token_transfers.from_address = balances.address
AND DATE(token_transfers.block_timestamp) >= '2021-12-01' AND DATE(token_transfers.block_timestamp) <= '2022-01-15'
AND (CAST(balances.eth_balance AS INT64))/POWER(10,16)>avg_balance
GROUP BY token_transfers.from_address
ORDER BY count DESC
"""
count_token_transactions_high_balance = client.query(count_token_transactions_high_balance).to_dataframe()
count_token_transactions_high_balance.head()
```
# Artificial Intelligence Nanodegree
## Convolutional Neural Networks
---
In this notebook, we train an MLP to classify images from the MNIST database.
### 1. Load MNIST Database
```
from keras.datasets import mnist
# use Keras to import pre-shuffled MNIST database
(X_train, y_train), (X_test, y_test) = mnist.load_data()
print("The MNIST database has a training set of %d examples." % len(X_train))
print("The MNIST database has a test set of %d examples." % len(X_test))
```
### 2. Visualize the First Six Training Images
```
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib.cm as cm
import numpy as np
# plot first six training images
fig = plt.figure(figsize=(20,20))
for i in range(6):
ax = fig.add_subplot(1, 6, i+1, xticks=[], yticks=[])
ax.imshow(X_train[i], cmap='gray')
ax.set_title(str(y_train[i]))
```
### 3. View an Image in More Detail
```
def visualize_input(img, ax):
ax.imshow(img, cmap='gray')
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
ax.annotate(str(round(img[x][y],2)), xy=(y,x),
horizontalalignment='center',
verticalalignment='center',
color='white' if img[x][y]<thresh else 'black')
fig = plt.figure(figsize = (12,12))
ax = fig.add_subplot(111)
visualize_input(X_train[0], ax)
```
### 4. Rescale the Images by Dividing Every Pixel in Every Image by 255
```
# rescale [0,255] --> [0,1]
X_train = X_train.astype('float32')/255
X_test = X_test.astype('float32')/255
```
### 5. Encode Categorical Integer Labels Using a One-Hot Scheme
```
from keras.utils import np_utils
# print first ten (integer-valued) training labels
print('Integer-valued labels:')
print(y_train[:10])
# one-hot encode the labels
y_train = np_utils.to_categorical(y_train, 10)
y_test = np_utils.to_categorical(y_test, 10)
# print first ten (one-hot) training labels
print('One-hot labels:')
print(y_train[:10])
```
### 6. Define the Model Architecture
```
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
# define the model
model = Sequential()
model.add(Flatten(input_shape=X_train.shape[1:]))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))
# summarize the model
model.summary()
```
### 7. Compile the Model
```
# compile the model
model.compile(loss='categorical_crossentropy', optimizer='rmsprop',
metrics=['accuracy'])
```
### 8. Calculate the Classification Accuracy on the Test Set (Before Training)
```
# evaluate test accuracy
score = model.evaluate(X_test, y_test, verbose=0)
accuracy = 100*score[1]
# print test accuracy
print('Test accuracy: %.4f%%' % accuracy)
```
### 9. Train the Model
```
from keras.callbacks import ModelCheckpoint
# train the model
checkpointer = ModelCheckpoint(filepath='mnist.model.best.hdf5',
verbose=1, save_best_only=True)
hist = model.fit(X_train, y_train, batch_size=128, epochs=10,
validation_split=0.2, callbacks=[checkpointer],
verbose=1, shuffle=True)
```
### 10. Load the Model with the Best Classification Accuracy on the Validation Set
```
# load the weights that yielded the best validation accuracy
model.load_weights('mnist.model.best.hdf5')
```
### 11. Calculate the Classification Accuracy on the Test Set
```
# evaluate test accuracy
score = model.evaluate(X_test, y_test, verbose=0)
accuracy = 100*score[1]
# print test accuracy
print('Test accuracy: %.4f%%' % accuracy)
```
# Deep Q-Network (DQN)
---
In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment.
### 1. Import the Necessary Packages
```
import gym
import random
import torch
import numpy as np
from collections import deque
import matplotlib.pyplot as plt
%matplotlib inline
```
### 2. Instantiate the Environment and Agent
Initialize the environment in the code cell below.
```
env = gym.make('LunarLander-v2')
env.seed(0)
print('State shape: ', env.observation_space.shape)
print('Number of actions: ', env.action_space.n)
```
Please refer to the instructions in `Deep_Q_Network.ipynb` if you would like to write your own DQN agent. Otherwise, run the code cell below to load the solution files.
```
from dqn_agent import Agent
agent = Agent(state_size=8, action_size=4, seed=0)
# watch an untrained agent
state = env.reset()
for j in range(200):
action = agent.act(state)
env.render()
state, reward, done, _ = env.step(action)
if done:
break
env.close()
```
### 3. Train the Agent with DQN
Run the code cell below to train the agent from scratch. You are welcome to amend the supplied parameter values in the function to see if you can get better performance!
Alternatively, you can skip to the next step below (**4. Watch a Smart Agent!**), to load the saved model weights from a pre-trained agent.
```
def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995):
"""Deep Q-Learning.
Params
======
n_episodes (int): maximum number of training episodes
max_t (int): maximum number of timesteps per episode
eps_start (float): starting value of epsilon, for epsilon-greedy action selection
eps_end (float): minimum value of epsilon
eps_decay (float): multiplicative factor (per episode) for decreasing epsilon
"""
scores = [] # list containing scores from each episode
scores_window = deque(maxlen=100) # last 100 scores
eps = eps_start # initialize epsilon
for i_episode in range(1, n_episodes+1):
state = env.reset()
score = 0
for t in range(max_t):
action = agent.act(state, eps)
next_state, reward, done, _ = env.step(action)
agent.step(state, action, reward, next_state, done)
state = next_state
score += reward
if done:
break
scores_window.append(score) # save most recent score
scores.append(score) # save most recent score
eps = max(eps_end, eps_decay*eps) # decrease epsilon
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="")
if i_episode % 100 == 0:
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)))
if np.mean(scores_window)>=200.0:
print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window)))
torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth')
break
return scores
scores = dqn()
# plot the scores
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
```
### 4. Watch a Smart Agent!
In the next code cell, you will load the trained weights from file to watch a smart agent!
```
# load the weights from file
agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth'))
for i in range(3):
state = env.reset()
for j in range(200):
action = agent.act(state)
env.render()
state, reward, done, _ = env.step(action)
if done:
break
env.close()
```
### 5. Explore
In this exercise, you have implemented a DQN agent and demonstrated how to use it to solve an OpenAI Gym environment. To continue your learning, you are encouraged to complete any (or all!) of the following tasks:
- Amend the various hyperparameters and network architecture to see if you can get your agent to solve the environment faster. Once you build intuition for the hyperparameters that work well with this environment, try solving a different OpenAI Gym task with discrete actions!
- You may like to implement some improvements such as prioritized experience replay, Double DQN, or Dueling DQN! (A sketch of the Double DQN target update is shown after this list.)
- Write a blog post explaining the intuition behind the DQN algorithm and demonstrating how to use it to solve an RL environment of your choosing.
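As a starting point for the Double DQN variant mentioned above, here is a minimal sketch of the target computation only, not the solution's implementation. It assumes the agent keeps the usual online/target pair of networks (called `qnetwork_local` and `qnetwork_target` here, the former matching the attribute used in step 4) and that `rewards`, `next_states`, and `dones` are batched tensors sampled from a replay buffer; the rest of the learning step stays as in your `dqn_agent` implementation.
```
import torch

def double_dqn_targets(qnetwork_local, qnetwork_target,
                       rewards, next_states, dones, gamma=0.99):
    """Double DQN targets: the local network selects the greedy action,
    the target network evaluates it. rewards and dones are expected to
    have shape (batch_size, 1)."""
    with torch.no_grad():
        # action selection with the online (local) network
        best_actions = qnetwork_local(next_states).argmax(dim=1, keepdim=True)
        # action evaluation with the target network
        q_next = qnetwork_target(next_states).gather(1, best_actions)
    # zero the bootstrap term for terminal transitions
    return rewards + gamma * q_next * (1 - dones)
```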
# Module 4: APIs
## Spotify
<img src="https://developer.spotify.com/assets/branding-guidelines/[email protected]" width=400></img>
In this module we will use APIs to obtain information about artists, albums and tracks available on Spotify. But first... what is an **API**?<br>
An API is an *Application Programming Interface*: a set of functions, methods, rules and definitions that allow us to develop applications (in this case a scraper) that communicate with Spotify's servers. APIs are designed and developed by companies that have an interest in applications (public or private) being built on top of their services. Spotify has public, well-documented APIs that we will be using throughout this project.
#### REST
A term you will surely come across when looking for information online is **REST** or *RESTful*. It stands for *representational state transfer*, and an API being REST or RESTful means that it follows certain architectural principles, such as a client/server communication protocol (here, HTTP) and, among other things, a set of well-defined operations known as **methods**. We have already been using the GET method to make requests to web servers.
#### Documentation
As mentioned before, APIs are designed by the same companies that are interested in having applications (public or private) consume their services or data. That is why the way to use an API varies depending on the service we want to consume: using Spotify's APIs is not the same as using Twitter's APIs. For this reason it is very important to read the available documentation, usually found in the developers section of each site. Here is the [link to Spotify's documentation](https://developer.spotify.com/documentation/)
#### JSON
JSON stands for *JavaScript Object Notation* and is a format for describing objects that became so widely used that it is now considered language-independent. In fact, we will use it in this project even though we are working in Python, because it is the format in which we will receive the responses to the requests we make through the APIs. For us, it will be nothing more than a dictionary with a few particularities that we will see throughout the course.
Useful links for this class:
- [Spotify documentation - Artists](https://developer.spotify.com/documentation/web-api/reference/artists/)
- [Iron Maiden on Spotify](https://open.spotify.com/artist/6mdiAmATAx73kdxrNrnlao)
```
import requests
id_im = '6mdiAmATAx73kdxrNrnlao'
url_base = 'https://api.spotify.com/v1'
ep_artist = '/artists/{artist_id}'
url_base+ep_artist.format(artist_id=id_im)
r = requests.get(url_base+ep_artist.format(artist_id=id_im))
r.status_code
r.json()
```
## Class 3
Useful links for this class:
- [Spotify authorization guide](https://developer.spotify.com/documentation/general/guides/authorization-guide/) (a minimal sketch of the client credentials flow is shown below)
- https://www.base64encode.org/
- [Spotify search endpoint](https://developer.spotify.com/documentation/web-api/reference/search/search/)
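As a preview of what the authorization guide covers, here is a minimal sketch of the *client credentials* flow. It is only an illustration: `CLIENT_ID` and `CLIENT_SECRET` are placeholders you would obtain from the Spotify developer dashboard, and the endpoint details should be checked against the guide linked above.
```
import base64
import requests

CLIENT_ID = "your-client-id"          # placeholder
CLIENT_SECRET = "your-client-secret"  # placeholder

def get_token(client_id, client_secret):
    # Encode "client_id:client_secret" in base64, as the authorization guide describes
    auth = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    r = requests.post(
        "https://accounts.spotify.com/api/token",
        headers={"Authorization": f"Basic {auth}"},
        data={"grant_type": "client_credentials"},
    )
    r.raise_for_status()
    return r.json()["access_token"]

# The token is then sent in the Authorization header of every API request, e.g.:
# headers = {"Authorization": f"Bearer {get_token(CLIENT_ID, CLIENT_SECRET)}"}
```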
# Tokenizing notebook
First, the all-important `import` statement.
```
from ideas import token_utils
```
## Getting information
We start with a very simple example, where we have a repeated token, `a`.
```
source = "a = a"
tokens = token_utils.tokenize(source)
for token in tokens:
print(token)
```
Notice how the `NEWLINE` token here, in spite of its name, does not correspond to `\n`.
### Comparing tokens
Tokens are considered equal if they have the same `string` attribute. Given this notion of equality, we make things even simpler by allowing a token to be compared directly to a string, as shown below.
```
print(tokens[0] == tokens[2])
print(tokens[0] == tokens[2].string)
print(tokens[0] == 'a') # <-- Our normal choice
```
### Printing tokens by line of code
If we simply want to tokenize a source and print the result, or simply print a list of tokens, we can use `print_tokens` to do it in a single instruction, with the added benefit of separating tokens from different lines of code.
```
source = """
if True:
pass
"""
token_utils.print_tokens(source)
```
### Getting tokens by line of code
Once a source is broken down into tokens, it might be difficult to find particular tokens of interest if we print the entire content. Instead, using `get_lines`, we can tokenize by line of code and focus on just a few lines of interest.
```
source = """
if True:
if False:
pass
else:
a = 42 # a comment
print('ok')
"""
lines = token_utils.get_lines(source)
for line in lines[4:6]:
for token in line:
print(token)
print()
```
### Getting particular tokens
Let's focus on the sixth line.
```
line = lines[5]
print( token_utils.untokenize(line) )
```
Ignoring the indentation, the first token is `a`; ignoring the newline indicator and comments, the last token is `42`. We can get at these tokens using some utility functions.
```
print("The first useful token is:\n ", token_utils.get_first(line))
print("The index of the first token is: ", token_utils.get_first_index(line))
print()
print("The last useful token on that line is:\n ", token_utils.get_last(line))
print("Its index is", token_utils.get_last_index(line))
```
Note that these four functions, `get_first`, `get_first_index`, `get_last`, `get_last_index` exclude end of line comments by default; but this can be changed by setting the optional parameter `exclude_comment` to `False`.
```
print( token_utils.get_last(line, exclude_comment=False))
```
### Getting the indentation of a line
The sixth line starts with an `INDENT` token. We can get the indentation of that line, either by printing the length of the `INDENT` token string, or by looking at the `start_col` attribute of the first "useful" token. The attribute `start_col` is part of the two-tuple `start = (start_row, start_col)`.
```
print(len(line[0].string))
first = token_utils.get_first(line)
print(first.start_col)
```
In general, **the second method is more reliable**. For example, if we look at the tokens of the previous line (line 5, index 4), we can see that the length of the string of the first token, `INDENT`, does not give us the line's indentation. Furthermore, a given line may start with multiple `INDENT` tokens. However, once again, the `start_col` attribute of the first "useful" token can give us this value.
```
for token in lines[4]:
print(token)
print("-" * 50)
print(token_utils.untokenize(lines[4]))
first = token_utils.get_first(lines[4])
print("indentation = ", first.start_col)
```
## Changing information
Suppose we wish to do the following replacement
```
repeat n: --> for some_variable in range(n):
```
Here `n` might be anything that evaluates to an integer. Let's see a couple of different ways to do this.
First, we simply change the string content of two tokens.
```
source = "repeat 2 * 3 : "
tokens = token_utils.tokenize(source)
repeat = token_utils.get_first(tokens)
colon = token_utils.get_last(tokens)
repeat.string = "for some_variable in range("
colon.string = "):"
print(token_utils.untokenize(tokens))
```
Let's revert back the change for the colon, to see a different way of doing the same thing.
```
colon.string = ":"
print(token_utils.untokenize(tokens))
```
This time, let's **insert** an extra token, written as a simple Python string.
```
colon_index = token_utils.get_last_index(tokens)
tokens.insert(colon_index, ")")
for token in tokens:
print(token)
```
In spite of `')'` being a normal Python string, it can still be processed correctly by the `untokenize` function.
```
print(token_utils.untokenize(tokens))
```
Thus, unlike Python's own untokenize function, we do not have to worry about token types when we wish to insert extra tokens.
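For comparison, here is a rough sketch of the same kind of insertion using only the standard library's `tokenize` module, where every element passed to `untokenize` must carry an explicit token type; the round-tripped spacing is also coarser, because the standard library falls back to a compatibility mode for `(type, string)` pairs.
```
import io
import tokenize

source = "repeat 2 * 3 :"
stdlib_tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))

# A bare string cannot be inserted here; the inserted token needs a type.
pairs = [(tok.type, tok.string) for tok in stdlib_tokens]
pairs.insert(-2, (tokenize.OP, ")"))

print(tokenize.untokenize(pairs))
```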
## Changing indentation
We can easily change the indentation of a given line using either the `indent` or `dedent` function.
```
source = """
if True:
a = 1
b = 2
"""
# First, reducing the indentation of the "b = 2" line
lines = token_utils.get_lines(source)
a_line = lines[2]
a = token_utils.get_first(a_line)
assert a == "a"
b_line = lines[3]
b = token_utils.get_first(b_line)
lines[3] = token_utils.dedent(b_line, b.start_col - a.start_col)
print(token_utils.untokenize(a_line))
print(token_utils.untokenize(lines[3]))
```
Alternatively, we can indent the "a = 1" line
```
lines = token_utils.get_lines(source)
a_line = lines[2]
a = token_utils.get_first(a_line)
assert a == "a"
b_line = lines[3]
b = token_utils.get_first(b_line)
lines[2] = token_utils.indent(a_line, b.start_col - a.start_col)
print(token_utils.untokenize(lines[2]))
print(token_utils.untokenize(b_line))
```
Finally, let's recover the entire source with the fixed indentation.
```
new_tokens = []
for line in lines:
new_tokens.extend(line)
print(token_utils.untokenize(new_tokens))
```
Compare the speed of native and Cythonized math functions.
```
%load_ext Cython
%%cython
from libc.math cimport log, sqrt
def log_c(float x):
return log(x)/2.302585092994046
def sqrt_c(float x):
return sqrt(x)
import os
from os.path import expanduser
import pandas as pd
import pandas.io.sql as pd_sql
import math
from functions.auth.connections import postgres_connection
connection_uri = postgres_connection('mountain_project')
def transform_features(df):
""" Add log and sqrt values
"""
# add log values for ols linear regression
df['log_star_ratings'] = df['star_ratings'].apply(lambda x: math.log(x+1, 10))
df['log_ticks'] = df['ticks'].apply(lambda x: math.log(x+1, 10))
df['log_avg_stars'] = df['avg_stars'].apply(lambda x: math.log(x+1, 10))
df['log_length'] = df['length_'].apply(lambda x: math.log(x+1, 10))
df['log_grade'] = df['grade'].apply(lambda x: math.log(x+2, 10))
df['log_on_to_do_lists'] = df['on_to_do_lists'].apply(lambda x: math.log(x+1, 10)) # Target
# add sqrt values for Poisson regression
df['sqrt_star_ratings'] = df['star_ratings'].apply(lambda x: math.sqrt(x))
df['sqrt_ticks'] = df['ticks'].apply(lambda x: math.sqrt(x))
df['sqrt_avg_stars'] = df['avg_stars'].apply(lambda x: math.sqrt(x))
df['sqrt_length'] = df['length_'].apply(lambda x: math.sqrt(x))
df['sqrt_grade'] = df['grade'].apply(lambda x: math.sqrt(x+1))
return df
def transform_features_cythonized(df):
""" Add log and sqrt values using cythonized math functions
"""
# add log values for ols linear regression
df['log_star_ratings'] = df.star_ratings.apply(lambda x: log_c(x+1))
df['log_ticks'] = df.ticks.apply(lambda x: log_c(x+1))
df['log_avg_stars'] = df.avg_stars.apply(lambda x: log_c(x+1))
df['log_length'] = df.length_.apply(lambda x: log_c(x+1))
df['log_grade'] = df.grade.apply(lambda x: log_c(x+2))
df['log_on_to_do_lists'] = df.on_to_do_lists.apply(lambda x: log_c(x+1))
# add sqrt values for Poisson regression
df['sqrt_star_ratings'] = df.star_ratings.apply(lambda x: sqrt_c(x))
df['sqrt_ticks'] = df.ticks.apply(lambda x: sqrt_c(x))
df['sqrt_avg_stars'] = df.avg_stars.apply(lambda x: sqrt_c(x))
df['sqrt_length'] = df.length_.apply(lambda x: sqrt_c(x))
df['sqrt_grade'] = df.grade.apply(lambda x: sqrt_c(x+1))
return df
query = """
SELECT b.avg_stars, b.your_stars, b.length_, b.grade,
r.star_ratings, r.suggested_ratings, r.on_to_do_lists, r.ticks
FROM routes b
LEFT JOIN ratings r ON b.url_id = r.url_id
WHERE b.area_name = 'buttermilks'
AND length_ IS NOT NULL
"""
df = pd_sql.read_sql(query, connection_uri) # grab data as a dataframe
df.head()
%%timeit
transform_features(df)
%%timeit
transform_features_cythonized(df) # Cythonized math functions are only barely faster
```
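For column transforms like these, a bigger speedup typically comes from dropping the per-row `apply` altogether and letting NumPy work on whole columns. The sketch below is not part of the original comparison; it assumes the same dataframe and column names used above and reproduces only the log features for brevity.
```
import numpy as np

def transform_features_vectorized(df):
    """Same log features as above, computed column-wise with NumPy."""
    df['log_star_ratings'] = np.log10(df['star_ratings'] + 1)
    df['log_ticks'] = np.log10(df['ticks'] + 1)
    df['log_avg_stars'] = np.log10(df['avg_stars'] + 1)
    df['log_length'] = np.log10(df['length_'] + 1)
    df['log_grade'] = np.log10(df['grade'] + 2)
    df['log_on_to_do_lists'] = np.log10(df['on_to_do_lists'] + 1)
    return df

# %timeit transform_features_vectorized(df)
```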
# Horizontal Federated Learning Task Example
This is an example of a horizontal federated learning task written with the Delta framework.
The data is the [MNIST dataset](http://yann.lecun.com/exdb/mnist/) distributed across multiple nodes, with each node holding only a subset of the samples. The task is to train a convolutional neural network model for handwritten digit recognition.
This example can be executed directly in Deltaboard and the results viewed there. <span style="color:#FF8F8F;font-weight:bold">Before clicking run, you need to change the Deltaboard API address to your own; see the instructions in Section 4 below.</span>
## 1. Import the required packages
Our computation logic is written with torch, so we first import ```numpy``` and ```torch``` along with some helper tools. Then, from the ```delta-task``` package, we import the Delta framework components, including the ```DeltaNode``` node used to call the API and submit tasks, as well as ```HorizontalTask```, the horizontal federated learning task we will run in this example:
```
from typing import Dict, Iterable, List, Tuple, Any, Union
import numpy as np
import torch
from delta import DeltaNode
from delta.task import HorizontalTask
from delta.algorithm.horizontal import FedAvg
```
## 2. Define the neural network model
Next we define the neural network model, in exactly the same way as a conventional neural network definition:
```
class LeNet(torch.nn.Module):
def __init__(self):
super().__init__()
self.conv1 = torch.nn.Conv2d(1, 16, 5, padding=2)
self.pool1 = torch.nn.AvgPool2d(2, stride=2)
self.conv2 = torch.nn.Conv2d(16, 16, 5)
self.pool2 = torch.nn.AvgPool2d(2, stride=2)
self.dense1 = torch.nn.Linear(400, 100)
self.dense2 = torch.nn.Linear(100, 10)
def forward(self, x: torch.Tensor):
x = self.conv1(x)
x = torch.relu(x)
x = self.pool1(x)
x = self.conv2(x)
x = torch.relu(x)
x = self.pool2(x)
x = x.view(-1, 400)
x = self.dense1(x)
x = torch.relu(x)
x = self.dense2(x)
return x
```
## 3. Define the privacy-preserving computation task
Now we can define our horizontal federated task, which trains the neural network model defined above on multiple nodes using horizontal federated learning.
When defining a horizontal federated learning task, several parts need to be defined by the user:
* ***Model training method***: the loss function, the optimizer, and the training step
* ***Data preprocessing method***: how each loaded sample is preprocessed before the training step runs; for a detailed description of the parameters, see [this document](https://docs.deltampc.com/network-deployment/prepare-data)
* ***Model validation method***: how the model accuracy is computed on each node against the validation sample set
* ***Horizontal federation configuration***: how many nodes each training round requires, how the validation sample set is split on each node, and so on
```
class ExampleTask(HorizontalTask):
    def __init__(self):
        super().__init__(
            name="example", # task name, shown in Deltaboard
            dataset="mnist", # dataset file name used by the task, corresponding to a file/folder under the Delta Node's data folder
            max_rounds=2, # total number of training rounds; each aggregation update of the weights counts as one round
            validate_interval=1, # validation interval in rounds; 1 means validate after every round
            validate_frac=0.1, # fraction of data used for validation, in the range (0,1)
        )
        # the neural network model defined above
        self.model = LeNet()
        # loss function used during training
        self.loss_func = torch.nn.CrossEntropyLoss()
        # optimizer used during training
        self.optimizer = torch.optim.SGD(
            self.model.parameters(),
            lr=0.1,
            momentum=0.9,
            weight_decay=1e-3,
            nesterov=True,
        )

    def preprocess(self, x, y=None):
        """
        Data preprocessing method, applied to every sample as the data is loaded.
        For a detailed description of the parameters, see https://docs.deltampc.com/network-deployment/prepare-data
        x: one sample from the original dataset; its type depends on the dataset
        y: the label of the sample, or None if the dataset has no labels
        return: the preprocessed sample and label (if present), as torch.Tensor or np.ndarray
        """
        x /= 255.0
        x *= 2
        x -= 1
        x = x.reshape((1, 28, 28))
        return torch.from_numpy(x), torch.tensor(int(y), dtype=torch.long)

    def train(self, dataloader: Iterable):
        """
        Training step
        dataloader: the dataloader for the training set
        return: None
        """
        for batch in dataloader:
            x, y = batch
            y_pred = self.model(x)
            loss = self.loss_func(y_pred, y)
            self.optimizer.zero_grad()
            loss.backward()
            self.optimizer.step()

    def validate(self, dataloader: Iterable) -> Dict[str, float]:
        """
        Validation step, returning the validation metrics
        dataloader: the dataloader for the validation set
        return: Dict[str, float], a dict whose keys are metric names (str) and whose values are the metric values (float)
        """
        total_loss = 0
        count = 0
        ys = []
        y_s = []
        for batch in dataloader:
            x, y = batch
            y_pred = self.model(x)
            loss = self.loss_func(y_pred, y)
            total_loss += loss.item()
            count += 1
            y_ = torch.argmax(y_pred, dim=1)
            y_s.extend(y_.tolist())
            ys.extend(y.tolist())
        avg_loss = total_loss / count
        tp = len([1 for i in range(len(ys)) if ys[i] == y_s[i]])
        precision = tp / len(ys)
        return {"loss": avg_loss, "precision": precision}

    def get_params(self) -> List[torch.Tensor]:
        """
        The model parameters to be trained
        During aggregation and when saving results, only the parameters returned by get_params are updated and saved
        return: List[torch.Tensor], the list of model parameters
        """
        return list(self.model.parameters())

    def algorithm(self):
        """
        Configuration of the aggregation algorithm; the available algorithms are in the delta.algorithm.horizontal package
        """
        return FedAvg(
            merge_interval_epoch=0, # aggregation interval: aggregate the weights every merge_interval_epoch epochs
            merge_interval_iter=20, # aggregation interval: aggregate every merge_interval_iter iterations; mutually exclusive with merge_interval_epoch, one of the two must be 0
            wait_timeout=10, # timeout for waiting for the clients to join
            connection_timeout=10, # timeout for each communication round of the aggregation algorithm
            min_clients=2, # minimum number of clients required by the algorithm, at least 2
            max_clients=2, # maximum number of clients supported by the algorithm, must be greater than or equal to min_clients
        )

    def dataloader_config(
        self,
    ) -> Union[Dict[str, Any], Tuple[Dict[str, Any], Dict[str, Any]]]:
        """
        Configuration for the training and validation dataloaders.
        Each configuration is a dict matching the PyTorch dataloader options;
        see https://pytorch.org/docs/stable/data.html
        return: one or two Dict[str, Any]; if one is returned it configures both the training and validation dataloaders, if two are returned they configure the training and validation sets respectively
        """
        train_config = {"batch_size": 64, "shuffle": True, "drop_last": True}
        val_config = {"batch_size": 64, "shuffle": False, "drop_last": False}
        return train_config, val_config
```
## 4. Specify the Delta Node API used to run the task
With the task defined, we can prepare to run it on a Delta Node.
The Delta Task framework can call the Delta Node API directly to send the task to a Delta Node for execution; you only need to specify the Delta Node API address when running the task.
Deltaboard provides a wrapper around the Delta Node API, giving each user an individual API address, so that multiple people can share the same Delta Node and manage their own submitted tasks in Deltaboard.
Here we use the API provided by Deltaboard to run the task. If you have deployed your own Delta Node, you can also use the Delta Node API directly.
In the left navigation bar, go to "My Account"; under Deltaboard API, copy your own API address and paste it into the code below:
```
DELTA_NODE_API = "http://127.0.0.1:6704"
```
## 5. Run the privacy-preserving computation task
Now we can start running this model:
```
task = ExampleTask()
delta_node = DeltaNode(DELTA_NODE_API)
delta_node.create_task(task)
```
## 6. Check the execution status
After clicking run, the output log shows that the task has been submitted to the Delta Node.
Next, go to "Task List" in the left navigation bar, find the task you just submitted, and click on it to view the detailed execution logs.
# Building Python Function-based Components
> Building your own lightweight pipeline components using the Pipelines SDK v2 and Python
A Kubeflow Pipelines component is a self-contained set of code that performs one step in your
ML workflow. A pipeline component is composed of:
* The component code, which implements the logic needed to perform a step in your ML workflow.
* A component specification, which defines the following:
* The component's metadata, its name and description.
* The component's interface, the component's inputs and outputs.
* The component's implementation, the Docker container image
to run, how to pass inputs to your component code, and how
to get the component's outputs.
Python function-based components make it easier to iterate quickly by letting you build your
component code as a Python function and generating the [component specification][component-spec] for you.
This document describes how to build Python function-based components and use them in your pipeline.
**Note:** This guide demonstrates how to build components using the Pipelines SDK v2.
Currently, Kubeflow Pipelines v2 is in development. You can use this guide to start
building and running pipelines that are compatible with the Pipelines SDK v2.
[Learn more about Pipelines SDK v2][kfpv2].
[kfpv2]: https://www.kubeflow.org/docs/components/pipelines/sdk-v2/v2-compatibility/
[component-spec]: https://www.kubeflow.org/docs/components/pipelines/reference/component-spec/
## Before you begin
1. Run the following command to install the Kubeflow Pipelines SDK v1.6.2 or higher. If you run this command in a Jupyter
notebook, restart the kernel after installing the SDK.
```
!pip install --upgrade kfp
```
2. Import the `kfp`, `kfp.dsl`, and `kfp.v2.dsl` packages.
```
import kfp
import kfp.dsl as dsl
from kfp.v2.dsl import (
component,
Input,
Output,
Dataset,
Metrics,
)
```
3. Create an instance of the [`kfp.Client` class][kfp-client] following steps in [connecting to Kubeflow Pipelines using the SDK client][connect-api].
[kfp-client]: https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.client.html#kfp.Client
[connect-api]: https://www.kubeflow.org/docs/components/pipelines/sdk/connect-api
```
client = kfp.Client() # change arguments accordingly
```
For more information about the Kubeflow Pipelines SDK, see the [SDK reference guide][sdk-ref].
[sdk-ref]: https://kubeflow-pipelines.readthedocs.io/en/stable/index.html
## Getting started with Python function-based components
This section demonstrates how to get started building Python function-based components by walking
through the process of creating a simple component.
1. Define your component's code as a [standalone python function](#standalone).
In this example, the function adds two floats and returns the sum of the two
arguments. Use the `kfp.v2.dsl.component` annotation to convert the function
into a factory function that you can use to create
[`kfp.dsl.ContainerOp`][container-op] class instances to use as steps in your pipeline.
[container-op]: https://kubeflow-pipelines.readthedocs.io/en/stable/source/kfp.dsl.html#kfp.dsl.ContainerOp
```
@component
def add(a: float, b: float) -> float:
'''Calculates sum of two arguments'''
return a + b
```
2. Create your pipeline and specify its argument values. [Learn more about creating and running pipelines][build-pipelines].
[build-pipelines]: https://www.kubeflow.org/docs/components/pipelines/sdk-v2/build-pipeline/#compile-and-run-your-pipeline
```
import kfp.dsl as dsl
@dsl.pipeline(
name='addition-pipeline',
description='An example pipeline that performs addition calculations.',
pipeline_root='gs://my-pipeline-root/example-pipeline'
)
def add_pipeline(
a: float=1,
b: float=7,
):
# Passes a pipeline parameter and a constant value to the `add` factory
# function.
first_add_task = add(a, 4)
# Passes an output reference from `first_add_task` and a pipeline parameter
# to the `add` factory function. For operations with a single return
# value, the output reference can be accessed as `task.output` or
# `task.outputs['output_name']`.
second_add_task = add(first_add_task.output, b)
# Specify pipeline argument values
arguments = {'a': 7, 'b': 8}
```
3. Compile and run your pipeline. [Learn more about compiling and running pipelines][build-pipelines].
[build-pipelines]: https://www.kubeflow.org/docs/components/pipelines/sdk-v2/build-pipeline/#compile-and-run-your-pipeline
```
# Submit a pipeline run using the v2 compatible mode
client.create_run_from_pipeline_func(
add_pipeline,
arguments=arguments,
mode=kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE)
```
## Building Python function-based components
Use the following instructions to build a Python function-based component:
<a name="standalone"></a>
1. Define a standalone Python function. This function must meet the following
requirements:
* It should not use any code declared outside of the function definition.
* Import statements must be added inside the function. [Learn more about
using and installing Python packages in your component](#packages).
* Helper functions must be defined inside this function.
1. Kubeflow Pipelines uses your function's inputs and outputs to define your
component's interface. [Learn more about passing data between
components](#pass-data). Your function's inputs and outputs must meet the
following requirements:
* All your function's arguments must have data type annotations.
* If the function accepts or returns large amounts of data or complex
data types, you must annotate that argument as an _artifact_.
[Learn more about using large amounts of data as inputs or outputs](#pass-by-file).
* If your component returns multiple outputs, you can annotate your
function with the [`typing.NamedTuple`][named-tuple-hint] type hint
    and use the [`collections.namedtuple`][named-tuple] function to return
your function's outputs as a new subclass of tuple. For an example, read
[Passing parameters by value](#pass-by-value).
1. (Optional.) If your function has complex dependencies, choose or build a
container image for your Python function to run in. [Learn more about
selecting or building your component's container image](#containers).
1. Add the [`kfp.v2.dsl.component`][vs-dsl-component] decorator to convert your function
into a pipeline component. You can specify the following arguments to the decorator:
* **base_image**: (Optional.) Specify the Docker container image to run
this function in. [Learn more about selecting or building a container
image](#containers).
* **output_component_file**: (Optional.) Writes your component definition
to a file. You can use this file to share the component with colleagues
or reuse it in different pipelines.
* **packages_to_install**: (Optional.) A list of versioned Python
packages to install before running your function.
<a name="packages"></a>
### Using and installing Python packages
When Kubeflow Pipelines runs your pipeline, each component runs within a Docker
container image on a Kubernetes Pod. To load the packages that your Python
function depends on, one of the following must be true:
* The package must be installed on the container image.
* The package must be defined using the `packages_to_install` parameter of the
  [`kfp.v2.dsl.component`][vs-dsl-component] decorator (see the sketch after this list).
* Your function must install the package. For example, your function can use
the [`subprocess` module][subprocess] to run a command like `pip install`
that installs a package.
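The following is a minimal sketch of a component that declares its dependency the second way. The component name, the `pandas==1.3.5` pin, and the `count_missing.yaml` file name are only illustrative; the sketch relies on the `component` decorator imported at the start of this guide.
```
@component(
    base_image='python:3.7',                    # optional: container image to run in
    packages_to_install=['pandas==1.3.5'],      # installed before the function runs
    output_component_file='count_missing.yaml', # optional: reusable component definition
)
def count_missing(url: str, column: str) -> int:
    # Imports must live inside the function body, as described above.
    import pandas as pd
    df = pd.read_csv(url)
    return int(df[column].isna().sum())
```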
<a name="containers"></a>
### Selecting or building a container image
Currently, if you do not specify a container image, your Python-function based
component uses the [`python:3.7` container image][python37]. If your function
has complex dependencies, you may benefit from using a container image that has
your dependencies preinstalled, or building a custom container image.
Preinstalling your dependencies reduces the amount of time that your component
runs in, since your component does not need to download and install packages
each time it runs.
Many frameworks, such as [TensorFlow][tf-docker] and [PyTorch][pytorch-docker],
and cloud service providers offer prebuilt container images that have common
dependencies installed.
If a prebuilt container is not available, you can build a custom container
image with your Python function's dependencies. For more information about
building a custom container, read the [Dockerfile reference guide in the Docker
documentation][dockerfile].
If you build or select a container image, instead of using the default
container image, the container image must use Python 3.5 or later.
<a name="pass-data"></a>
### Understanding how data is passed between components
When Kubeflow Pipelines runs your component, a container image is started in a
Kubernetes Pod and your component's inputs are passed in as command-line
arguments. When your component has finished, the component’s outputs are
returned as files.
Python function-based components make it easier to build pipeline components by
building the component specification for you. Python function-based components
also handle the complexity of passing inputs into your component and passing
your function's outputs back to your pipeline.
Component inputs and outputs are classified as either _parameters_ or _artifacts_,
depending on their data type.
* Parameters typically represent settings that affect the behavior of your pipeline.
Parameters are passed into your component by value, and can be of any of
the following types: `int`, `double`, `float`, or `str`. Since parameters are
passed by value, the quantity of data passed in a parameter must be appropriate
to pass as a command-line argument.
* Artifacts represent large or complex data structures like datasets or models, and
are passed into components as a reference to a file path.
If you have large amounts of string data to pass to your component, such as a JSON
file, annotate that input or output as a type of [`Artifact`][kfp-artifact], such
as [`Dataset`][kfp-artifact], to let Kubeflow Pipelines know to pass this to
your component as a file.
In addition to the artifact’s data, you can also read and write the artifact's
metadata. For output artifacts, you can record metadata as key-value pairs, such
as the accuracy of a trained model. For input artifacts, you can read the
artifact's metadata — for example, you could use metadata to decide if a
model is accurate enough to deploy for predictions.
All outputs are returned as files, using the paths that Kubeflow Pipelines
provides.
[kfp-artifact]: https://github.com/kubeflow/pipelines/blob/sdk/release-1.8/sdk/python/kfp/dsl/io_types.py
The following sections describe how to pass parameters and artifacts to your function.
<a name="pass-by-value"></a>
#### Passing parameters by value
Python function-based components make it easier to pass parameters between
components by value (such as numbers, booleans, and short strings), by letting
you define your component’s interface by annotating your Python function.
Parameters can be of any type that is appropriate to pass as a command-line argument, such as `int`, `float`, `double`, or `str`.
If your component returns multiple outputs by value, annotate your function
with the [`typing.NamedTuple`][named-tuple-hint] type hint and use the
[`collections.namedtuple`][named-tuple] function to return your function's
outputs as a new subclass of `tuple`.
The following example demonstrates how to return multiple outputs by value.
[python37]: https://hub.docker.com/layers/python/library/python/3.7/images/sha256-7eef781ed825f3b95c99f03f4189a8e30e718726e8490651fa1b941c6c815ad1?context=explore
[create-component-from-func]: https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.components.html#kfp.components.create_component_from_func
[subprocess]: https://docs.python.org/3/library/subprocess.html
[tf-docker]: https://www.tensorflow.org/install/docker
[pytorch-docker]: https://hub.docker.com/r/pytorch/pytorch/tags
[dockerfile]: https://docs.docker.com/engine/reference/builder/
[named-tuple-hint]: https://docs.python.org/3/library/typing.html#typing.NamedTuple
[named-tuple]: https://docs.python.org/3/library/collections.html#collections.namedtuple
[kfp-visualize]: https://www.kubeflow.org/docs/components/pipelines/sdk/output-viewer/
[kfp-metrics]: https://www.kubeflow.org/docs/components/pipelines/sdk/pipelines-metrics/
[input-path]: https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.components.html#kfp.components.InputPath
[output-path]: https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.components.html#kfp.components.OutputPath
[vs-dsl-component]: https://github.com/kubeflow/pipelines/blob/sdk/release-1.8/sdk/python/kfp/v2/components/component_decorator.py
```
from typing import NamedTuple
@component
def multiple_return_values_example(a: float, b: float) -> NamedTuple(
'ExampleOutputs',
[
('sum', float),
('product', float)
]):
"""Example function that demonstrates how to return multiple values."""
sum_value = a + b
product_value = a * b
from collections import namedtuple
example_output = namedtuple('ExampleOutputs', ['sum', 'product'])
return example_output(sum_value, product_value)
```
<a name="pass-by-file"></a>
#### Passing artifacts by file
Python function-based components make it easier to pass files to your
component, or to return files from your component, by letting you annotate
your Python function's arguments as _artifacts_.
Artifacts represent large or complex data structures like datasets or models, and are passed into components as a reference to a file path.
In addition to the artifact’s data, you can also read and write the artifact's metadata. For output artifacts, you can record metadata as key-value pairs, such as the accuracy of a trained model. For input artifacts, you can read the artifact's metadata — for example, you could use metadata to decide if a model is accurate enough to deploy for predictions.
If your artifact is an output file, Kubeflow Pipelines passes your function a
path or stream that you can use to store your output file. This path is a
location within your pipeline's `pipeline_root` that your component can write to.
The following example accepts a file as an input and returns two files as outputs.
```
@component
def split_text_lines(
source: Input[Dataset],
odd_lines: Output[Dataset],
even_lines_path: Output[Dataset]):
"""Splits a text file into two files, with even lines going to one file
and odd lines to the other."""
with open(source.path, 'r') as reader:
with open(odd_lines.path, 'w') as odd_writer:
with open(even_lines_path, 'w') as even_writer:
while True:
line = reader.readline()
if line == "":
break
odd_writer.write(line)
line = reader.readline()
if line == "":
break
even_writer.write(line)
```
In this example, the inputs and outputs are defined as arguments of the
`split_text_lines` function. This lets Kubeflow Pipelines pass the path to the
source data file and the paths to the output data files into the function.
To accept a file as an input parameter, use one of the following type annotations:
* [`kfp.dsl.Input`][input]: Use this generic type hint to specify that your
function expects this argument to be an [`Artifact`][kfp-artifact]. Your
function can use the argument's `path` property to get the
artifact's path, and the `metadata` property to read its key/value metadata.
* [`kfp.components.InputBinaryFile`][input-binary]: Use this annotation to
specify that your function expects an argument to be an
[`io.BytesIO`][bytesio] instance that this function can read.
* [`kfp.components.InputPath`][input-path]: Use this annotation to specify that
your function expects an argument to be the path to the input file as
a `string`.
* [`kfp.components.InputTextFile`][input-text]: Use this annotation to specify
that your function expects an argument to be an
[`io.TextIOWrapper`][textiowrapper] instance that this function can read.
To return a file as an output, use one of the following type annotations:
* [`kfp.dsl.Output`][output]: Use this generic type hint to specify that your
function expects this argument to be an [`Artifact`][kfp-artifact]. Your
function can use the argument's `path` property to get the
artifact path to write to, and the `metadata` property to log key/value metadata.
* [`kfp.components.OutputBinaryFile`][output-binary]: Use this annotation to
specify that your function expects an argument to be an
[`io.BytesIO`][bytesio] instance that this function can write to.
* [`kfp.components.OutputPath`][output-path]: Use this annotation to specify that
your function expects an argument to be the path to store the output file at
as a `string`.
* [`kfp.components.OutputTextFile`][output-text]: Use this annotation to specify
that your function expects an argument to be an
[`io.TextIOWrapper`][textiowrapper] that this function can write to.
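For illustration, the following sketch uses the path-based annotations listed above. The function name, file handling, and the `str` type arguments are illustrative assumptions; `InputPath` and `OutputPath` are assumed to be imported from `kfp.components`:
```
from kfp.components import InputPath, OutputPath

@component
def count_lines(text_path: InputPath(str), count_path: OutputPath(str)):
    """Counts the lines of the input text file and writes the total to the output file."""
    with open(text_path, 'r') as reader:
        total = sum(1 for _ in reader)
    with open(count_path, 'w') as writer:
        writer.write(str(total))
```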
[input-binary]: https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.components.html#kfp.components.InputBinaryFile
[input-path]: https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.components.html#kfp.components.InputPath
[input-text]: https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.components.html#kfp.components.InputTextFile
[output-binary]: https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.components.html#kfp.components.OutputBinaryFile
[output-path]: https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.components.html#kfp.components.OutputPath
[output-text]: https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.components.html#kfp.components.OutputTextFile
[bytesio]: https://docs.python.org/3/library/io.html#io.BytesIO
[textiowrapper]: https://docs.python.org/3/library/io.html#io.TextIOWrapper
[input]: https://github.com/kubeflow/pipelines/blob/c5daa7532d18687b180badfca8d750c801805712/sdk/python/kfp/dsl/io_types.py
[output]: https://github.com/kubeflow/pipelines/blob/c5daa7532d18687b180badfca8d750c801805712/sdk/python/kfp/dsl/io_types.py
[kfp-artifact]: https://github.com/kubeflow/pipelines/blob/sdk/release-1.8/sdk/python/kfp/dsl/io_types.py
## Example Python function-based component
This section demonstrates how to build a Python function-based component that uses imports,
helper functions, and produces multiple outputs.
1. Define your function. This example function uses the `numpy` package to calculate the
quotient and remainder for a given dividend and divisor in a helper function. In
addition to the quotient and remainder, the function also returns two metrics.
By adding the `@component` annotation, you convert your function into a factory function
that creates pipeline steps that execute this function. This example also specifies the
base container image to run your component in.
```
from typing import NamedTuple
@component(base_image='tensorflow/tensorflow:1.11.0-py3')
def my_divmod(
dividend: float,
divisor: float,
metrics: Output[Metrics]) -> NamedTuple(
'MyDivmodOutput',
[
('quotient', float),
('remainder', float),
]):
    '''Divides two numbers and calculates the quotient and remainder'''
# Import the numpy package inside the component function
import numpy as np
# Define a helper function
def divmod_helper(dividend, divisor):
return np.divmod(dividend, divisor)
(quotient, remainder) = divmod_helper(dividend, divisor)
# Export two metrics
metrics.log_metric('quotient', float(quotient))
metrics.log_metric('remainder', float(remainder))
from collections import namedtuple
divmod_output = namedtuple('MyDivmodOutput',
['quotient', 'remainder'])
return divmod_output(quotient, remainder)
```
2. Define your pipeline. This example pipeline uses the `my_divmod` factory
function and the `add` factory function from an earlier example.
```
import kfp.dsl as dsl
@dsl.pipeline(
name='calculation-pipeline',
description='An example pipeline that performs arithmetic calculations.',
pipeline_root='gs://my-pipeline-root/example-pipeline'
)
def calc_pipeline(
a: float=1,
b: float=7,
c: float=17,
):
# Passes a pipeline parameter and a constant value as operation arguments.
    add_task = add(a, 4) # The `add` factory function returns
# a dsl.ContainerOp class instance.
# Passes the output of the add_task and a pipeline parameter as operation
# arguments. For an operation with a single return value, the output
# reference is accessed using `task.output` or
# `task.outputs['output_name']`.
divmod_task = my_divmod(add_task.output, b)
# For an operation with multiple return values, output references are
# accessed as `task.outputs['output_name']`.
result_task = add(divmod_task.outputs['quotient'], c)
```
3. Compile and run your pipeline. [Learn more about compiling and running pipelines][build-pipelines].
[build-pipelines]: https://www.kubeflow.org/docs/components/pipelines/sdk-v2/build-pipeline/#compile-and-run-your-pipeline
```
# Specify pipeline argument values
arguments = {'a': 7, 'b': 8}
# Submit a pipeline run
client.create_run_from_pipeline_func(
calc_pipeline,
arguments=arguments,
mode=kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE)
```
```
import netCDF4
import math
import xarray as xr
import dask
import numpy as np
import time
import scipy
import matplotlib.pyplot as plt
from matplotlib import animation
from matplotlib import transforms
from matplotlib.animation import PillowWriter
path_to_file = '/DFS-L/DATA/pritchard/gmooers/Workflow/MAPS/SPCAM/Small_Sample/Useful_Files/Amazon_Rainforest.nc'
amazon = xr.open_dataset(path_to_file)
#test_ds.variables
path_to_file = '/DFS-L/DATA/pritchard/gmooers/Workflow/MAPS/SPCAM/Small_Sample/Useful_Files/Siberia.nc'
siberia = xr.open_dataset(path_to_file)
#test_ds.variables
path_to_file = '/DFS-L/DATA/pritchard/gmooers/Workflow/MAPS/SPCAM/Small_Sample/New_SPCAM5/TimestepOutput_Neuralnet_SPCAM_216/run/Cpac_gridcell_rcat.nc'
test_ds = xr.open_dataset(path_to_file)
#test_ds.variables
amazon_T = np.squeeze(amazon.CRM_T.values)
siberia_T = np.squeeze(siberia.CRM_T.values)
test_T = np.squeeze(test_ds.CRM_T.values)
print(test_T.shape)
path_to_file = '/DFS-L/DATA/pritchard/gmooers/Workflow/MAPS/SPCAM/Small_Sample/Useful_Files/CRM_T_Analysis.nc'
test_ds = xr.open_dataset(path_to_file)
all_T = np.squeeze(test_ds.CRM_T.values)
print(all_T.shape)
equator_T = np.squeeze(all_T[:,:,:,47,:])
equator_T = np.nanmean(equator_T, axis = 3)
north_T = np.squeeze(all_T[:,:,:,80,:])
north_T = np.nanmean(north_T, axis = 3)
siberia_anons = siberia_T - north_T
amazon_anons = amazon_T - equator_T
test_anons = test_T - equator_T
def utc_timing(times):
    """Builds one UTC time label (15-minute resolution) for each animation frame."""
    utc_list = []
    end_times = [':00',':15',':30',':45']
    counter = 0  # index into end_times (quarter of the hour)
    thing = 0    # running 15-minute step counter (reset when it reaches 95)
for i in range(times):
if thing == 95:
thing = 0
beg_time = int(thing/4)
if beg_time == 0:
beg_time = 24
ending = end_times[counter]
counter = counter + 1
if counter == 4:
counter = 0
utc_time = str(beg_time)+ending
utc_list.append(utc_time)
thing = thing + 1
#print(utc_list)
return utc_list
varname = "Temperature"
location = 'Amazon'
units = "K"
savepath = 'T'
def anime_col(values, var, unit, save, local):
plt.rcParams['animation.ffmpeg_path'] = '/export/home/gmooers/miniconda3/bin/ffmpeg'
container = []
fig, ax = plt.subplots(1, 1)
times = len(values)
utc_list = utc_timing(times)
for i in range(times):
#base = plt.gca().transData
#rot = transforms.Affine2D().rotate_deg(270)
im = ax.pcolor(np.squeeze(values[i,:, :]), vmin = -4.0, vmax = 4.0, cmap = 'coolwarm', animated= True) #transform = rot + base)
if i ==0:
fig.colorbar(im, label=var+' '+unit)
plt.ylabel("Pressure")
plt.xlabel('CRMs')
title_feat = ax.text(0.5,1.05,var+' at '+local+" at "+utc_list[i],
size=10,
ha="center", transform=ax.transAxes, )
my_yticks = np.arange(50, 1000, 150)
my_yticks[::-1].sort()
ax.set_yticklabels(my_yticks)
yticks = ax.yaxis.get_major_ticks()
yticks[0].label1.set_visible(False)
yticks[-1].label1.set_visible(False)
container.append([im, title_feat])
ani = animation.ArtistAnimation(fig, container, interval = 150, blit = True, repeat = True)
ani.save('/fast/gmooers/Figures/Animate/Single_Day_'+save+'_'+local+'_Animations.mp4')
#plt.show()
anime_col(amazon_anons, varname, units, savepath, location)
location = 'Siberia'
anime_col(siberia_anons, varname, units, savepath, location)
location = '0N_180E'
anime_col(test_anons, varname, units, savepath, location)
```
<a href="https://colab.research.google.com/github/Ipsit1234/QML-HEP-Evaluation-Test-GSOC-2021/blob/main/QML_HEP_GSoC_2021_Task_2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Task II: Quantum Generative Adversarial Network (QGAN) Part
You will explore how best to apply a quantum generative adversarial network
(QGAN) to solve a High Energy Data analysis issue, more specifically, separating
the signal events from the background events. You should use the Google Cirq and
Tensorflow Quantum (TFQ) libraries for this task.
A set of input samples (simulated with Delphes) is provided in NumPy NPZ format
[Download Input](https://drive.google.com/file/d/1r_MZB_crfpij6r3SxPDeU_3JD6t6AxAj/view). In the input file, there are only 100 samples for training and 100
samples for testing so it won’t take much computing resources to accomplish this
task. The signal events are labeled with 1 while the background events are labeled
with 0.
Be sure to show that you understand how to fine tune your machine learning model
to improve the performance. The performance can be evaluated with classification
accuracy or Area Under ROC Curve (AUC).
## Downloading the dataset
```
!gdown --id 1r_MZB_crfpij6r3SxPDeU_3JD6t6AxAj -O events.npz
```
## Setting up the required libraries
```
!pip install -q tensorflow==2.3.1
!pip install -q tensorflow-quantum
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
import seaborn as sns
from sklearn.metrics import roc_curve, auc
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
```
## Loading the data
```
data = np.load('./events.npz', allow_pickle=True)
training_input = data['training_input']
test_input = data['test_input']
training_input
def prepare_data(training_input, test_input):
x_train_0 = training_input.item()['0']
x_train_1 = training_input.item()['1']
x_test_0 = test_input.item()['0']
x_test_1 = test_input.item()['1']
x_train = np.zeros((len(x_train_0) + len(x_train_1), x_train_0.shape[1]), dtype=np.float32)
x_test = np.zeros((len(x_test_0) + len(x_test_1), x_test_0.shape[1]), dtype=np.float32)
y_train = np.zeros((len(x_train_0) + len(x_train_1),), dtype=np.int32)
y_test = np.zeros((len(x_test_0) + len(x_test_1),), dtype=np.int32)
x_train[:len(x_train_0), :] = x_train_0
x_train[len(x_train_0):, :] = x_train_1
y_train[:len(x_train_0)] = 0
y_train[len(x_train_0):] = 1
x_test[:len(x_test_0), :] = x_test_0
x_test[len(x_test_0):, :] = x_test_1
y_test[:len(x_test_0)] = 0
y_test[len(x_test_0):] = 1
idx1 = np.random.permutation(len(x_train))
idx2 = np.random.permutation(len(x_test))
x_train, y_train = x_train[idx1], y_train[idx1]
x_test, y_test = x_test[idx2], y_test[idx2]
print('Shape of the training set:', x_train.shape)
print('Shape of the test set:', x_test.shape)
return x_train, y_train, x_test, y_test
x_train, y_train, x_test, y_test = prepare_data(training_input, test_input)
```
## Approach
We will make use of a Quantum GAN in the following:
1. Train a GAN to produce samples that look like they came from quantum circuits.
2. Add a classification path to the discriminator and minimize both the minimax loss and classification loss.
3. We will use a random quantum circuit to generate random inputs for the generator. The intuition behind this is that the provided data are the results (measurements) of some quantum experiment, so if we succeed in training a GAN that generates outputs similar to the experimental data, it will help in identifying other possible outcomes of the same quantum experiment that are missing from the provided dataset.
4. Simultaneously training the discriminator to classify signal events and background events will help in identifying the signal events generated from the fully trained generator.
## Data Generation
As provided in the dataset, each datapoint is 5-dimensional. Hence we will use 5 qubits and pass them through a random quantum circuit and then use these measurements as inputs to the GAN
```
def generate_circuit(qubits):
"""Generate a random circuit on qubits."""
random_circuit = cirq.generate_boixo_2018_supremacy_circuits_v2(qubits, cz_depth=2, seed=123242)
return random_circuit
def generate_data(circuit, n_samples):
"""Draw `n_samples` samples from circuit into a tf.Tensor."""
return tf.squeeze(tfq.layers.Sample()(circuit, repetitions=n_samples).to_tensor())
# sample data and circuit structure
qubits = cirq.GridQubit.rect(1, 5)
random_circuit_m = generate_circuit(qubits) + cirq.measure_each(*qubits)
SVGCircuit(random_circuit_m)
generate_data(random_circuit_m, 10)
```
We will generate 200 random training samples.
```
N_SAMPLES = 200
N_QUBITS = 5
QUBITS = cirq.GridQubit.rect(1, N_QUBITS)
REFERENCE_CIRCUIT = generate_circuit(QUBITS)
random_data = generate_data(REFERENCE_CIRCUIT, N_SAMPLES)
random_data
```
## Building a Model
This GAN will be used to produce measurements corresponding to signal/background events.
```
def make_generator():
"""Construct generator model."""
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(256, use_bias=False, input_shape=(N_QUBITS,), activation='elu'))
model.add(tf.keras.layers.Dense(128, activation='relu'))
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Dense(N_QUBITS, activation=tf.keras.activations.tanh))
return model
def make_discriminator():
"""Construct discriminator model along with a classifier."""
inp = tf.keras.Input(shape=(N_QUBITS, ), dtype=tf.float32)
out = tf.keras.layers.Dense(256, use_bias=False, activation='elu')(inp)
out = tf.keras.layers.Dense(128, activation='relu')(out)
out = tf.keras.layers.Dropout(0.4)(out)
out = tf.keras.layers.Dense(64, activation='relu')(out)
out = tf.keras.layers.Dropout(0.3)(out)
classification = tf.keras.layers.Dense(2, activation='softmax')(out)
discrimination = tf.keras.layers.Dense(1, activation='sigmoid')(out)
model = tf.keras.Model(inputs=[inp], outputs=[discrimination, classification])
return model
```
Let us instantiate our models, define the losses, and define the `train_step` function that will be executed for every batch in each epoch.
```
generator = make_generator()
discriminator = make_discriminator()
# The discriminator's real/fake output already passes through a sigmoid, so the loss takes probabilities, not logits
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=False)
def discriminator_loss(real_output, fake_output):
"""Computes the discriminator loss."""
real_loss = cross_entropy(tf.ones_like(real_output), real_output)
fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
total_loss = real_loss + fake_loss
return total_loss
def generator_loss(fake_output):
"""Compute the generator loss."""
return cross_entropy(tf.ones_like(fake_output), fake_output)
generator_optimizer = tf.keras.optimizers.Adam(learning_rate=0.0002, beta_1=0.5)
discriminator_optimizer = tf.keras.optimizers.Adam(learning_rate=0.0002, beta_1=0.5)
BATCH_SIZE = 16
bce = tf.keras.losses.BinaryCrossentropy(from_logits=False)
# auc = tf.keras.metrics.AUC()
@tf.function
def train_step(images, labels, noise):
"""Run train step on provided image batch."""
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
generated_data = generator(noise, training=True)
real_output, real_preds = discriminator(images, training=True)
fake_output, fake_preds = discriminator(generated_data, training=True)
gen_loss = generator_loss(fake_output)
disc_loss = discriminator_loss(real_output, fake_output)
disc_loss = disc_loss + bce(tf.one_hot(tf.squeeze(labels), depth=2), real_preds)
gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables))
# auc.update_state(tf.one_hot(tf.squeeze(labels), depth=2), real_preds)
return gen_loss, disc_loss
def train(data, labels, noise, epochs):
"""Launch full training for the given number of epochs."""
batched_data = tf.data.Dataset.from_tensor_slices(data).batch(BATCH_SIZE)
batched_labels = tf.data.Dataset.from_tensor_slices(labels).batch(BATCH_SIZE)
batched_noise = tf.data.Dataset.from_tensor_slices(noise).batch(BATCH_SIZE)
AUC = tf.keras.metrics.AUC()
g_losses = []
d_losses = []
# aucs = []
for epoch in range(epochs):
g_epoch_losses = []
d_epoch_losses = []
# aucs_epoch = []
for i, (data_batch, labels_batch, noise_batch) in enumerate(zip(batched_data, batched_labels, batched_noise)):
gl, dl = train_step(data_batch, labels_batch, noise_batch)
g_epoch_losses.append(gl)
d_epoch_losses.append(dl)
# aucs_epoch.append(auc_roc)
g_losses.append(tf.reduce_mean(g_epoch_losses))
d_losses.append(tf.reduce_mean(d_epoch_losses))
print('Epoch: {}, Generator Loss: {}, Discriminator Loss: {}'.format(epoch, tf.reduce_mean(g_epoch_losses), tf.reduce_mean(d_epoch_losses)))
# aucs.append(tf.reduce_mean(aucs_epoch))
return g_losses, d_losses
gen_losses, disc_losses = train(x_train, y_train, random_data, 2000)
plt.title('Generator Loss')
plt.plot(gen_losses, 'r-')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.show()
plt.title('Discriminator Loss')
plt.plot(disc_losses, 'b-')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.show()
```
## Using the Discriminator for Classification
We will now evaluate the performance of the discriminator as a classifier on the original training data, checking both classification accuracy and Area Under the ROC Curve (AUC) as metrics.
```
_, train_predictions = discriminator(tf.convert_to_tensor(x_train))
train_predictions.shape
binary_accuracy = tf.keras.metrics.BinaryAccuracy()
binary_accuracy.update_state(tf.one_hot(tf.squeeze(y_train), depth=2), train_predictions)
print('Training Accuracy: %.4f %s' % (binary_accuracy.result().numpy()*100, '%'))
fpr, tpr, _ = roc_curve(y_train, tf.argmax(train_predictions,1).numpy())
roc_auc = auc(fpr, tpr)
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver Operating Characteristic Curve')
plt.legend(loc="lower right")
plt.show()
_, test_predictions = discriminator(tf.convert_to_tensor(x_test))
test_predictions.shape
binary_accuracy = tf.keras.metrics.BinaryAccuracy()
binary_accuracy.update_state(tf.one_hot(tf.squeeze(y_test), depth=2), test_predictions)
print('Test Accuracy: %.4f %s' % (binary_accuracy.result().numpy()*100, '%'))
fpr, tpr, _ = roc_curve(y_test, tf.argmax(test_predictions,1).numpy())
roc_auc = auc(fpr, tpr)
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver Operating Characteristic Curve')
plt.legend(loc="lower right")
plt.show()
```
We will now look at the predictions on the generated synthetic data.
```
generator_outputs = generator(random_data)
generator_outputs.shape
_, predictions_synthetic = discriminator(generator_outputs)
predictions_synthetic.shape
predicted_labels_synthetic = tf.argmax(predictions_synthetic, 1)
predicted_labels_synthetic[:20]
```
## Improving the Performance
It can be seen from the loss curves that the generator has more or less converged, but the discriminator hasn't. The AUC scores suggest that the model is actually learning. This can be improved in the following ways:
1. Use slightly higher learning rates while training the discriminator.
2. As the generator has converged, we can take the synthetic data it generates and add it to our original training set, then resume training the GAN so that the discriminator becomes more robust.
3. Training for a larger number of epochs.
4. Using adaptive learning rates and learning rate scheduling, as sketched below.
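As a sketch of the last point, an exponential learning-rate decay can be plugged into the existing Adam optimizers. The decay values below are illustrative and would need tuning; a slightly higher initial rate is used for the discriminator, in line with point 1:
```
import tensorflow as tf

# Decay each learning rate by 5% every 100 optimizer steps (illustrative values)
gen_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=2e-4, decay_steps=100, decay_rate=0.95)
disc_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=4e-4, decay_steps=100, decay_rate=0.95)

generator_optimizer = tf.keras.optimizers.Adam(learning_rate=gen_schedule, beta_1=0.5)
discriminator_optimizer = tf.keras.optimizers.Adam(learning_rate=disc_schedule, beta_1=0.5)
```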
# Simple Stock Backtesting
https://www.investopedia.com/terms/b/backtesting.asp
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
# fix_yahoo_finance is used to fetch data
import fix_yahoo_finance as yf
yf.pdr_override()
# input
symbol = 'MSFT'
start = '2016-01-01'
end = '2019-01-01'
# Read data
df = yf.download(symbol,start,end)
# View Columns
df.head()
# Create Signal and Moving Average for long and short
# Initialize signals column to zero
df['Signal'] = 0
df['Short_MA'] = df['Adj Close'].rolling(window=20).mean()
df['Long_MA'] = df['Adj Close'].rolling(window=50).mean()
# Create short and long signal with short window
short_window=40
df['Signal'][short_window:] = np.where(df['Short_MA'][short_window:] > df['Long_MA'][short_window:], 1, 0)
# Compute the difference between consecutive entries in signals
df['Positions'] = df['Signal'].diff()
# Create Positions
positions = pd.DataFrame(index=df.index).fillna(0.0)
positions = 100 * df['Signal']
# Daily and Total of Profit & Loss
df['Daily P&L'] = df['Adj Close'].diff() * df['Signal']
df['Total P&L'] = df['Daily P&L'].cumsum()
df
# Create Portfolio dataframe with Holding, Cash, Total, Returns
initial_capital = 10000 # Starting Cash
positions = pd.DataFrame(index=df.index).fillna(0.0)
positions = 100 * df['Positions']
portfolio = pd.DataFrame(index=df.index)
portfolio['Holdings'] = positions*df['Adj Close']
portfolio['Cash'] = initial_capital - portfolio['Holdings'].cumsum()
portfolio['Total'] = portfolio['Cash'] + positions.cumsum() * df['Adj Close']
portfolio['Returns'] = portfolio['Total'].pct_change()
```
## Plot Chart Signals
```
# Plot backtesting for Long and Short
fig = plt.figure(figsize=(14,10))
ax1 = plt.subplot(2, 1, 1)
df[['Short_MA', 'Long_MA']].plot(ax=ax1, lw=2.)
ax1.plot(df['Adj Close'])
ax1.plot(df.loc[df['Positions'] == 1.0].index, df.Short_MA[df['Positions'] == 1.0],'^', markersize=10, color='g', label='Long')
ax1.plot(df.loc[df['Positions'] == -1.0].index, df.Short_MA[df['Positions'] == -1.0],'v', markersize=10, color='r', label='Short')
ax1.set_title('Stock '+ symbol +' Closing Price')
ax1.set_ylabel('Price')
ax1.grid()
ax1.legend()
# Plot backtesting Portfolio Values
ax2 = plt.subplot(2, 1, 2)
ax2.plot(portfolio['Total'])
ax2.plot(portfolio['Returns'].loc[df['Positions'] == 1.0].index, portfolio['Total'][df['Positions'] == 1.0],'^', markersize=10, color='g', label='Long')
ax2.plot(portfolio['Returns'].loc[df['Positions'] == -1.0].index, portfolio['Total'][df['Positions'] == -1.0],'v', markersize=10, color='r', label='Short')
ax2.set_ylabel('Portfolio Value')
ax2.set_xlabel('Date')
ax2.legend()
ax2.grid()
```
<center><font size="4"><span style="color:blue">Demonstration 1: general presentation with some quantities and statistics</span></font></center>
This is a general presentation of the 3W dataset, which is, to the best of its authors' knowledge, the first realistic and public dataset with rare undesirable real events in oil wells that can readily be used as a benchmark for developing machine learning techniques related to the inherent difficulties of actual data.
For more information about the theory behind this dataset, refer to the paper **A Realistic and Public Dataset with Rare Undesirable Real Events in Oil Wells** published in the **Journal of Petroleum Science and Engineering**.
# 1. Introduction
This notebook presents the 3W dataset in a general way. For this, some tables, graphs, and statistics are presented.
# 2. Imports and Configurations
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import matplotlib.colors as mcolors
from matplotlib.patches import Patch
from pathlib import Path
from multiprocessing.dummy import Pool as ThreadPool
from collections import defaultdict
from natsort import natsorted
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
data_path = Path('..', 'data')
events_names = {0: 'Normal',
1: 'Abrupt Increase of BSW',
2: 'Spurious Closure of DHSV',
3: 'Severe Slugging',
4: 'Flow Instability',
5: 'Rapid Productivity Loss',
6: 'Quick Restriction in PCK',
7: 'Scaling in PCK',
8: 'Hydrate in Production Line'
}
columns = ['P-PDG',
'P-TPT',
'T-TPT',
'P-MON-CKP',
'T-JUS-CKP',
'P-JUS-CKGL',
'T-JUS-CKGL',
'QGL',
'class']
rare_threshold = 0.01
```
# 3. Quantities of Instances
The following table shows the quantities of instances that compose the 3W dataset, by type of event and by knowledge source: real, simulated and hand-drawn instances.
```
def class_and_file_generator(data_path, real=False, simulated=False, drawn=False):
for class_path in data_path.iterdir():
if class_path.is_dir():
class_code = int(class_path.stem)
for instance_path in class_path.iterdir():
if (instance_path.suffix == '.csv'):
if (simulated and instance_path.stem.startswith('SIMULATED')) or \
(drawn and instance_path.stem.startswith('DRAWN')) or \
(real and (not instance_path.stem.startswith('SIMULATED')) and \
(not instance_path.stem.startswith('DRAWN'))):
yield class_code, instance_path
real_instances = list(class_and_file_generator(data_path, real=True, simulated=False, drawn=False))
simulated_instances = list(class_and_file_generator(data_path, real=False, simulated=True, drawn=False))
drawn_instances = list(class_and_file_generator(data_path, real=False, simulated=False, drawn=True))
instances_class = [{'TYPE OF EVENT': str(c) + ' - ' + events_names[c], 'SOURCE': 'REAL'} for c, p in real_instances] + \
[{'TYPE OF EVENT': str(c) + ' - ' + events_names[c], 'SOURCE': 'SIMULATED'} for c, p in simulated_instances] + \
[{'TYPE OF EVENT': str(c) + ' - ' + events_names[c], 'SOURCE': 'DRAWN'} for c, p in drawn_instances]
df_class = pd.DataFrame(instances_class)
df_class_count = df_class.groupby(['TYPE OF EVENT', 'SOURCE']).size().reset_index().pivot('SOURCE', 'TYPE OF EVENT', 0).fillna(0).astype(int).T
df_class_count = df_class_count.loc[natsorted(df_class_count.index.values)]
df_class_count = df_class_count[['REAL', 'SIMULATED', 'DRAWN']]
df_class_count['TOTAL'] = df_class_count.sum(axis=1)
df_class_count.loc['TOTAL'] = df_class_count.sum(axis=0)
df_class_count
```
# 4. Rare Undesirable Events
When considering only **real instances** and a threshold of 1%, the following types of events are rare.
```
th = rare_threshold*df_class_count['REAL'][-1]
df_class_count.loc[df_class_count['REAL'] < th]
```
If **simulated instances** are also considered, the types of rare events become the ones listed below.
```
th = rare_threshold*(df_class_count['REAL'][-1]+df_class_count['SIMULATED'][-1])
df_class_count.loc[df_class_count['REAL']+df_class_count['SIMULATED'] < th]
```
After also considering the **hand-drawn instances**, we get the final list with rare types of events.
```
th = rare_threshold*(df_class_count['REAL'][-1]+df_class_count['SIMULATED'][-1]+df_class_count['DRAWN'][-1])
df_class_count.loc[df_class_count['REAL']+df_class_count['SIMULATED']+df_class_count['DRAWN'] < th]
```
# 5. Scatter Map of Real Instances
A scatter map with all the **real instances** is shown below. The oldest one occurred in the middle of 2012 and the most recent one in the middle of 2018. In addition to the total number of considered wells, this map provides an overview of how the occurrences of each undesirable event are distributed over time and across wells.
```
def load_instance(instances):
class_code, instance_path = instances
try:
well, instance_id = instance_path.stem.split('_')
df = pd.read_csv(instance_path, index_col='timestamp', parse_dates=['timestamp'])
assert (df.columns == columns).all(), "invalid columns in the file {}: {}".format(str(instance_path), str(df.columns.tolist()))
df['class_code'] = class_code
df['well'] = well
df['instance_id'] = instance_id
df = df[['class_code', 'well', 'instance_id'] + columns]
return df
except Exception as e:
raise Exception('error reading file {}: {}'.format(instance_path, e))
def load_instances(instances):
pool = ThreadPool()
all_df = []
try:
for df in pool.imap_unordered(load_instance, instances):
all_df.append(df)
finally:
pool.terminate()
df_all = pd.concat(all_df)
del all_df
return df_all
df_real = load_instances(real_instances)
df_time = df_real.reset_index().groupby(['well', 'instance_id', 'class_code'])['timestamp'].agg(['min', 'max'])
well_times = defaultdict(list)
well_classes = defaultdict(list)
for (well, instance_id, class_code), (tmin, tmax) in df_time.iterrows():
well_times[well].append((tmin.toordinal(), (tmax.toordinal() - tmin.toordinal())))
well_classes[well].append(int(class_code))
wells = df_real['well'].unique()
well_code = {w:i for i, w in enumerate(sorted(wells))}
cmap = plt.get_cmap('Paired')
my_colors = [cmap(i) for i in [3, 0, 5, 8, 11, 2, 1, 4, 9, 7, 6, 10]]
my_cmap = mcolors.ListedColormap(my_colors, name='my_cmap')
plt.register_cmap(name='my_cmap', cmap=my_cmap)
cmap = plt.get_cmap('my_cmap')
height = 5
border = 2
first_year = np.min(df_time['min']).year
last_year = np.max(df_time['max']).year
plt.rcParams['axes.labelsize'] = 9
plt.rcParams['font.size'] = 9
plt.rcParams['legend.fontsize'] = 9
fig, ax = plt.subplots(figsize=(9, 4))
yticks = []
yticks_labels = []
for well in well_times.keys():
times = well_times[well]
class_names = well_classes[well]
class_colors = list(map(cmap, class_names))
well_id = well_code[well]
yticks.append(well_id * height + height/2 - border/2)
yticks_labels.append(well)
ax.broken_barh(times, (well_id * height, height - border), facecolors=class_colors, edgecolors=class_colors)
ax.grid(True)
ax.set_axisbelow(True)
ax.set_yticks(yticks)
ax.set_yticklabels(yticks_labels)
ax.xaxis.set_major_locator(mdates.YearLocator())
ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y'))
ax.set_xlim(pd.Timestamp(first_year, 1, 1).toordinal(), pd.Timestamp(last_year, 12, 31).toordinal())
legend_colors = [Patch(facecolor=cmap(i), label=str(i) + ' - ' + events_name) for i, events_name in events_names.items()]
ax.legend(frameon=False, handles=legend_colors, loc='upper center', bbox_to_anchor=(0.5, 1.22), ncol=3);
#fig.savefig('figure.pdf', dpi=500, bbox_inches='tight')
```
# 6. Some Statistics
The 3W dataset's fundamental aspects related to the inherent difficulties of actual data are presented next.
```
def calc_stats_instance(instances):
_, instance_path = instances
n_vars_missing = 0
n_vars_frozen = 0
try:
df = pd.read_csv(instance_path, index_col='timestamp', parse_dates=['timestamp'])
vars = df.columns[:-1]
n_vars = len(vars)
for var in vars:
if df[var].isnull().all():
n_vars_missing += 1
u_values = df[var].unique()
if len(u_values) == 1 and not np.isnan(u_values):
n_vars_frozen += 1
n_obs = len(df)
n_obs_unlabeled = df['class'].isnull().sum()
return pd.DataFrame({'n_vars':[n_vars],
'n_vars_missing':[n_vars_missing],
'n_vars_frozen':[n_vars_frozen],
'n_obs':[n_obs],
'n_obs_unlabeled':[n_obs_unlabeled]
})
except Exception as e:
raise Exception('error reading file {}: {}'.format(instance_path, e))
def calc_stats_instances(instances):
pool = ThreadPool()
all_stats = []
try:
for stats in pool.imap_unordered(calc_stats_instance, instances):
all_stats.append(stats)
finally:
pool.terminate()
df_all_stats = pd.concat(all_stats)
del all_stats
return df_all_stats.sum()
global_stats = calc_stats_instances(real_instances+simulated_instances+drawn_instances)
print('missing variables: {} of {} ({:.2f}%)'.format(global_stats['n_vars_missing'], global_stats['n_vars'], 100*global_stats['n_vars_missing']/global_stats['n_vars']))
print('frozen variables: {} of {} ({:.2f}%)'.format(global_stats['n_vars_frozen'], global_stats['n_vars'], 100*global_stats['n_vars_frozen']/global_stats['n_vars']))
print('unlabeled observations: {} of {} ({:.2f}%)'.format(global_stats['n_obs_unlabeled'], global_stats['n_obs'], 100*global_stats['n_obs_unlabeled']/global_stats['n_obs']))
```
# Data
From [link](https://github.com/chihyaoma/regretful-agent/tree/master/tasks/R2R-pano)
Each JSON Lines entry contains a guide annotation for a path in the environment.
Data schema:
```python
{'split': str,
'instruction_id': int,
'annotator_id': int,
'language': str,
'path_id': int,
'scan': str,
'path': Sequence[str],
'heading': float,
'instruction': str,
'timed_instruction': Sequence[Mapping[str, Union[str, float]]],
'edit_distance': float}
```
Field descriptions:
* `split`: The annotation split: `train`, `val_seen`, `val_unseen`,
`test_standard`.
* `instruction_id`: Uniquely identifies the guide annotation.
* `annotator_id`: Uniquely identifies the guide annotator.
* `language`: The IETF BCP 47 language tag: `en-IN`, `en-US`, `hi-IN`,
`te-IN`.
* `path_id`: Uniquely identifies a path sampled from the Matterport3D
environment.
* `scan`: Uniquely identifies a scan in the Matterport3D environment.
* `path`: A sequence of panoramic viewpoints along the path.
* `heading`: The initial heading in radians. Following R2R, the heading angle
is zero facing the y-axis with z-up, and increases by turning right.
* `instruction`: The navigation instruction.
* `timed_instruction`: A sequence of time-aligned words in the instruction.
Note that a small number of words are missing the `start_time` and
`end_time` fields.
* `word`: The aligned utterance.
* `start_time`: The start of the time span, w.r.t. the recording.
* `end_time`: The end of the time span, w.r.t. the recording.
* `edit_distance` Edit distance between the manually transcribed instructions
and the automatic transcript generated by Google Cloud
[Text-to-Speech](https://cloud.google.com/text-to-speech) API.
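For illustration, the entry below is a made-up record that follows the fields listed above (the values are purely illustrative and not taken from the dataset); it also shows one way to guard against the missing `start_time`/`end_time` fields:
```python
# A minimal, self-contained sketch of one guide annotation (all values invented for illustration).
entry = {
    'split': 'train', 'instruction_id': 0, 'annotator_id': 0, 'language': 'en-US',
    'path_id': 0, 'scan': 'scan_0', 'path': ['vp_0', 'vp_1'], 'heading': 0.0,
    'instruction': 'walk forward and stop',
    'timed_instruction': [{'word': 'walk', 'start_time': 0.0, 'end_time': 0.4},
                          {'word': 'forward'}],  # this word is missing its time span
    'edit_distance': 0.1,
}
# Only use words that actually carry a time span:
durations = [w['end_time'] - w['start_time']
             for w in entry['timed_instruction']
             if 'start_time' in w and 'end_time' in w]
print(len(entry['path']), 'viewpoints;', len(durations), 'time-aligned words')
```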
```
# The features for all points in the dataset take up 3.9 GB. Since that does not fit, I have to take a sample.
DATA_SIZE_FACTOR = 0.01
import json
import os
import numpy as np
import pandas as pd
DATADIR = './data/original'
DATAPATHS = {
'train': os.path.join(DATADIR, 'R2R_train.json'),
'test': os.path.join(DATADIR, 'R2R_test.json'),
'val seen': os.path.join(DATADIR, 'R2R_val_seen.json'),
'val unseen': os.path.join(DATADIR, 'R2R_val_unseen.json'),
}
with open(DATAPATHS['train']) as f:
train = json.load(f)
train_df = pd.DataFrame.from_records(train)
with open(DATAPATHS['test']) as f:
test = json.load(f)
test_df = pd.DataFrame.from_records(test)
with open(DATAPATHS['val seen']) as f:
val_seen = json.load(f)
val_seen_df = pd.DataFrame.from_records(val_seen)
with open(DATAPATHS['val unseen']) as f:
val_unseen = json.load(f)
val_unseen_df = pd.DataFrame.from_records(val_unseen)
def scan_ids(df):
return df['scan'].unique()
def n_scenarios(df):
return scan_ids(df).shape[0]
print(f"Hay {len(train)} ejemplos en el set de entrenamiento.")
print(f"Hay {len(test)} ejemplos en el set de test.")
print(f"Hay {len(val_seen)} ejemplos en el set de validacion seen.")
print(f"Hay {len(val_unseen)} ejemplos en el set de validacion unseen.")
print("----------------------------------------------------")
print(f"Hay {n_scenarios(train_df)} escenarios distintos en el set de entrenamiento.")
print(f"Hay {n_scenarios(test_df)} escenarios distintos en el set de test.")
print(f"Hay {n_scenarios(val_seen_df)} escenarios distintos en el set de validacion seen. (Todos extraidos de train)")
print(f"Hay {n_scenarios(val_unseen_df)} escenarios distintos en el set de validacion unseen.")
print(f"Hay {n_scenarios(pd.concat([train_df, test_df, val_seen_df, val_unseen_df]))} escenarios distintos en total.")
print("----------------------------------------------------")
def sample_scan_ids(df):
unique_scan_ids = scan_ids(df)
    return np.random.choice(unique_scan_ids, size=int(np.ceil(unique_scan_ids.shape[0] * DATA_SIZE_FACTOR)), replace=False) # ceil ensures at least 1.
train_scan_ids = sample_scan_ids(train_df)
test_scan_ids = sample_scan_ids(test_df)
val_unseen_scan_ids = sample_scan_ids(val_unseen_df)
val_seen_scan_ids = np.intersect1d(train_scan_ids, scan_ids(val_seen_df))
def filter_by_scan_id(split_dict, scans):
is_in_scans = lambda row: row['scan'] in scans
return list(filter(is_in_scans, split_dict))
new_train = filter_by_scan_id(train, train_scan_ids)
new_test = filter_by_scan_id(test, test_scan_ids)
new_val_unseen = filter_by_scan_id(val_unseen, val_unseen_scan_ids)
new_val_seen = filter_by_scan_id(val_seen, val_seen_scan_ids)
new_train_df = pd.DataFrame.from_records(new_train)
new_test_df = pd.DataFrame.from_records(new_test)
new_val_unseen_df = pd.DataFrame.from_records(new_val_unseen)
new_val_seen_df = pd.DataFrame.from_records(new_val_seen)
# print(val_seen_scan_ids)
print(f"Quedan {n_scenarios(new_train_df)} escenarios distintos en el set de entrenamiento.")
print(f"Quedan {n_scenarios(new_test_df)} escenarios distintos en el set de test.")
print(f"Quedan {n_scenarios(new_val_seen_df)} escenarios distintos en el set de validacion seen. (Todos extraidos de train)")
print(f"Quedan {n_scenarios(new_val_unseen_df)} escenarios distintos en el set de validacion unseen.")
print(f"Quedan {n_scenarios(pd.concat([new_train_df, new_test_df, new_val_seen_df, new_val_unseen_df]))} escenarios distintos en total.")
# Save the new, reduced datasets
DESTINY_DIR = 'data'
DESTINY_PATHS = {
'train': os.path.join(DESTINY_DIR, 'R2R_train.json'),
'test': os.path.join(DESTINY_DIR, 'R2R_test.json'),
'val seen': os.path.join(DESTINY_DIR, 'R2R_val_seen.json'),
'val unseen': os.path.join(DESTINY_DIR, 'R2R_val_unseen.json'),
}
def save_json(obj, destiny_path):
with open(destiny_path, 'w') as f:
json.dump(obj, f)
save_json(new_train, DESTINY_PATHS['train'])
save_json(new_test, DESTINY_PATHS['test'])
save_json(new_val_seen, DESTINY_PATHS['val seen'])
save_json(new_val_unseen, DESTINY_PATHS['val unseen'])
# Save the ids so the features can be filtered later (at runtime)
all_scan_ids = np.hstack([
train_scan_ids,
test_scan_ids,
val_seen_scan_ids,
val_unseen_scan_ids
])
save_json(list(np.unique(all_scan_ids)), 'data/scan_ids.json')
```
```
# default_exp utils
```
# Utils
> contains various util functions and classes
```
#hide
from nbdev.showdoc import *
#export
import os
import re
import pandas as pd
import numpy as np
from random import randrange
from pm4py.objects.log.importer.xes import importer as xes_importer
from pm4py.objects.conversion.log import converter as log_converter
from fastai.torch_basics import *
from fastai.basics import *
from fastai.metrics import accuracy
from fastai.learner import *
from fastai.callback.all import *
#hide
%load_ext autoreload
%autoreload 2
%load_ext memory_profiler
%matplotlib inline
```
# Functions
```
#export
def f1score(truth, classified):
'Calculates F1 score given a list of the truth and classified values'
tp = len(set(truth).intersection(set(classified)))
t = len(truth)
p = len(classified)
if (p == 0):
precision = 1
else:
precision = tp/p
if (t == 0):
recall = 1
else:
recall = tp/t
if (precision == 0 and recall == 0):
f1 = 0
else:
f1 = 2*precision*recall/(precision+recall)
return f1, tp, t, p
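# Illustrative sanity check for f1score (hypothetical trace ids, not part of the original notebook):
# suppose traces 1-4 are truly anomalous and the detector flagged traces 2, 3 and 7.
example_f1, example_tp, example_t, example_p = f1score([1, 2, 3, 4], [2, 3, 7])
print('F1={:.2f} (tp={}, |truth|={}, |classified|={})'.format(example_f1, example_tp, example_t, example_p))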
#export
def df_preproc (df):
    'Preprocesses the df for anomaly detection; adds start/end events to every trace'
df['event_id']= df.index
df.index = df['trace_id']
df = df[["event_id", "activity", "trace_id"]]
for i in df['trace_id'].unique():
df_cop = df.loc[i]
df.loc[i, 'event_id'] = np.arange(1, len(df_cop)+1)
df.reset_index(drop=True, inplace=True)
trace_ends = list(df.loc[df['event_id']==1].index)[1:]
trace_ends.append(len(df))
new = np.insert(np.array(df),trace_ends, [-1, 'end', 0], axis=0)
df = pd.DataFrame(data=new, columns=["event_id", "activity", "trace_id"])
trace_starts = list(df.loc[df['event_id']==1].index)
new = np.insert(np.array(df),trace_starts, [0, 'start', 0], axis=0)
trace_starts = np.where(new[:,0]==0)[0]
trace_ends = np.where(new[:,0]==-1)[0]
new[trace_starts,2] = new[np.array(trace_starts)+1,2]
new[trace_ends,2] = new[np.array(trace_starts)+1,2]
new[trace_ends,0] = new[trace_ends-1,0]+1
df = pd.DataFrame(data=new, columns=["event_id", "activity", "trace_id"])
df.index = df['trace_id']
return df
#export
def load_data (data='PDC2016'):
'loads eventlogs from different sources, returns event_df with start/end event, test_df and truth_df'
#---------------PDC2016|2017--------------------------------------------------
if (data == 'PDC2016' or data == 'PDC2017'):
directory = r'/mnt/data/jlahann/jupyterlab/fastpm/nbs/PDC data/'+ data +'/train'
files = [filename for filename in os.listdir(directory) if filename.split('.')[-1] =='csv']
files.sort()
pick = randrange(len(files)) #picks random eventlog
df = pd.read_csv('/mnt/data/jlahann/jupyterlab/fastpm/nbs/PDC data/'+ data +'/train/' + files[pick], sep=',')
df = df.rename(columns={"act_name": "activity", "case_id": "trace_id"})
event_df = df_preproc(df)
directory = r'/mnt/data/jlahann/jupyterlab/fastpm/nbs/PDC data/'+ data +'/test'
files = [filename for filename in os.listdir(directory)]
files.sort()
log = xes_importer.apply('/mnt/data/jlahann/jupyterlab/fastpm/nbs/PDC data/'+ data +'/test/' + files[pick])
df = log_converter.apply(log, variant=log_converter.Variants.TO_DATA_FRAME)
df = df.rename(columns={"concept:name": "activity", "case:concept:name": "trace_id"})
df = df[['activity', 'trace_id']]
test_df = df_preproc(df)
truth = [log[i].attributes['pdc:isPos'] for i in range(len(log))]
d = {'case': range(1, len(log)+1), 'normal': truth}
truth_df = pd.DataFrame (data=d)
#---------------PDC2019--------------------------------------------------
if (data == 'PDC2019'):
directory = r'/mnt/data/jlahann/jupyterlab/fastpm/nbs/PDC data/'+ data +'/train'
files = [filename for filename in os.listdir(directory) if filename.split('.')[-1] =='csv']
files.sort()
pick = randrange(len(files)) #picks random eventlog
df = pd.read_csv('/mnt/data/jlahann/jupyterlab/fastpm/nbs/PDC data/'+ data +'/train/' + files[pick], sep=',')
df = df.rename(columns={"event": "activity", "case": "trace_id"})
df = df[['activity', 'trace_id']]
event_df = df_preproc(df)
directory = r'/mnt/data/jlahann/jupyterlab/fastpm/nbs/PDC data/'+ data +'/test'
files = [filename for filename in os.listdir(directory)]
files.sort()
log = xes_importer.apply('/mnt/data/jlahann/jupyterlab/fastpm/nbs/PDC data/'+ data +'/test/' + files[pick])
df = log_converter.apply(log, variant=log_converter.Variants.TO_DATA_FRAME)
df = df.rename(columns={"concept:name": "activity", "case:concept:name": "trace_id"})
df = df[['activity', 'trace_id']]
test_df = df_preproc(df)
truth = [log[i].attributes['pdc:isPos'] for i in range(len(log))]
d = {'case': range(1, len(log)+1), 'normal': truth}
truth_df = pd.DataFrame (data=d)
#---------------PDC2020--------------------------------------------------
if (data == 'PDC2020'):
directory = r'/mnt/data/jlahann/jupyterlab/fastpm/nbs/PDC data/'+ data +'/train'
files = [filename for filename in os.listdir(directory) if filename.split('.')[-1] =='xes']
files.sort()
pick = randrange(len(files)) #picks random eventlog
log = xes_importer.apply('/mnt/data/jlahann/jupyterlab/fastpm/nbs/PDC data/'+ data +'/test/' + files[pick])
df = log_converter.apply(log, variant=log_converter.Variants.TO_DATA_FRAME)
df = df.rename(columns={"concept:name": "activity", "case:concept:name": "trace_id"})
df['trace_id']= df['trace_id'].apply(lambda x: int(x.split()[1]))
df = df[['activity', 'trace_id']]
event_df = df_preproc(df)
test_df = event_df
truth = [log[i].attributes['pdc:isPos'] for i in range(len(log))]
d = {'case': range(1, len(log)+1), 'normal': truth}
truth_df = pd.DataFrame (data=d)
return event_df, test_df, truth_df
#export
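# As far as I can tell, TestModel is a small recurrent next-activity predictor:
# each activity id in a window is embedded, folded into a running hidden state,
# squeezed through a 2-unit bottleneck and projected back onto the activity
# vocabulary as log-probabilities.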
class TestModel(nn.Module):
def __init__(self, pp_data ,is_cuda=False,vocab_col='activity'):
super().__init__()
vocab_size=len(pp_data.procs.categorify[vocab_col])
self.vocab_index={s:i for i,s in enumerate(pp_data.cat_names[0])}[vocab_col]
n_fac, n_hidden=round(sqrt(vocab_size))+1, round(sqrt(vocab_size)*2)
self.n_hidden=n_hidden
self.is_cuda=is_cuda
self.e = nn.Embedding(vocab_size,n_fac)
self.l_in = nn.Linear(n_fac, n_hidden)
self.l_hidden = nn.Linear(n_hidden, n_hidden)
self.l_bottleneck = nn.Linear(n_hidden, 2)
self.l_out = nn.Linear(2, vocab_size)
def forward(self, xb):
cs=xb.permute(1,2,0)[self.vocab_index]
bs = len(cs[0])
h = torch.zeros((bs,self.n_hidden))
if self.is_cuda: h=h.cuda()
for c in cs:
inp = torch.relu(self.l_in(self.e(c)))
h = torch.tanh(self.l_hidden(h+inp))
h = self.l_bottleneck(h)
return F.log_softmax(self.l_out(h),dim=0)
#export
class transform(ItemTransform):
def encodes(self,e):
(ecats,econts,tcats,tconts),(ycats,yconts)=e
return (ecats),ycats
#export
def _shift_columns (a,ws=3): return np.dstack(list(reversed([np.roll(a,i) for i in range(0,ws)])))[0]
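# As far as I can tell, windows_fast turns the flat event table into fixed-length
# sliding windows of `ws` events per trace (zero-padded at each trace start), and
# the second return value indexes the "next event" each window should predict.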
def windows_fast(df,event_ids,ws=5,pad=None):
max_trace_len=int(event_ids.max())+1
trace_start = np.where(event_ids == 0)[0]
trace_len=[trace_start[i]-trace_start[i-1] for i in range(1,len(trace_start))]+[len(df)-trace_start[-1]]
idx=[range(trace_start[i]+(i+1)
*(ws-1),trace_start[i]+trace_len[i]+(i+1)*(ws-1)) for i in range(len(trace_start))]
idx=np.array([y for x in idx for y in x])
trace_start = np.repeat(trace_start, ws-1)
tmp=np.stack([_shift_columns(np.insert(np.array(df[i]), trace_start, 0, axis=0),ws=ws) for i in list(df)])
tmp=np.rollaxis(tmp,1)
res=tmp[idx]
if pad: res=np.pad(res,((0,0),(0,0),(pad-ws,0)))
return res[:-1],np.array(range(1,len(res)))
data = 'PDC2020'
directory = r'/mnt/data/jlahann/jupyterlab/fastpm/nbs/PDC data/'+ data +'/train'
files = [filename for filename in os.listdir(directory) if filename.split('.')[-1] =='xes']
files.sort()
pick = randrange(len(files)) #picks random eventlog
log = xes_importer.apply('/mnt/data/jlahann/jupyterlab/fastpm/nbs/PDC data/'+ data +'/test/' + files[pick])
df = log_converter.apply(log, variant=log_converter.Variants.TO_DATA_FRAME)
df = df.rename(columns={"concept:name": "activity", "case:concept:name": "trace_id"})
df['trace_id']= df['trace_id'].apply(lambda x: int(x.split()[1]))
df = df[['activity', 'trace_id']]
event_df = df_preproc(df)
event_df
event_df, test_df, df_truth = load_data(data='PDC2017')
df_truth
df
```
# Selection
## Booleans, numeric values, and expressions

- Note: the equality comparison operator is two equals signs (`==`); a single equals sign (`=`) means assignment
- In Python the integer 0 can stand for False, and any other number stands for True
- The use of `is` inside conditional statements will be covered later (a short illustration follows below)
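A short illustration of these points (a minimal sketch with arbitrary values):
```
x = 5
print(x == 5)            # '==' compares, '=' assigns
print(bool(0), bool(7))  # 0 acts as False, any other number as True
a = [1, 2]
b = [1, 2]
print(a == b, a is b)    # '==' checks equality, 'is' checks identity -> True False
```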
```
print(1>2)
yu=10000
a=eval(input('input money:'))
if a<=yu:
yu=yu-a
print("余额为:",yu)
else:
print("余额不足")
import os
a=eval(input('input money:'))
if a<=yu:
yu=yu-a
print("余额为:",yu)
else:
print("余额不足")
```
## String comparison uses ASCII values
```
'a'>'A'
'abc'>'acd'
```
## Markdown
- https://github.com/younghz/Markdown
## EP:
- <img src="../Photo/34.png"></img>
- Enter a number and determine whether it is odd or even
```
bool(1.0)
bool(0.0)
```
## Generating random numbers
- The function random.randint(a,b) produces a random integer between a and b, including both a and b
```
import random
random.randint(0,10)
```
## Other random functions
- random.random() returns a random float in the half-open interval [0.0, 1.0)
- random.randrange(a,b) is also half-open: it returns an integer in [a, b)
```
random.random()
random.randrange(5,20)
```
## EP:
- Generate two random integers number1 and number2, show them to the user, ask the user for their sum, and check whether it is correct
- Advanced: write a program that calls on students by a random index (a small sketch follows the next code cell)
```
import random
for i in range(2):
if i==0:
a=random.randrange(1,11)
else:
b=random.randrange(1,11)
s=a+b
print(a,b)
c,d=eval(input('input:'))
if (c+d)==s:
print("true")
else:
print("false")
```
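A minimal sketch for the advanced roll-call exercise above (the name list here is made up):
```
import random
students = ['student 1', 'student 2', 'student 3', 'student 4']
print(random.choice(students))   # call on one student at random
```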
## The if statement
- A one-way if statement runs the statements inside the if only when the condition is true
- Python has several selection statements:
> - one-way if
  - two-way if-else
  - nested if
  - multi-way if-elif-else
- Note: when a statement contains sub-statements, there must be at least one level of indentation; in other words, if a statement has children, they must be indented
- Never mix tabs and spaces for indentation; use only tabs or only spaces
- If some output must be shown regardless of whether the if condition is true, align that statement with the if
```
grade=eval(input('input grade:'))
if grade>100:
print("不存在")
elif grade>=90:
print("优秀")
elif grade>=80:
print("良好")
elif grade>=60:
print("及格")
else:
print("不及格")
```
## EP:
- The user enters a number; determine whether it is odd or even
- Advanced: see the birthday-guessing case study in Section 4.5
```
a=eval(input("input:"))
if a%2==0:
print("ou")
else:
print("ji")
```
## The two-way if-else statement
- If the condition is true, the if branch runs; otherwise the else branch runs
## EP:
- Generate two random integers number1 and number2, show them to the user, ask the user to enter the numbers, and report whether they are correct: print "you're correct" if so, otherwise print the right answer
```
for i in range(2):
if i==0:
a=random.randrange(1,11)
else:
b=random.randrange(1,11)
s=a+b
print(a,b)
c,d=eval(input('input:'))
if c==a and d==b:
print("true")
else:
print("false")
```
## Nested if and multi-way if-elif-else

## EP:
- Prompt the user for a year, then display the Chinese zodiac animal for that year

- A program that computes the body mass index (BMI)
- BMI = weight in kilograms divided by the square of the height in meters

```
year=eval(input('input:'))
i=year%12
if i==0:
print("猴")
elif i==1:
    print("鸡")
else:
if i==2:
print("狗")
else:
if i==3:
print("猪")
else:
if i==4:
print("鼠")
else:
if i==5:
print("牛")
else:
if i==6:
print("虎")
else:
if i==7:
print("兔")
else:
if i==8:
print("龙")
else:
if i==9:
print("蛇")
else:
if i==10:
print("马")
else:
if i==11:
print("羊")
weight=eval(input("weight:"))
high=eval(input("high:"))
bmi=weight/(high*high)    # BMI uses the square of the height (in metres)
if bmi>=30.0:
print("痴肥")
else:
if bmi>=25.0 and bmi<30.0:
print("超重")
else:
if bmi>=18.5 and bmi<25.0:
print("标准")
else:
if bmi<18.5:
print("超轻")
```
## Logical operators



## EP:
- Leap year test: a year is a leap year if it is divisible by 4 but not by 100, or if it is divisible by 400
- Prompt the user for a year and report whether it is a leap year
- Prompt the user for a number and determine whether it is a narcissistic (Armstrong) number
```
year=eval(input('input year:'))
if (year%4==0 and year%100!=0) or year%400==0:
print("闰年")
else:
print("不是闰年")
a=int(input("input a:"))
a1=int(a/100)
a2=int(a%100/10)
a3=int(a%10)
if a==(a1**3)+(a2**3)+(a3**3):
print("yes")
else:
print("not")
for a in range(100,1000):
a1=int(a/100)
a2=int(a%100/10)
a3=int(a%10)
if a==(a1**3)+(a2**3)+(a3**3):
print(a)
import math
a=int(input("input a:"))
str_a=str(a)
a1=int(str_a[0])
a2=int(str_a[1])
a3=int(str_a[2])
if a==math.pow(a1,3)+math.pow(a2,3)+math.pow(a3,3):
print("yes")
else:
print("not")
```
## Case study: lottery

```
import random
a=random.randrange(10,100)
print(a)
a1=int(a/10)
a2=a%10
b=eval(input("input d:"))
b1=int(b/10)
b2=b%10
if b==a:
print("10000")
else:
if b1==a2 and b2==a1:
print("3000")
else:
if b1==a1 or b2==a2:
print("1000")
```
# Homework
- 1

```
import math
a,b,c=eval(input("input a,b,c:"))
m=b*b-4*a*c
if m>0:
r1=(-b+math.sqrt(m))/(2*a)
r2=(-b-math.sqrt(m))/(2*a)
print(round(r1,6),round(r2,5))
else:
if m==0:
r1=(-b+math.sqrt(m))/(2*a)
print(r1)
else:
print("not roots")
```
- 2

```
for i in range(0,2):
a1=random.randint(0,100)
a2=random.randint(0,100)
print(a1,a2)
s=a1+a2
b=eval(input('input b:'))
if b==s:
print("yes")
else:
print("not")
```
- 3

```
a=eval(input("today is day:"))
b=eval(input("numbers days:"))
c=(a+b%7)%7
week=['Sun','Mon','Tue','Wed','Thu','Fri','Sat']
print("today is",week[a],"and the future day is",week[c])
```
- 4

```
import re
a,b,c=eval(input("input a,b,c:"))
m=max(a,b,c)
s=min(a,b,c)
print(s,a+b+c-m-s,m)
```
- 5

```
a1,a2=eval(input('input a1,a2:'))
b1,b2=eval(input('input b1,b2:'))
m1=a2/a1
m2=b2/b1
if m1>m2:
print("2")
else:
print("1")
```
- 6

```
month,year=eval(input("input month,year:"))
if ((year%4==0 and year%100!=0) or year%400==0) and month==2:
day=29
print(year,day)
else:
if month==2:
day=28
print(year,day)
else:
if month==1 or month==3 or month==5 or month==7 or month==8 or month==10 or month==12:
day=31
print(year,day)
else:
day=30
print(year,day)
```
- 7

```
import random
a=random.randint(0,2)
b=eval(input('input think:'))
if b==a:
print("true")
else:
print("false")
```
- 8

```
import random
a=eval(input('input 0、1或2:'))
b=random.randint(0,2)   # the computer also picks 0, 1 or 2
print(b)
if (a==0 and b==2) or (a==1 and b==0) or (a==2 and b==1):
print("win")
else:
if a==b:
print("pingjun")
else:
print("lose")
```
- 9

```
year=eval(input("input year:"))
month=eval(input("input month:"))
day=eval(input("input day:"))
q=day
m=month
if m==1 or m==2:
    m=m+12        # Zeller's congruence treats Jan/Feb as months 13/14 of the previous year
    year=year-1
j=year//100
k=year%100
h=(q+(26*(m+1))//10+k+k//4+j//4+5*j)%7
week=['Sat','Sun','Mon','Tue','Wed','Thu','Fri']   # in Zeller's formula h==0 is Saturday
print(week[h])
```
- 10

```
puke=['Ace','2','3','4','5','6','7','8','9','10','Jack','Queen','King']
type=['梅花','红桃','方块','黑桃']
a=int(random.randint(0,12))   # 13 ranks: valid indexes are 0-12
b=int(random.randint(0,3))    # 4 suits: valid indexes are 0-3
print(puke[int(a)],type[int(b)])
```
- 11

```
a=eval(input("input:"))
bai=a//100
ge=a%10
if bai==ge:
print("回文")
else:
print("不是回文")
```
- 12

```
a,b,c=eval(input("input a,b,c:"))
if (a+b>c) and (a+c>b) and (b+c>a):
l=a+b+c
print("周长:",l)
else:
print("不合法")
```
# Jupyter basics
## Simple python
All data lives in RAM. Once a variable has been created, it can be used until it is explicitly deleted.
```
a = 1
print(a)
```
Each cell behaves like code executed in the global scope.
```
def mult_list(lst, n):
return lst * n
arr = mult_list([1, 2, 3], 3)
print(arr)
```
We do not have to use print if we only want to display the last value:
```
arr
```
There is still a difference, though:
```
print(mult_list(arr, 10))
mult_list(arr, 10)
```
A semicolon suppresses the output of the result:
```
mult_list(arr, 10);
```
## Adding plots (matplotlib)
We import the library, set up the style, and tell Jupyter to embed plots directly in the cell:
```
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set(font_scale=2, style='whitegrid', rc={'figure.figsize': (10, 6), 'axes.edgecolor': '#222222'})
plt.plot([(0.05 * x) ** 2 for x in range(100)])
plt.title('Parabolic graph')
plt.xlabel('X'); plt.ylabel('Y');
```
## Markdown
The description of the code sits right next to the code, and it is more than plain comments.
* You can make lists here
* <div style='color:red'>**Style everything to your taste.**</div>
* Add [hyperlinks](http://jupyter.org/)
* Write formulas in LaTeX: $\sum_{i=1}^{\infty}\frac{1}{i^2} = \frac{\pi^2}{6}$
* And even embed images
<img style="float: left;" src="figures/power.png">
* And, if you like, animation:
```
from IPython.display import Image
Image(filename="figures/cat.gif")
```
### Writing books in Jupyter
<a href="https://github.com/jakevdp/PythonDataScienceHandbook">
<img style="float: left;" src="PythonDataScienceHandbook/notebooks/figures/PDSH-cover.png">
</a>
<a href="http://readiab.org/">
<img style="float: right;" src="figures/bioinformatics_intro.png">
</a>
[An excerpt from the Python Data Science Handbook](PythonDataScienceHandbook/notebooks/05.06-Linear-Regression.ipynb)
In R a similar approach is used for package documentation; these are called **vignettes**. For example, the [vignette for the seurat package on scRNA-seq analysis](http://satijalab.org/seurat/pbmc3k_tutorial.html).
## Jupyter kernels
Jupyter can run a great many different languages. This is done through [Jupyter Kernels](https://github.com/jupyter/jupyter/wiki/Jupyter-kernels).
```
kernels_df = pd.read_csv('./jupyter_kernels.csv')
kernels_df.head()
```
An [example](RExample.ipynb) for [IRkernel](https://github.com/IRkernel/IRkernel).
## System calls
After '!' you can write any shell command:
```
!ls -l ./
```
You can even use Python variables in them:
```
folder = 'PythonDataScienceHandbook/notebooks/'
!ls -l ./$folder | head
files = !ls ./$folder
files[1:10]
```
There are also commands for writing to a file:
```
out_file = 'test_out.txt'
%%file $out_file
Line 1
Line 2
Line N
!cat $out_file
!rm $out_file
```
## Line magics and more
Magic commands start with one or two '%' signs. The most important one is `%quickref`.
```
%quickref
```
Commands starting with a single '%' act on one line. Commands starting with two act on the whole cell (see `%%file` above).
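For instance, `%time` (used below) times a single statement, while a cell magic such as `%%time` has to be the first line of its cell and times the whole cell. A minimal sketch (in a real notebook this would be its own cell):
```
%%time
total = 0
for i in range(10 ** 6):
    total += i
```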
### Code profiling
Let's define a few functions with different performance characteristics:
```
def slow_function():
[_ for i in range(10000000)]
def not_so_slow_function():
[_ for i in range(1000)]
def complex_slow_function():
slow_function()
[not_so_slow_function() for i in range(1000)]
[slow_function() for i in range(10)]
```
The magic commands %time and %timeit measure the execution time of the line that follows them. %timeit differs in that it runs the same command many times and averages the result.
```
%time slow_function()
```
But be careful: Python tends to cache results, so the first run of a function may be slower than all subsequent runs:
```
%time not_so_slow_function()
%timeit not_so_slow_function()
```
Also beware of code like this:
```
arr = []
%timeit arr.append(1)
len(arr)
```
The profiling commands give more detailed information about how a function executes.
```
%prun complex_slow_function()
```
[There are different kinds of profilers, both for time and for memory](http://pynash.org/2013/03/06/timing-and-profiling/)
### Other useful commands
View a function's source code:
```
%psource complex_slow_function
```
Description of a function or object:
```
?map
a = complex(10, 20)
?a
```
Full information about a function:
```
import numpy as np
??np.sum
```
Capturing a cell's output:
```
%%capture a
print(11)
a.show()
a.stdout, a.stderr
```
The variable `_` holds the result of the last executed cell, `__` the one before that, and `___` the one before that (three levels in total).
```
[1, 2, 3]
t = _
print(t)
```
# Scientific computing
## More plots
### Matplotlib
A line plot is built with the 'plot' function:
```
x_vals = list(range(100))
y_vals = [(0.05 * x) ** 2 for x in x_vals]
plt.plot(y_vals)
plt.title('Parabolic graph')
plt.xlabel('X'); plt.ylabel('Y');
```
The scatter function lets us draw with individual points:
```
plt.scatter(x_vals[::5], y_vals[::5])
plt.title('Parabolic graph')
plt.xlabel('X'); plt.ylabel('Y')
plt.xlim(min(x_vals) - 1, max(x_vals) + 30); plt.ylim(min(y_vals) - 1, max(y_vals) + 10);
```
In matplotlib terms, every plot consists of a figure (the canvas) divided into axes. To show several subplots on one figure, use the subplots() function.
Note that the AxesSubplot class uses the methods set_title, set_xlabel, and so on, instead of title, xlabel, etc.
```
fig, axes = plt.subplots(2, 2, figsize=(15, 8))
axes[0][0].plot(y_vals)
axes[0][1].hist(y_vals)
axes[1][0].scatter(x_vals[::10], y_vals[::10])
axes[1][1].hist(y_vals, bins=20);
for i, ax in enumerate(np.concatenate(axes)):
ax.set_title('Plot %d' % i)
plt.tight_layout();
```
Let's draw the same thing on a single plot:
```
plt.hist(y_vals, bins=10, label='10 bins', alpha=0.7)
plt.hist(y_vals, bins=20, label='20 bins', alpha=0.7)
plt.legend(loc='upper right');
```
### Statistical plots (seaborn)
```
tips = sns.load_dataset("tips")
sns.lmplot(x="total_bill", y="tip", hue="smoker", data=tips, size=5);
axes = plt.subplots(ncols=2, figsize=(10, 5))[1]
sns.violinplot(x='smoker', y='tip', data=tips, ax=axes[0])
sns.swarmplot(x='smoker', y='tip', data=tips, ax=axes[1])
axes[0].set_title('Violin plot')
axes[1].set_title('Swarm plot')
plt.tight_layout()
```
The [package website](https://seaborn.pydata.org/examples/index.html) has plenty of beautiful examples.
### Interactive plots (bokeh)
```
import bokeh.plotting as bp
import bokeh.models as bm
from bokeh.palettes import Spectral6
from bokeh.transform import factor_cmap
bp.output_notebook()
hover = bm.HoverTool(tooltips=[
("(x,y)", "($x, $y)"),
("Smoker", "@smoker"),
("Time", "@time"),
("Sex", "@sex")
])
p = bp.figure(tools="pan, box_zoom, reset, save, crosshair",
x_axis_label='Tip', y_axis_label='Total bill')
p.tools.append(hover)
p.circle('tip', 'total_bill',
fill_color=factor_cmap('smoker', palette=Spectral6, factors=list(tips.smoker.unique())),
legend='smoker', source=bm.ColumnDataSource(tips), size=10)
bp.show(p)
```
## NumPy - mathematics in Python
### Arrays
```
import numpy as np
```
All operations are overloaded for numpy arrays, including printing with `print`:
```
np_arr = np.array([1, 2, 3, 4, 5])
print(np_arr * 2)
print(np_arr * (np_arr + 1))
print(np_arr >= 3)
```
numpy has its own analogues of the standard Python functions, and they should be used when working with numpy arrays:
```
arr = [1, 2, 3] * 100000
np_arr = np.array(arr)
%time sum(np_arr)
%time all(np_arr > 0)
%time np.sum(np_arr)
%time _=np.all(np_arr > 0)
```
However, they do not guarantee speed on standard data structures:
```
%time sum(arr)
%time _=np.sum(arr)
```
A useful function is `np.arange`. Unlike range, it a) returns an `np.array` (not an iterator!) and b) allows a fractional step.
```
np.arange(1, 3, 0.5)
```
### Random numbers
Generate a normal distribution:
```
r_arr = np.random.normal(0, 1, 100)
np.mean(r_arr), np.std(r_arr)
```
Let's plot the distribution of the means of 1000 normal samples (`axis=0` works along rows, `axis=1` along columns):
```
r_arr = np.random.normal(0, 1, (100, 1000))
means = np.mean(r_arr, axis=0)
plt.hist(means, bins=20, alpha=0.8, normed=True)
sns.kdeplot(means, color='red')
plt.title('Distribution of means\nStdErr={:.3f}'.format(np.std(means)))
plt.xlabel('Mean'); plt.ylabel('Density');
```
The random module can also be used to draw a sample from an existing array:
```
arr = list(range(10))
np.random.choice(arr, 5, )
```
### Array indexing
numpy.ndarray has an advanced indexing system. The index can be a number, an array of numbers, or a boolean `np.ndarray` of matching shape.
```
arr = np.random.uniform(size=10)
arr[1:5]
arr[[1, -2, 6, 1]]
arr[np.array([True, False, False, False, False, True, False, False, False, True])]
```
More realistically:
```
arr[arr < 0.5]
arr[(arr < 0.5) & (arr > 0.4)]
```
Every such slice also works for assignment:
```
arr[1:7] = -1
arr
```
### Matrices
Be careful: multiplication of two-dimensional arrays is element-wise.
```
e = np.eye(5, dtype=int)
arr = np.full((5, 5), 10, dtype=int)
print('E:\n{}\nArray:\n{}\nProduct:\n{}'.format(e, arr, e*arr))
```
For matrix multiplication, use the `dot` function:
```
e.dot(arr)
```
However, there is a special class, `np.matrix`, in which all operations are redefined to follow matrix arithmetic:
```
m_e = np.matrix(e)
m_arr = np.matrix(arr)
m_e * m_arr
```
Both matrices and arrays are transposed via the `T` attribute:
```
arr = e + [1, 2, 3, 4, 5]
print(arr)
print()
print(arr.T)
```
## Pandas - data analysis in Python
As a rule you are not working with bare matrices; your data has some real-world meaning that you want to keep attached to it.
```
import pandas as pd
from pandas import Series, DataFrame
```
### Series
Pandas provides the `Series` class, which combines the functionality of Python's `dict` with a one-dimensional numpy array:
```
mouse_organs_weights = Series({'Liver': 10.5, 'Brain': 7.4, 'Legs': 3.1, 'Tail': 3.5})
print(mouse_organs_weights, '\n')
print('Brain: ', mouse_organs_weights['Brain'], '\n')
print(mouse_organs_weights.index)
```
Let's sort it:
```
mouse_organs_weights.sort_values(inplace=True)
print(mouse_organs_weights, '\n')
print(mouse_organs_weights.index)
mouse_organs_weights.plot(kind='bar', color='#0000AA', alpha=0.7)
plt.ylabel('Organ weights'); plt.xlabel('Organs'); plt.title('Mouse');
```
A second mouse has appeared!
```
mouse_organs_weights2 = Series((8, 11.4, 2.1, 2.5), ('Liver', 'Brain', 'Legs', 'Tail'))
print(mouse_organs_weights2, '\n')
print(mouse_organs_weights2.index)
```
Even though the two arrays are ordered differently, all joint operations align them by the text index rather than by numeric position:
```
mouse_organs_weights - mouse_organs_weights2
mouse_organs_weights * mouse_organs_weights2
```
However, for the plots to display correctly we need to reindex them:
```
mouse_organs_weights.plot(label='Mouse 1')
mouse_organs_weights2[mouse_organs_weights.index].plot(label='Mouse 2')
(mouse_organs_weights - mouse_organs_weights2)[mouse_organs_weights.index].abs().plot(label='Absolute difference')
plt.ylim(0, max(mouse_organs_weights.max(), mouse_organs_weights2.max()) + 1)
plt.ylabel('Organ weight'); plt.xlabel('Organs'); plt.title('Mouse')
plt.legend(loc='upper left')
```
### DataFrame
Now we have a lot of mice!
```
mice_organs_weights = [Series(
(np.random.uniform(5, 12), np.random.uniform(3, 15),
np.random.uniform(1.5, 3), np.random.uniform(2, 3.5)),
('Liver', 'Brain', 'Legs', 'Tail')) for _ in range(40)]
mice_organs_weights[:3]
```
For storing records of the same type, use a `DataFrame`:
```
mice_df = DataFrame(mice_organs_weights)
mice_df.head()
```
Unlike numpy, statistics are computed over columns by default:
```
print(mice_df.mean())
print(mice_df.std())
```
Let's give our mice more descriptive names:
```
mice_df.index = ['Mouse {}'.format(d) for d in mice_df.index]
mice_df.head()
```
We can slice a `DataFrame` much like an `np.array`:
```
mice_df[['Liver', 'Brain']].head()
```
For access by index, use `.iloc` or `.loc`:
```
mice_df.loc[['Mouse 1', 'Mouse 10']]
mice_df.iloc[[1, 10]]
mice_df.loc[['Mouse 1', 'Mouse 10'], ['Legs', 'Tail', 'Tail']]
```
When accessing a single column, a `DataFrame` supports '.' access:
```
mice_df.Legs[:3]
```
### Missing data
One of the goals behind pandas is sensible handling of missing values.
Suppose some of the mouse measurements are unknown.
```
mice_missed_df = mice_df[:5].copy()
mice_missed_df.iloc[[1, 3], 1] = np.nan
mice_missed_df.iloc[2, 2] = np.nan
mice_missed_df.iloc[4, 3] = np.nan
mice_missed_df
print(mice_missed_df.sum())
print()
print(mice_missed_df.sum(skipna=False))
```
### Functional style
We often need to apply a function to every element of an array. Indexed arrays (`Series` and `DataFrame`) have their own `map`:
```
print(mouse_organs_weights.map(np.exp))
print()
print(mouse_organs_weights.map(lambda x: x ** 2))
```
There is also the `apply` function, which, unlike `map`, tries to assemble the result into a more complex structure, for example a DataFrame.
```
mouse_organs_weights.apply(lambda v: Series([v, 2*v]))
mice_df.apply(lambda r: Series([r[1], r[2] + r[3]]), axis=1).head()
```
A DataFrame has only the `apply` function. Here it takes an `axis` argument that says whether the function is applied over rows or over columns.
```
(mice_df.apply(np.sum, 1) == mice_df.sum(axis=1)).all()
```
The same approach can be used to build complex boolean indexes:
```
str_df = DataFrame([['TTAGGC', 'TTACCC'], ['TTCGGC', 'TTCCGC'], ['GGACGGC', 'TGGC'], ['GGG', 'CGC']],
columns=['Gene 1', 'Gene 2'])
str_df.index = list(map('Human {}'.format, range(str_df.shape[0])))
str_df
index = str_df.apply(lambda genes: genes[0][0] == 'T' or genes[1][0] == 'C', axis=1)
index
str_df[index]
```
# Advanced Jupyter
## Debug in Jupyter
```
def recursive_error(rec_depth = 1):
if rec_depth == 0:
raise Exception("NOOOOOO")
recursive_error(rec_depth - 1)
recursive_error()
%debug
%debug recursive_error(1)
from IPython.core.debugger import set_trace
def longer_function():
va = 10
va += 1
vb = va
vc = 0
# set_trace()
print(vb / vc)
return vc + 1
longer_function()
```
## Integration with other languages
```
%%bash
echo 123
ls -la ./Introduction.ipynb
%load_ext rpy2.ipython
arr_size = 1000
%%R -i arr_size -o df
library(ggplot2)
df <- data.frame(x=rnorm(arr_size), y=rnorm(arr_size))
theme_set(theme_bw())
(gg <- ggplot(df) + geom_point(aes(x=x, y=y)))
df.head()
```
Line magics for the corresponding languages:
* `R`
* `bash`
* `javascript` или `js`
* `latex`
* `markdown`
* `perl`
* `python`
* `python2`
* `python3`
* `ruby`
* `sh`
## Git integration
A plain .ipynb file looks fairly scary:
```
!less ./Introduction.ipynb | head -n 50
```
To make this play nicely with git there are two options:
1. Strip all output information before committing ([description](https://gist.github.com/pbugnion/ea2797393033b54674af)); see the cell below for one way to do this.
2. Use [ipymd](https://github.com/rossant/ipymd). [Example](IpymdNotebook.md).
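If option 1 is enough for you, reasonably recent versions of nbconvert can strip all outputs in place before a commit (flag availability depends on your nbconvert version):
```
!jupyter nbconvert --clear-output --inplace ./Introduction.ipynb
```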
## Jupyter configuration
[Jupyter notebook extensions](https://github.com/ipython-contrib/jupyter_contrib_nbextensions):

Default imports and functions:
```
profile_dir = !ipython locate profile default
scripts_dir = profile_dir[0] + '/startup/'
print(scripts_dir)
!ls -la $scripts_dir
!cat $scripts_dir/00-imports.py
```
Custom CSS styles:
```
config_dir = !jupyter --config-dir
css_path = config_dir[0] + '/custom/custom.css'
css_path
!cat $css_path
```
[Matplotlib configuration](https://matplotlib.org/users/customizing.html):
```
import matplotlib
matplotlibrc_file = matplotlib.matplotlib_fname()
matplotlibrc_file
!cat $matplotlibrc_file | egrep "#[a-z]" | head -n 30
```
## Parallel execution
### Background scripts
```
%%python --bg --out p_out
import time
for i in range(10):
time.sleep(1)
print(i)
```
The `read` call will block until the script finishes:
```
p_out = p_out.read()
print(p_out.decode())
```
The script runs in a separate process and has its own scope:
```
a = 1000
%%python --bg --out p_out --err p_err
for i in range(100):
print(a)
p_out, p_err = p_out.read(), p_err.read()
print(p_out.decode())
print(p_err.decode())
```
### Background jobs
```
from IPython.lib import backgroundjobs as bg
import sys
import time
def sleepfunc(interval=2, *a, **kw):
time.sleep(interval)
return dict(interval=interval, args=a, kwargs=kw)
def diefunc(interval=2, *a, **kw):
time.sleep(interval)
raise Exception("Dead job with interval {}".format(interval))
def printfunc(interval=1, reps=5):
for n in range(reps):
time.sleep(interval)
print('In the background... {}'.format(n))
sys.stdout.flush()
print('All done!')
sys.stdout.flush()
jobs = bg.BackgroundJobManager()
# Start a few jobs, the first one will have ID # 0
jobs.new(sleepfunc, 4)
jobs.new(sleepfunc, kw={'reps':2})
jobs.new('printfunc(1,3)')
jobs.status()
jobs[0].result
```
We can also track errors in such jobs:
```
diejob1 = jobs.new(diefunc, 1)
diejob2 = jobs.new(diefunc, 2)
print("Status of diejob1: {}".format(diejob1.status))
diejob1.traceback()
jobs.flush()
j = jobs.new(sleepfunc, 2)
j.join()
```
## Best practices
A detailed discussion of best practices can be found in the video series [Reproducible Data Analysis in Jupyter](https://www.youtube.com/watch?v=_ZEWDGpM-vM&list=PLYCpMb24GpOC704uO9svUrihl-HY1tTJJ) by the author of the Python Data Science Handbook.
[Example](SpeedDating.ipynb).
A short summary:
1. After obtaining important results you **must** run "Restart & Run All" and then commit the file to git. It is also a good idea to describe all the steps taken in markdown before committing.
2. Download the data directly from the Jupyter notebook: this makes the analysis more reproducible.
3. Move every function you write out of the notebook into `.py` files; the code can then be packaged as a Python package.
4. Write unit tests for that package, for example with `pytest` (see the sketch after this list).
5. To share results, notebooks can be converted to HTML / PDF with the [`nbconvert`](https://nbconvert.readthedocs.io/en/latest/) utility. The output format is fine-tuned with [Jinja templates](http://nbconvert.readthedocs.io/en/latest/customizing.html). In particular, to hide the code you can use the `Hide input all` Jupyter extension together with
`jupyter nbconvert --execute --template=nbextensions --to=html Pipeline.ipynb`
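A minimal sketch of point 4, with a hypothetical module layout (the function here stands in for code extracted from a notebook):
```
# test_utils.py -- run with `pytest`
def mult_list(lst, n):
    # in a real project this would be imported from your package, e.g. mypkg/utils.py
    return lst * n

def test_mult_list_repeats_the_list():
    assert mult_list([1, 2, 3], 2) == [1, 2, 3, 1, 2, 3]
```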
```
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
import pymongo
from pymongo import MongoClient
# Connection to mongo
client = MongoClient('mongodb+srv://<user>:<password>@cluster0.l3pqt.mongodb.net/MSA?retryWrites=true&w=majority')
# Select database
db = client['MSA']
# see list of collections
client.MSA.list_collection_names()
# Select the collection with needed data
unemployment = db.arima_unem_pred_score
dfu = pd.DataFrame(list(unemployment.find()))
print(dfu.dtypes)
dfu
# drop unnecessary columns
dfu.drop(columns=['_id'], inplace=True)
dfu
# Select the collection with needed data
employment = db.arima_emp_pred_score
dfe = pd.DataFrame(list(employment.find()))
print(dfe.dtypes)
dfe
# drop unnecessary columns
dfe.drop(columns=['_id'], inplace=True)
dfe
# Select the collection with needed data
gdp = db.arima_GDP_pred_score
dfg = pd.DataFrame(list(gdp.find()))
print(dfg.dtypes)
dfg
# drop unnecessary columns
dfg.drop(columns=['_id'], inplace=True)
dfg
# Select the collection with needed data
population = db.arima_pop_pred_score
dfp = pd.DataFrame(list(population.find()))
print(dfp.dtypes)
dfp
# drop unnecessary columns
dfp.drop(columns=['_id'], inplace=True)
dfp
df_all_features = (dfp.merge(dfu, on='CBSA')
.merge(dfe, on='CBSA')
.merge(dfg, on='CBSA'))
df_all_features
# reorder column names
df_all_features = df_all_features[['CBSA',
'MSA',
'2024_Pop_ROC',
'2024_Unem_ROC',
'2024_Emp_ROC',
'2024_GDP_ROC',
'Pop_Score',
'Unem_Score',
'Emp_Score',
'GDP_Score']]
df_all_features
df_rank_total = df_all_features[['MSA',
'Pop_Score',
'Unem_Score',
'Emp_Score',
'GDP_Score']].copy()
df_rank_total
df_rank_total['Total_Score'] = df_rank_total.sum(axis=1)
df_rank_total
df_all_features['Total_Score'] = df_rank_total['Total_Score']
df_all_features
#df_all_features.to_csv('arima_2024_ROC_score_total.csv', index=False)
# create new collection for df_all_features
arima_2024_ROC_score_total = db.arima_2024_ROC_score_total
# turn dataframe into records so it can be stored in mongoDB
df_dict = df_all_features.to_dict(orient='records')
#arima_2024_ROC_score_total.insert_many(df_dict)
```
```
%%javascript
MathJax.Hub.Config({
TeX: { equationNumbers: { autoNumber: "AMS" } }
});
MathJax.Hub.Queue(
["resetEquationNumbers", MathJax.InputJax.TeX],
["PreProcess", MathJax.Hub],
["Reprocess", MathJax.Hub]
);
```
# Greek Letters
| Name | lower case | lower LaTeX | upper case | upper LaTeX |
|:--------|:----------:|:------------|:----------:|:------------|
| alpha | $\alpha$ | `\alpha` | $A$ | `A` |
| beta | $\beta$ | `\beta` | $B$ | `B` |
| gamma | $\gamma$ | `\gamma` | $\Gamma$ | `\Gamma` |
| delta | $\delta$ | `\delta` | $\Delta$ | `\Delta` |
| epsilon | $\epsilon$ | `\epsilon` | $E$ | `E` |
| zeta | $\zeta$ | `\zeta` | $Z$ | `Z` |
| eta | $\eta$ | `\eta` | $H$ | `H` |
| theta | $\theta$ | `\theta` | $\Theta$ | `\Theta` |
| iota | $\iota$ | `\iota` | $I$ | `I` |
| kappa | $\kappa$ | `\kappa` | $K$ | `K` |
| lambda | $\lambda$ | `\lambda` | $\Lambda$ | `\Lambda` |
| mu | $\mu$ | `\mu` | $M$ | `M` |
| nu | $\nu$ | `\nu` | $N$ | `N` |
| xi | $\xi$ | `\xi` | $\Xi$ | `\Xi` |
| omicron | $o$ | `o` | $O$ | `O` |
| pi      | $\pi$      | `\pi`       | $\Pi$      | `\Pi`       |
| rho | $\rho$ | `\rho` | $P$ | `P` |
| sigma | $\sigma$ | `\sigma` | $\Sigma$ | `\Sigma` |
| tau | $\tau$ | `\tau` | $T$ | `T` |
| upsilon | $\upsilon$ | `\upsilon` | $\Upsilon$ | `\Upsilon` |
| phi | $\phi$ | `\phi` | $\Phi$ | `\Phi` |
| chi | $\chi$ | `\chi` | $X$ | `X` |
| psi | $\psi$ | `\psi` | $\Psi$ | `\Psi` |
| omega | $\omega$ | `\omega` | $\Omega$ | `\Omega` |
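Several lower-case letters also have variant forms, written with the standard `\var...` macros (`\varepsilon`, `\vartheta`, `\varpi`, `\varrho`, `\varsigma`, `\varphi`):
$$
\varepsilon \quad \vartheta \quad \varpi \quad \varrho \quad \varsigma \quad \varphi
$$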
# Limit
$$
f'(x) = \lim_{h \rightarrow 0} \frac{f(x+h) - f(x)}{h}
$$
$$
\lim_{x \rightarrow \infty} (1 + \frac{1}{x})^x = e
$$
$$
\lim_{n=1, 2, \ldots} a_n
$$
# Delta and Differentiation
$$
\lim_{\Delta x \rightarrow 0} \frac{\Delta y}{\Delta x} = \frac{dy}{dx}
$$
# Partial Derivatives
```
$$
\frac{\partial z}{\partial x} = \lim_{\Delta x \rightarrow 0} \frac{f(x+\Delta x, y) - f(x,y)}{\Delta x}
$$
```
$$
\frac{\partial z}{\partial x} = \lim_{\Delta x \rightarrow 0} \frac{f(x+\Delta x, y) - f(x,y)}{\Delta x}
$$
```
%%latex
\begin{align}
\label{partial}
\frac{\partial z}{\partial x} = \lim_{\Delta x \rightarrow 0} \frac{f(x+\Delta x, y) - f(x,y)}{\Delta x}
\end{align}
```
Equation ($\ref{partial}$) is the partial derivative.
# Loan Credit Risk Prediction
When a financial institution examines a loan request, it must assess the risk of default in order to decide whether to grant the loan and, if so, at what interest rate.
This notebook takes advantage of the power of SQL Server and RevoScaleR (Microsoft R Server). The tables are all stored in SQL Server, and most computations are done by loading chunks of data into memory rather than the whole dataset.
It does the following:
* **Step 0: Packages, Compute Contexts and Database Creation**
* **Step 1: Pre-Processing and Cleaning**
* **Step 2: Feature Engineering**
* **Step 3: Training, Scoring and Evaluating a Logistic Regression Model**
* **Step 4: Operational Metrics Computation and Scores Transformation**
## Step 0: Packages, Compute Contexts and Database Creation
#### In this step, we set up the connection string for the SQL Server database we create, and we load the necessary packages.
```
# WARNING.
# We recommend not using Internet Explorer as it does not support plotting, and may crash your session.
# INPUT DATA SETS: point to the correct path.
Loan <- "C:/Solutions/Loans/Data/Loan.txt"
Borrower <- "C:/Solutions/Loans/Data/Borrower.txt"
# Load packages.
library(RevoScaleR)
library("MicrosoftML")
library(smbinning)
library(ROCR)
# Creating the connection string. Specify:
## Database name. If it already exists, tables will be overwritten. If not, it will be created.
## Server name. If connecting remotely to the DSVM, use the full DNS address with port number 1433 (which should be enabled).
## User ID and Password. Change them below if you modified the default values.
db_name <- "Loans"
server <- "localhost"
connection_string <- sprintf("Driver=SQL Server;Server=%s;Database=%s;TRUSTED_CONNECTION=True", server, db_name)
print("Connection String Written.")
# Create the database if not already existing.
## Open an Odbc connection with SQL Server master database only to create a new database with the rxExecuteSQLDDL function.
connection_string_master <- sprintf("Driver=SQL Server;Server=%s;Database=master;TRUSTED_CONNECTION=True", server)
outOdbcDS_master <- RxOdbcData(table = "Default_Master", connectionString = connection_string_master)
rxOpen(outOdbcDS_master, "w")
## Create database if not already existing.
query <- sprintf( "if not exists(SELECT * FROM sys.databases WHERE name = '%s') CREATE DATABASE %s;", db_name, db_name)
rxExecuteSQLDDL(outOdbcDS_master, sSQLString = query)
## Close the Odbc connection to the master database.
rxClose(outOdbcDS_master)
print("Database created if not already existing.")
# Define Compute Contexts.
sql <- RxInSqlServer(connectionString = connection_string)
local <- RxLocalSeq()
# Open a connection with SQL Server to be able to write queries with the rxExecuteSQLDDL function in the new database.
outOdbcDS <- RxOdbcData(table = "Default", connectionString = connection_string)
rxOpen(outOdbcDS, "w")
```
#### The function below can be used to get the top n rows of a table stored on SQL Server.
#### You can execute this cell throughout your progress by removing the comment "#", and inputting:
#### - the table name.
#### - the number of rows you want to display.
```
display_head <- function(table_name, n_rows){
table_sql <- RxSqlServerData(sqlQuery = sprintf("SELECT TOP(%s) * FROM %s", n_rows, table_name), connectionString = connection_string)
table <- rxImport(table_sql)
print(table)
}
# table_name <- "insert_table_name"
# n_rows <- 10
# display_head(table_name, n_rows)
```
## Step 1: Pre-Processing and Cleaning
In this step, we:
**1.** Upload the 2 raw data sets Loan and Borrower from disk to the SQL Server.
**2.** Join the 2 tables into one.
**3.** Perform a small pre-processing on a few variables.
**4.** Clean the merged data set: we replace NAs with the mode (categorical variables) or mean (continuous variables).
**Input:** 2 Data Tables: Loan and Borrower.
**Output:** Cleaned data set Merged_Cleaned.
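For orientation only, the same mean/mode imputation rule can be sketched in a few lines of pandas; this is an illustration and not part of the SQL Server pipeline below:
```
# Illustrative pandas version of the cleaning rule used below.
import pandas as pd

df = pd.DataFrame({"loanAmount": [1000.0, None, 3000.0],
                   "grade": ["A", None, "B"]})
num_cols = df.select_dtypes(include="number").columns
cat_cols = df.columns.difference(num_cols)
df[num_cols] = df[num_cols].fillna(df[num_cols].mean())          # numeric: mean
df[cat_cols] = df[cat_cols].fillna(df[cat_cols].mode().iloc[0])  # categorical: mode
print(df)
```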
```
# Set the compute context to Local.
rxSetComputeContext(local)
# Upload the data set to SQL.
## Specify the desired column types.
## When uploading to SQL, Character and Factor are converted to nvarchar(255), Integer to Integer and Numeric to Float.
column_types_loan <- c(loanId = "integer",
memberId = "integer",
date = "character",
purpose = "character",
isJointApplication = "character",
loanAmount = "numeric",
term = "character",
interestRate = "numeric",
monthlyPayment = "numeric",
grade = "character",
loanStatus = "character")
column_types_borrower <- c(memberId = "integer",
residentialState = "character",
yearsEmployment = "character",
homeOwnership = "character",
annualIncome = "numeric",
incomeVerified = "character",
dtiRatio = "numeric",
lengthCreditHistory = "integer",
numTotalCreditLines = "integer",
numOpenCreditLines = "integer",
numOpenCreditLines1Year = "integer",
revolvingBalance = "numeric",
revolvingUtilizationRate = "numeric",
numDerogatoryRec = "integer",
numDelinquency2Years = "integer",
numChargeoff1year = "integer",
numInquiries6Mon = "integer")
## Point to the input data sets while specifying the classes.
Loan_text <- RxTextData(file = Loan, colClasses = column_types_loan)
Borrower_text <- RxTextData(file = Borrower, colClasses = column_types_borrower)
## Upload the data to SQL tables.
Loan_sql <- RxSqlServerData(table = "Loan", connectionString = connection_string)
Borrower_sql <- RxSqlServerData(table = "Borrower", connectionString = connection_string)
rxDataStep(inData = Loan_text, outFile = Loan_sql, overwrite = TRUE)
rxDataStep(inData = Borrower_text, outFile = Borrower_sql, overwrite = TRUE)
print("Data exported to SQL.")
# Set the compute context to SQL.
rxSetComputeContext(sql)
# Inner join of the raw tables Loan and Borrower.
rxExecuteSQLDDL(outOdbcDS, sSQLString = "DROP TABLE if exists Merged;")
rxExecuteSQLDDL(outOdbcDS, sSQLString =
"SELECT loanId, [date], purpose, isJointApplication, loanAmount, term, interestRate, monthlyPayment,
grade, loanStatus, Borrower.*
INTO Merged
FROM Loan JOIN Borrower
ON Loan.memberId = Borrower.memberId;")
print("Merging of the two tables completed.")
# Determine if Merged has missing values and compute statistics for use in Production.
## Use rxSummary function to get the names of the variables with missing values.
## Assumption: no NAs in the id variables (loan_id and member_id), target variable and date.
## For rxSummary to give correct info on characters, stringsAsFactors = T should be used.
Merged_sql <- RxSqlServerData(table = "Merged", connectionString = connection_string, stringsAsFactors = T)
col_names <- rxGetVarNames(Merged_sql)
var_names <- col_names[!col_names %in% c("loanId", "memberId", "loanStatus", "date")]
formula <- as.formula(paste("~", paste(var_names, collapse = "+")))
summary <- rxSummary(formula, Merged_sql, byTerm = TRUE)
## Get the variables types.
categorical_all <- unlist(lapply(summary$categorical, FUN = function(x){colnames(x)[1]}))
numeric_all <- setdiff(var_names, categorical_all)
## Get the variables names with missing values.
var_with_NA <- summary$sDataFrame[summary$sDataFrame$MissingObs > 0, 1]
categorical_NA <- intersect(categorical_all, var_with_NA)
numeric_NA <- intersect(numeric_all, var_with_NA)
## Compute the global means.
Summary_DF <- summary$sDataFrame
Numeric_Means <- Summary_DF[Summary_DF$Name %in% numeric_all, c("Name", "Mean")]
Numeric_Means$Mean <- round(Numeric_Means$Mean)
## Compute the global modes.
## Get the counts tables.
Summary_Counts <- summary$categorical
names(Summary_Counts) <- lapply(Summary_Counts, FUN = function(x){colnames(x)[1]})
## Compute for each count table the value with the highest count.
modes <- unlist(lapply(Summary_Counts, FUN = function(x){as.character(x[which.max(x[,2]),1])}), use.names = FALSE)
Categorical_Modes <- data.frame(Name = categorical_all, Mode = modes)
## Set the compute context to local to export the summary statistics to SQL.
## The schema of the Statistics table is adapted to the one created in the SQL code.
rxSetComputeContext('local')
Numeric_Means$Mode <- NA
Numeric_Means$type <- "float"
Categorical_Modes$Mean <- NA
Categorical_Modes$type <- "char"
Stats <- rbind(Numeric_Means, Categorical_Modes)[, c("Name", "type", "Mode", "Mean")]
colnames(Stats) <- c("variableName", "type", "mode", "mean")
## Save the statistics to SQL for Production use.
Stats_sql <- RxSqlServerData(table = "Stats", connectionString = connection_string)
rxDataStep(inData = Stats, outFile = Stats_sql, overwrite = TRUE)
## Set the compute context back to SQL.
rxSetComputeContext(sql)
# If no missing values, we move the data to a new table Merged_Cleaned.
if(length(var_with_NA) == 0){
print("No missing values: no treatment will be applied.")
rxExecuteSQLDDL(outOdbcDS, sSQLString = "DROP TABLE if exists Merged_Cleaned;")
rxExecuteSQLDDL(outOdbcDS, sSQLString = "SELECT * INTO Merged_Cleaned FROM Merged;")
missing <- 0
} else{
print("Variables containing missing values are:")
print(var_with_NA)
missing <- 1
print("Perform data cleaning in the next cell.")
}
# If applicable, NULL is replaced with the mode (categorical variables: integer or character) or mean (continuous variables).
if(missing == 1){
# Get the global means of the numeric variables with missing values.
numeric_NA_mean <- round(Stats[Stats$variableName %in% numeric_NA, "mean"])
# Get the global modes of the categorical variables with missing values.
categorical_NA_mode <- as.character(Stats[Stats$variableName %in% categorical_NA, "mode"])
# Function to replace missing values with mean or mode. It will be wrapped into rxDataStep.
Mean_Mode_Replace <- function(data) {
data <- data.frame(data, stringsAsFactors = FALSE)
# Replace numeric variables with the mean.
if(length(num_with_NA) > 0){
for(i in 1:length(num_with_NA)){
row_na <- which(is.na(data[, num_with_NA[i]]))
data[row_na, num_with_NA[i]] <- num_NA_mean[i]
}
}
# Replace categorical variables with the mode.
if(length(cat_with_NA) > 0){
for(i in 1:length(cat_with_NA)){
row_na <- which(is.na(data[, cat_with_NA[i]]))
data[row_na, cat_with_NA[i]] <- cat_NA_mode[i]
}
}
return(data)
}
# Point to the input table.
Merged_sql <- RxSqlServerData(table = "Merged", connectionString = connection_string)
# Point to the output (empty) table.
Merged_Cleaned_sql <- RxSqlServerData(table = "Merged_Cleaned", connectionString = connection_string)
# Perform the data cleaning with rxDataStep.
rxDataStep(inData = Merged_sql,
outFile = Merged_Cleaned_sql,
overwrite = TRUE,
transformFunc = Mean_Mode_Replace,
transformObjects = list(num_with_NA = numeric_NA , num_NA_mean = numeric_NA_mean,
cat_with_NA = categorical_NA, cat_NA_mode = categorical_NA_mode))
print("Data cleaned.")
}
```
## Step 2: Feature Engineering
In this step, we:
**1.** Create the label isBad based on the status of the loan.
**2.** Split the cleaned data set into a Training and a Testing set.
**3.** Bucketize all the numeric variables, based on Conditional Inference Trees, using the smbinning package on the Training set.
**Input:** Cleaned data set Merged_Cleaned.
**Output:** Data set with new features Merged_Features.
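As a point of reference, the bucketization below amounts to cutting each numeric variable at a fixed set of thresholds; a minimal Python illustration (not part of the pipeline):
```
# Illustration only: cut a numeric variable into buckets at fixed cutoffs.
import numpy as np
import pandas as pd

interest_rate = pd.Series([5.5, 9.2, 13.1, 16.4, 21.0])
cutoffs = [7.17, 10.84, 12.86, 14.47, 15.75, 18.05]
buckets = pd.cut(interest_rate, bins=[-np.inf] + cutoffs + [np.inf], labels=False) + 1
print(list(buckets))  # bucket 1 = (-inf, 7.17], ..., bucket 7 = (18.05, +inf)
```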
```
# Point to the input table.
Merged_Cleaned_sql <- RxSqlServerData(table = "Merged_Cleaned", connectionString = connection_string)
# Point to the Output SQL table:
Merged_Labeled_sql <- RxSqlServerData(table = "Merged_Labeled", connectionString = connection_string)
# Create the target variable, isBad, based on loanStatus.
rxDataStep(inData = Merged_Cleaned_sql ,
outFile = Merged_Labeled_sql,
overwrite = TRUE,
transforms = list(
isBad = ifelse(loanStatus %in% c("Current"), "0", "1")
))
print("Label isBad created.")
# Split the cleaned data set into a Training and a Testing set.
## Create the Hash_Id table containing loanId hashed to integers.
## The advantage of using a hashing function for splitting is to permit repeatability of the experiment.
rxExecuteSQLDDL(outOdbcDS, sSQLString = "DROP TABLE if exists Hash_Id;")
rxExecuteSQLDDL(outOdbcDS, sSQLString =
"SELECT loanId, ABS(CAST(CAST(HashBytes('MD5', CAST(loanId AS varchar(20))) AS VARBINARY(64)) AS BIGINT) % 100) AS hashCode
INTO Hash_Id
FROM Merged_Labeled ;")
# Point to the training set.
Train_sql <- RxSqlServerData(sqlQuery =
"SELECT *
FROM Merged_Labeled
WHERE loanId IN (SELECT loanId from Hash_Id WHERE hashCode <= 70)",
connectionString = connection_string)
print("Splitting completed.")
# Compute optimal bins for numeric variables using the smbinning package on the Training set.
# Using smbinning has some limitations, such as:
# - The variable should have more than 10 unique values.
# - If no significant splits are found, it does not output bins.
# For this reason, we manually specify default bins based on an analysis of the variable distributions or on smbinning applied to a larger data set.
# We then overwrite these defaults with the smbinning results whenever bins are returned.
bins <- list()
# Default cutoffs for bins:
# EXAMPLE: If the cutoffs are (c1, c2, c3),
## Bin 1 = ]- inf, c1], Bin 2 = ]c1, c2], Bin 3 = ]c2, c3], Bin 4 = ]c3, + inf]
## c1 and c3 are NOT the minimum and maximum found in the training set.
bins$loanAmount <- c(14953, 18951, 20852, 22122, 24709, 28004)
bins$interestRate <- c(7.17, 10.84, 12.86, 14.47, 15.75, 18.05)
bins$monthlyPayment <- c(382, 429, 495, 529, 580, 649, 708, 847)
bins$annualIncome <- c(49402, 50823, 52089, 52885, 53521, 54881, 55520, 57490)
bins$dtiRatio <- c(9.01, 13.42, 15.92, 18.50, 21.49, 22.82, 24.67)
bins$lengthCreditHistory <- c(8)
bins$numTotalCreditLines <- c(1, 2)
bins$numOpenCreditLines <- c(3, 5)
bins$numOpenCreditLines1Year <- c(3, 4, 5, 6, 7, 9)
bins$revolvingBalance <- c(11912, 12645, 13799, 14345, 14785, 15360, 15883, 16361, 17374, 18877)
bins$revolvingUtilizationRate <- c(49.88, 60.01, 74.25, 81.96)
bins$numDerogatoryRec <- c(0, 1)
bins$numDelinquency2Years <- c(0)
bins$numChargeoff1year <- c(0)
bins$numInquiries6Mon <- c(0)
# Import the training set to be able to apply smbinning.
Train_df <- rxImport(Train_sql)
# Set the type of the label to numeric.
Train_df$isBad <- as.numeric(as.character(Train_df$isBad))
# Function to compute smbinning on every variable.
compute_bins <- function(name, data){
library(smbinning)
output <- smbinning(data, y = "isBad", x = name, p = 0.05)
if (class(output) == "list"){ # case where the binning was performed and returned bins.
cuts <- output$cuts
return (cuts)
}
}
# We apply it in parallel across cores with rxExec and the compute context set to Local Parallel.
## 3 cores will be used here so the code can run on servers with smaller RAM.
## You can increase numCoresToUse below in order to speed up the execution if using a larger server.
## numCoresToUse = -1 will enable the use of the maximum number of cores.
rxOptions(numCoresToUse = 3) # use 3 cores.
rxSetComputeContext('localpar')
bins_smb <- rxExec(compute_bins, name = rxElemArg(names(bins)), data = Train_df)
names(bins_smb) <- names(bins)
# Fill bins with bins obtained in bins_smb with smbinning.
## We replace the default values in bins if and only if smbinning returned a non NULL result.
for(name in names(bins)){
if (!is.null(bins_smb[[name]])){
bins[[name]] <- bins_smb[[name]]
}
}
# Save the bins to SQL for use in Production Stage.
## Open an Odbc connection with SQL Server.
OdbcModel <- RxOdbcData(table = "Bins", connectionString = connection_string)
rxOpen(OdbcModel, "w")
## Drop the Bins table if it exists.
if(rxSqlServerTableExists(OdbcModel@table, OdbcModel@connectionString)) {
rxSqlServerDropTable(OdbcModel@table, OdbcModel@connectionString)
}
## Create an empty Bins table.
rxExecuteSQLDDL(OdbcModel,
sSQLString = paste(" CREATE TABLE [", OdbcModel@table, "] (",
" [id] varchar(200) not null, ",
" [value] varbinary(max), ",
" constraint unique_id unique (id))",
sep = "")
)
## Write the model to SQL.
rxWriteObject(OdbcModel, "Bin Info", bins)
## Close the Odbc connection used.
rxClose(OdbcModel)
# Set back the compute context to SQL.
rxSetComputeContext(sql)
print("Bins computed/defined.")
# Function to bucketize numeric variables. It will be wrapped into rxDataStep.
bucketize <- function(data) {
for(name in names(b)) {
name2 <- paste(name, "Bucket", sep = "")
data[[name2]] <- as.character(as.numeric(cut(data[[name]], c(-Inf, b[[name]], Inf))))
}
return(data)
}
# Perform feature engineering on the cleaned data set.
# Output:
Merged_Features_sql <- RxSqlServerData(table = "Merged_Features", connectionString = connection_string)
# Create buckets for various numeric variables with the function Bucketize.
rxDataStep(inData = Merged_Labeled_sql,
outFile = Merged_Features_sql,
overwrite = TRUE,
transformFunc = bucketize,
transformObjects = list(
b = bins))
print("Feature Engineering Completed.")
```
## Step 3: Training and Evaluating the Models
In this step we:
**1.** Train a logistic regression classification model on the training set and save it to SQL.
**2.** Score the logistic regression on the test set.
**3.** Evaluate the tested model.
**Input:** Data set Merged_Features.
**Output:** Logistic Regression Model, Predictions and Evaluation Metrics.
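For readers more familiar with Python, the same train / score / evaluate loop looks roughly like this with scikit-learn on synthetic data; this is an illustration only, the actual pipeline below uses rxLogit against SQL Server:
```
# Illustration only: train / score / evaluate on synthetic data with scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)
train = rng.random(500) <= 0.7                      # cf. the 70/30 hash-based split
model = LogisticRegression().fit(X[train], y[train])
scores = model.predict_proba(X[~train])[:, 1]
print("AUC:", round(roc_auc_score(y[~train], scores), 3))
```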
```
# Convert strings to factors.
Merged_Features_sql <- RxSqlServerData(table = "Merged_Features", connectionString = connection_string, stringsAsFactors = TRUE)
## Get the column information.
column_info <- rxCreateColInfo(Merged_Features_sql, sortLevels = TRUE)
## Set the compute context to local to export the column_info list to SQl.
rxSetComputeContext('local')
## Open an Odbc connection with SQL Server.
OdbcModel <- RxOdbcData(table = "Column_Info", connectionString = connection_string)
rxOpen(OdbcModel, "w")
## Drop the Column Info table if it exists.
if(rxSqlServerTableExists(OdbcModel@table, OdbcModel@connectionString)) {
rxSqlServerDropTable(OdbcModel@table, OdbcModel@connectionString)
}
## Create an empty Column_Info table.
rxExecuteSQLDDL(OdbcModel,
sSQLString = paste(" CREATE TABLE [", OdbcModel@table, "] (",
" [id] varchar(200) not null, ",
" [value] varbinary(max), ",
" constraint unique_id2 unique (id))",
sep = "")
)
## Write the model to SQL.
rxWriteObject(OdbcModel, "Column Info", column_info)
## Close the Odbc connection used.
rxClose(OdbcModel)
# Set the compute context back to SQL.
rxSetComputeContext(sql)
# Point to the training set. It will be created on the fly when training models.
Train_sql <- RxSqlServerData(sqlQuery =
"SELECT *
FROM Merged_Features
WHERE loanId IN (SELECT loanId from Hash_Id WHERE hashCode <= 70)",
connectionString = connection_string, colInfo = column_info)
# Point to the testing set. It will be created on the fly when testing models.
Test_sql <- RxSqlServerData(sqlQuery =
"SELECT *
FROM Merged_Features
WHERE loanId NOT IN (SELECT loanId from Hash_Id WHERE hashCode <= 70)",
connectionString = connection_string, colInfo = column_info)
print("Column information received.")
# Write the formula after removing variables not used in the modeling.
## We remove the id variables, date, residentialState, term, and all the numeric variables that were later bucketed.
variables_all <- rxGetVarNames(Train_sql)
variables_to_remove <- c("loanId", "memberId", "loanStatus", "date", "residentialState", "term",
"loanAmount", "interestRate", "monthlyPayment", "annualIncome", "dtiRatio", "lengthCreditHistory",
"numTotalCreditLines", "numOpenCreditLines", "numOpenCreditLines1Year", "revolvingBalance",
"revolvingUtilizationRate", "numDerogatoryRec", "numDelinquency2Years", "numChargeoff1year",
"numInquiries6Mon")
training_variables <- variables_all[!(variables_all %in% c("isBad", variables_to_remove))]
formula <- as.formula(paste("isBad ~", paste(training_variables, collapse = "+")))
print("Formula written.")
# Train the logistic regression model.
logistic_model <- rxLogit(formula = formula,
data = Train_sql,
reportProgress = 0,
initialValues = NA)
## rxLogisticRegression function from the MicrosoftML library can be used instead.
## The regularization weights (l1Weight and l2Weight) can be modified for further optimization.
## The included selectFeatures function can select a certain number of optimal features based on a specified method.
## the number of variables to select and the method can be further optimized.
#library('MicrosoftML')
#logistic_model <- rxLogisticRegression(formula = formula,
# data = Train_sql,
# type = "binary",
# l1Weight = 0.7,
# l2Weight = 0.7,
# mlTransforms = list(selectFeatures(formula, mode = mutualInformation(numFeaturesToKeep = 10))))
print("Training Logistic Regression done.")
# Get the coefficients of the logistic regression formula.
## NA means the variable has been dropped while building the model.
coeff <- logistic_model$coefficients
Logistic_Coeff <- data.frame(variable = names(coeff), coefficient = coeff, row.names = NULL)
## Order in decreasing order of absolute value of coefficients.
Logistic_Coeff <- Logistic_Coeff[order(abs(Logistic_Coeff$coefficient), decreasing = TRUE),]
# Write the table to SQL. Compute Context should be set to local.
rxSetComputeContext(local)
Logistic_Coeff_sql <- RxSqlServerData(table = "Logistic_Coeff", connectionString = connection_string)
rxDataStep(inData = Logistic_Coeff, outFile = Logistic_Coeff_sql, overwrite = TRUE)
print("Logistic Regression Coefficients written to SQL.")
# Save the fitted model to SQL Server.
## Open an Odbc connection with SQL Server.
OdbcModel <- RxOdbcData(table = "Model", connectionString = connection_string)
rxOpen(OdbcModel, "w")
## Drop the Model table if it exists.
if(rxSqlServerTableExists(OdbcModel@table, OdbcModel@connectionString)) {
rxSqlServerDropTable(OdbcModel@table, OdbcModel@connectionString)
}
## Create an empty Model table.
rxExecuteSQLDDL(OdbcModel,
sSQLString = paste(" CREATE TABLE [", OdbcModel@table, "] (",
" [id] varchar(200) not null, ",
" [value] varbinary(max), ",
" constraint unique_id3 unique (id))",
sep = "")
)
## Write the model to SQL.
rxWriteObject(OdbcModel, "Logistic Regression", logistic_model)
## Close the Odbc connection.
rxClose(OdbcModel)
# Set the compute context back to SQL.
rxSetComputeContext(sql)
print("Model uploaded to SQL.")
# Logistic Regression Scoring
# Make Predictions and save them to SQL.
Predictions_Logistic_sql <- RxSqlServerData(table = "Predictions_Logistic", connectionString = connection_string)
rxPredict(logistic_model,
data = Test_sql,
outData = Predictions_Logistic_sql,
overwrite = TRUE,
type = "response", # If you used rxLogisticRegression, this argument should be removed.
extraVarsToWrite = c("isBad", "loanId"))
print("Scoring done.")
# Evaluation.
## Import the prediction table and convert isBad to numeric for correct evaluation.
Predictions <- rxImport(Predictions_Logistic_sql)
Predictions$isBad <- as.numeric(as.character(Predictions$isBad))
## Change the names of the variables in the predictions table if you used rxLogisticRegression.
## Predictions <- Predictions[, c(1, 2, 5)]
## colnames(Predictions) <- c("isBad", "loanId", "isBad_Pred")
## Set the Compute Context to local for evaluation.
rxSetComputeContext(local)
print("Predictions imported.")
## KS PLOT AND STATISTIC.
# Split the data according to the observed value and get the cumulative distribution of predicted probabilities.
Predictions0 <- Predictions[Predictions$isBad==0,]$isBad_Pred
Predictions1 <- Predictions[Predictions$isBad==1,]$isBad_Pred
cdf0 <- ecdf(Predictions0)
cdf1 <- ecdf(Predictions1)
# Compute the KS statistic and the corresponding points on the KS plot.
## Create a sequence of predicted probabilities in its range of values.
minMax <- seq(min(Predictions0, Predictions1), max(Predictions0, Predictions1), length.out=length(Predictions0))
## Compute KS, ie. the largest distance between the two cumulative distributions.
KS <- max(abs(cdf0(minMax) - cdf1(minMax)))
print(sprintf("KS = %s", KS))
## Find a predicted probability where the cumulative distributions have the biggest difference.
x0 <- minMax[which(abs(cdf0(minMax) - cdf1(minMax)) == KS )][1]
## Get the corresponding points on the plot.
y0 <- cdf0(x0)
y1 <- cdf1(x0)
# Plot the two cumulative distributions with the line between points of greatest distance.
plot(cdf0, verticals = TRUE, do.points = FALSE, col = "blue", main = sprintf("KS Plot; KS = %s", round(KS, digits = 3)), ylab = "Cumulative Distribution Functions", xlab = "Predicted Probabilities")
plot(cdf1, verticals = TRUE, do.points = FALSE, col = "green", add = TRUE)
legend(0.3, 0.8, c("isBad == 0", "isBad == 1"), lty = c(1, 1), lwd = c(2.5, 2.5), col = c("blue", "green"))
points(c(x0, x0), c(y0, y1), pch = 16, col = "red")
segments(x0, y0, x0, y1, col = "red", lty = "dotted")
## CONFUSION MATRIX AND VARIOUS METRICS.
# The cumulative distributions of predicted probabilities given observed values are the farthest apart for a score equal to x0.
# We can then use x0 as a decision threshold for example.
# Note that the choice of a decision threshold can be further optimized.
# Using the x0 point as a threshold, we compute the binary predictions to get the confusion matrix.
Predictions$isBad_Pred_Binary <- ifelse(Predictions$isBad_Pred < x0, 0, 1)
confusion <- table(Predictions$isBad, Predictions$isBad_Pred_Binary, dnn = c("Observed", "Predicted"))[c("0", "1"), c("0", "1")]
print(confusion)
tp <- confusion[1, 1]
fn <- confusion[1, 2]
fp <- confusion[2, 1]
tn <- confusion[2, 2]
accuracy <- (tp + tn) / (tp + fn + fp + tn)
precision <- tp / (tp + fp)
recall <- tp / (tp + fn)
fscore <- 2 * (precision * recall) / (precision + recall)
# Print the computed metrics.
metrics <- c("Accuracy" = accuracy,
"Precision" = precision,
"Recall" = recall,
"F-Score" = fscore,
"Score Threshold" = x0)
print(metrics)
## ROC PLOT AND AUC.
ROC <- rxRoc(actualVarName = "isBad", predVarNames = "isBad_Pred", data = Predictions, numBreaks = 1000)
AUC <- rxAuc(ROC)
print(sprintf("AUC = %s", AUC))
plot(ROC, title = "ROC Curve for Logistic Regression")
## LIFT CHART.
pred <- prediction(predictions = Predictions$isBad_Pred, labels = Predictions$isBad, label.ordering = c("0", "1"))
perf <- performance(pred, measure = "lift", x.measure = "rpp")
plot(perf, main = c("Lift Chart"))
abline(h = 1.0, col = "purple")
```
## Step 4: Operational Metrics Computation and Scores Transformation
In this step, we:
**1.** Compute Operational Metrics: expected bad rate for various classification decision thresholds.
**2.** Apply a score transformation based on operational metrics.
**Input:** Predictions table.
**Output:** Operational Metrics and Transformed Scores.
### Operational metrics are computed in the following way:
**1.** Apply a sigmoid function to the output scores of the logistic regression, in order to spread them in [0,1].
**2.** Compute bins for the scores, based on quantiles.
**3.** Take each lower bound of each bin as a decision threshold for default loan classification, and compute the rate of bad loans among loans with a score higher than the threshold.
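The three steps above are language-agnostic; a minimal NumPy sketch on synthetic scores (illustration only, the cells below do this on the actual Predictions table):
```
# Illustration only: sigmoid spread, percentile cutoffs, bad rate above each cutoff.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.beta(2, 8, size=1000)                # stand-in for predicted default probabilities
is_bad = (rng.random(1000) < scores).astype(int)  # synthetic labels

spread = 1 / (1 + np.exp(-20 * (scores - 1.2 * scores.mean())))  # step 1: sigmoid spread
cutoffs = np.quantile(spread, np.arange(0, 1, 0.01))             # step 2: 100 percentile cutoffs
bad_rates = [is_bad[spread >= c].mean() for c in cutoffs]        # step 3: bad rate per threshold
print(round(bad_rates[0], 3), round(bad_rates[90], 3))
```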
```
# Space out the scores (predicted probability of default) for interpretability with a sigmoid.
## Define the sigmoid: it is centered at 1.2*mean score to ensure a good spread of scores.
dev_test_avg_score <- mean(Predictions$isBad_Pred)
sigmoid <- function(x){
return(1/(1 + exp(-20*(x-1.2*dev_test_avg_score))))
}
## Apply the function.
Predictions$transformedScore <- sigmoid(Predictions$isBad_Pred)
## Changes can be observed with the histograms and summary statistics.
#summary(Predictions$isBad_Pred)
#hist(Predictions$isBad_Pred)
#summary(Predictions$transformedScore)
#hist(Predictions$transformedScore)
## Save the average score on the test set for the Production stage.
Scores_Average <- data.frame(avg = dev_test_avg_score)
Scores_Average_sql <- RxSqlServerData(table = "Scores_Average", connectionString = connection_string)
rxDataStep(inData = Scores_Average, outFile = Scores_Average_sql, overwrite = TRUE)
print("Scores Spaced out in [0,1]")
# Compute operational metrics.
## Bin the scores based on quantiles.
bins <- rxQuantile("transformedScore", Predictions, probs = c(seq(0, 0.99, 0.01)))
bins[["0%"]] <- 0
## We consider 100 decision thresholds: the lower bound of each bin.
## Compute the expected rates of bad loans for loans with scores higher than each decision threshold.
badrate <- rep(0, length(bins))
for(i in 1:length(bins))
{
selected <- Predictions$isBad[Predictions$transformedScore >= bins[i]]
badrate[i] <- sum(selected)/length(selected)
}
## Save the data points to a data frame and load it to SQL.
Operational_Metrics <- data.frame(scorePercentile = names(bins), scoreCutoff = bins, badRate = badrate, row.names = NULL)
Operational_Metrics_sql <- RxSqlServerData(table = "Operational_Metrics", connectionString = connection_string)
rxDataStep(inData = Operational_Metrics, outFile = Operational_Metrics_sql, overwrite = TRUE)
print("Operational Metrics computed.")
# Apply the score transformation.
## Deal with the bottom 1-99 percentiles.
for (i in seq(1, (nrow(Operational_Metrics) - 1))){
rows <- which(Predictions$transformedScore <= Operational_Metrics$scoreCutoff[i + 1] &
Predictions$transformedScore > Operational_Metrics$scoreCutoff[i])
Predictions[rows, c("scorePercentile")] <- as.character(Operational_Metrics$scorePercentile[i + 1])
Predictions[rows, c("badRate")] <- Operational_Metrics$badRate[i]
Predictions[rows, c("scoreCutoff")] <- Operational_Metrics$scoreCutoff[i]
}
## Deal with the top 1% higher scores (last bucket).
rows <- which(Predictions$transformedScore > Operational_Metrics$scoreCutoff[100])
Predictions[rows, c("scorePercentile")] <- "Top 1%"
Predictions[rows, c("scoreCutoff")] <- Operational_Metrics$scoreCutoff[100]
Predictions[rows, c("badRate")] <- Operational_Metrics$badRate[100]
## Save the transformed scores to SQL.
Scores_sql <- RxSqlServerData(table = "Scores", connectionString = connection_string)
rxDataStep(inData = Predictions[, c("loanId", "transformedScore", "scorePercentile", "scoreCutoff", "badRate", "isBad")],
outFile = Scores_sql,
overwrite = TRUE)
print("Scores transformed.")
# Plot the rates of bad loans for various thresholds obtained through binning.
plot(Operational_Metrics$badRate, main = c("Bad Loans Rates Among those with Scores Higher than Decision Thresholds"), xlab = "Default Score Percentiles", ylab = "Expected Rate of Bad Loans")
## EXAMPLE:
## If the score cutoff at the 91st score percentile is 0.9834 and the corresponding bad rate is 0.6449,
## this means that if 0.9834 is used as a threshold to classify loans as bad, we would have a bad rate of 64.49%.
## This bad rate is equal to the number of observed bad loans over the total number of loans with a score greater than the threshold.
# Close the Odbc connection to the Loans database.
rxClose(outOdbcDS)
```
|
github_jupyter
|
# WARNING.
# We recommend not using Internet Explorer as it does not support plotting, and may crash your session.
# INPUT DATA SETS: point to the correct path.
Loan <- "C:/Solutions/Loans/Data/Loan.txt"
Borrower <- "C:/Solutions/Loans/Data/Borrower.txt"
# Load packages.
library(RevoScaleR)
library("MicrosoftML")
library(smbinning)
library(ROCR)
# Creating the connection string. Specify:
## Database name. If it already exists, tables will be overwritten. If not, it will be created.
## Server name. If conecting remotely to the DSVM, the full DNS address should be used with the port number 1433 (which should be enabled)
## User ID and Password. Change them below if you modified the default values.
db_name <- "Loans"
server <- "localhost"
connection_string <- sprintf("Driver=SQL Server;Server=%s;Database=%s;TRUSTED_CONNECTION=True", server, db_name)
print("Connection String Written.")
# Create the database if not already existing.
## Open an Odbc connection with SQL Server master database only to create a new database with the rxExecuteSQLDDL function.
connection_string_master <- sprintf("Driver=SQL Server;Server=%s;Database=master;TRUSTED_CONNECTION=True", server)
outOdbcDS_master <- RxOdbcData(table = "Default_Master", connectionString = connection_string_master)
rxOpen(outOdbcDS_master, "w")
## Create database if not already existing.
query <- sprintf( "if not exists(SELECT * FROM sys.databases WHERE name = '%s') CREATE DATABASE %s;", db_name, db_name)
rxExecuteSQLDDL(outOdbcDS_master, sSQLString = query)
## Close Obdc connection to master database.
rxClose(outOdbcDS_master)
print("Database created if not already existing.")
# Define Compute Contexts.
sql <- RxInSqlServer(connectionString = connection_string)
local <- RxLocalSeq()
# Open a connection with SQL Server to be able to write queries with the rxExecuteSQLDDL function in the new database.
outOdbcDS <- RxOdbcData(table = "Default", connectionString = connection_string)
rxOpen(outOdbcDS, "w")
display_head <- function(table_name, n_rows){
table_sql <- RxSqlServerData(sqlQuery = sprintf("SELECT TOP(%s) * FROM %s", n_rows, table_name), connectionString = connection_string)
table <- rxImport(table_sql)
print(table)
}
# table_name <- "insert_table_name"
# n_rows <- 10
# display_head(table_name, n_rows)
# Set the compute context to Local.
rxSetComputeContext(local)
# Upload the data set to SQL.
## Specify the desired column types.
## When uploading to SQL, Character and Factor are converted to nvarchar(255), Integer to Integer and Numeric to Float.
column_types_loan <- c(loanId = "integer",
memberId = "integer",
date = "character",
purpose = "character",
isJointApplication = "character",
loanAmount = "numeric",
term = "character",
interestRate = "numeric",
monthlyPayment = "numeric",
grade = "character",
loanStatus = "character")
column_types_borrower <- c(memberId = "integer",
residentialState = "character",
yearsEmployment = "character",
homeOwnership = "character",
annualIncome = "numeric",
incomeVerified = "character",
dtiRatio = "numeric",
lengthCreditHistory = "integer",
numTotalCreditLines = "integer",
numOpenCreditLines = "integer",
numOpenCreditLines1Year = "integer",
revolvingBalance = "numeric",
revolvingUtilizationRate = "numeric",
numDerogatoryRec = "integer",
numDelinquency2Years = "integer",
numChargeoff1year = "integer",
numInquiries6Mon = "integer")
## Point to the input data sets while specifying the classes.
Loan_text <- RxTextData(file = Loan, colClasses = column_types_loan)
Borrower_text <- RxTextData(file = Borrower, colClasses = column_types_borrower)
## Upload the data to SQL tables.
Loan_sql <- RxSqlServerData(table = "Loan", connectionString = connection_string)
Borrower_sql <- RxSqlServerData(table = "Borrower", connectionString = connection_string)
rxDataStep(inData = Loan_text, outFile = Loan_sql, overwrite = TRUE)
rxDataStep(inData = Borrower_text, outFile = Borrower_sql, overwrite = TRUE)
print("Data exported to SQL.")
# Set the compute context to SQL.
rxSetComputeContext(sql)
# Inner join of the raw tables Loan and Borrower.
rxExecuteSQLDDL(outOdbcDS, sSQLString = "DROP TABLE if exists Merged;")
rxExecuteSQLDDL(outOdbcDS, sSQLString =
"SELECT loanId, [date], purpose, isJointApplication, loanAmount, term, interestRate, monthlyPayment,
grade, loanStatus, Borrower.*
INTO Merged
FROM Loan JOIN Borrower
ON Loan.memberId = Borrower.memberId;")
print("Merging of the two tables completed.")
# Determine if Merged has missing values and compute statistics for use in Production.
## Use rxSummary function to get the names of the variables with missing values.
## Assumption: no NAs in the id variables (loan_id and member_id), target variable and date.
## For rxSummary to give correct info on characters, stringsAsFactors = T should be used.
Merged_sql <- RxSqlServerData(table = "Merged", connectionString = connection_string, stringsAsFactors = T)
col_names <- rxGetVarNames(Merged_sql)
var_names <- col_names[!col_names %in% c("loanId", "memberId", "loanStatus", "date")]
formula <- as.formula(paste("~", paste(var_names, collapse = "+")))
summary <- rxSummary(formula, Merged_sql, byTerm = TRUE)
## Get the variables types.
categorical_all <- unlist(lapply(summary$categorical, FUN = function(x){colnames(x)[1]}))
numeric_all <- setdiff(var_names, categorical_all)
## Get the variables names with missing values.
var_with_NA <- summary$sDataFrame[summary$sDataFrame$MissingObs > 0, 1]
categorical_NA <- intersect(categorical_all, var_with_NA)
numeric_NA <- intersect(numeric_all, var_with_NA)
## Compute the global means.
Summary_DF <- summary$sDataFrame
Numeric_Means <- Summary_DF[Summary_DF$Name %in% numeric_all, c("Name", "Mean")]
Numeric_Means$Mean <- round(Numeric_Means$Mean)
## Compute the global modes.
## Get the counts tables.
Summary_Counts <- summary$categorical
names(Summary_Counts) <- lapply(Summary_Counts, FUN = function(x){colnames(x)[1]})
## Compute for each count table the value with the highest count.
modes <- unlist(lapply(Summary_Counts, FUN = function(x){as.character(x[which.max(x[,2]),1])}), use.names = FALSE)
Categorical_Modes <- data.frame(Name = categorical_all, Mode = modes)
## Set the compute context to local to export the summary statistics to SQL.
## The schema of the Statistics table is adapted to the one created in the SQL code.
rxSetComputeContext('local')
Numeric_Means$Mode <- NA
Numeric_Means$type <- "float"
Categorical_Modes$Mean <- NA
Categorical_Modes$type <- "char"
Stats <- rbind(Numeric_Means, Categorical_Modes)[, c("Name", "type", "Mode", "Mean")]
colnames(Stats) <- c("variableName", "type", "mode", "mean")
## Save the statistics to SQL for Production use.
Stats_sql <- RxSqlServerData(table = "Stats", connectionString = connection_string)
rxDataStep(inData = Stats, outFile = Stats_sql, overwrite = TRUE)
## Set the compute context back to SQL.
rxSetComputeContext(sql)
# If no missing values, we move the data to a new table Merged_Cleaned.
if(length(var_with_NA) == 0){
print("No missing values: no treatment will be applied.")
rxExecuteSQLDDL(outOdbcDS, sSQLString = "DROP TABLE if exists Merged_Cleaned;")
rxExecuteSQLDDL(outOdbcDS, sSQLString = "SELECT * INTO Merged_Cleaned FROM Merged;")
missing <- 0
} else{
print("Variables containing missing values are:")
print(var_with_NA)
missing <- 1
print("Perform data cleaning in the next cell.")
}
# If applicable, NULL is replaced with the mode (categorical variables: integer or character) or mean (continuous variables).
if(missing == 1){
# Get the global means of the numeric variables with missing values.
numeric_NA_mean <- round(Stats[Stats$variableName %in% numeric_NA, "mean"])
# Get the global modes of the categorical variables with missing values.
categorical_NA_mode <- as.character(Stats[Stats$variableName %in% categorical_NA, "mode"])
# Function to replace missing values with mean or mode. It will be wrapped into rxDataStep.
Mean_Mode_Replace <- function(data) {
data <- data.frame(data, stringsAsFactors = FALSE)
# Replace numeric variables with the mean.
if(length(num_with_NA) > 0){
for(i in 1:length(num_with_NA)){
row_na <- which(is.na(data[, num_with_NA[i]]))
data[row_na, num_with_NA[i]] <- num_NA_mean[i]
}
}
# Replace categorical variables with the mode.
if(length(cat_with_NA) > 0){
for(i in 1:length(cat_with_NA)){
row_na <- which(is.na(data[, cat_with_NA[i]]))
data[row_na, cat_with_NA[i]] <- cat_NA_mode[i]
}
}
return(data)
}
# Point to the input table.
Merged_sql <- RxSqlServerData(table = "Merged", connectionString = connection_string)
# Point to the output (empty) table.
Merged_Cleaned_sql <- RxSqlServerData(table = "Merged_Cleaned", connectionString = connection_string)
# Perform the data cleaning with rxDataStep.
rxDataStep(inData = Merged_sql,
outFile = Merged_Cleaned_sql,
overwrite = TRUE,
transformFunc = Mean_Mode_Replace,
transformObjects = list(num_with_NA = numeric_NA , num_NA_mean = numeric_NA_mean,
cat_with_NA = categorical_NA, cat_NA_mode = categorical_NA_mode))
print("Data cleaned.")
}
# Point to the input table.
Merged_Cleaned_sql <- RxSqlServerData(table = "Merged_Cleaned", connectionString = connection_string)
# Point to the Output SQL table:
Merged_Labeled_sql <- RxSqlServerData(table = "Merged_Labeled", connectionString = connection_string)
# Create the target variable, isBad, based on loanStatus.
rxDataStep(inData = Merged_Cleaned_sql ,
outFile = Merged_Labeled_sql,
overwrite = TRUE,
transforms = list(
isBad = ifelse(loanStatus %in% c("Current"), "0", "1")
))
print("Label isBad created.")
# Split the cleaned data set into a Training and a Testing set.
## Create the Hash_Id table containing loanId hashed to integers.
## The advantage of using a hashing function for splitting is to permit repeatability of the experiment.
rxExecuteSQLDDL(outOdbcDS, sSQLString = "DROP TABLE if exists Hash_Id;")
rxExecuteSQLDDL(outOdbcDS, sSQLString =
"SELECT loanId, ABS(CAST(CAST(HashBytes('MD5', CAST(loanId AS varchar(20))) AS VARBINARY(64)) AS BIGINT) % 100) AS hashCode
INTO Hash_Id
FROM Merged_Labeled ;")
# Point to the training set.
Train_sql <- RxSqlServerData(sqlQuery =
"SELECT *
FROM Merged_Labeled
WHERE loanId IN (SELECT loanId from Hash_Id WHERE hashCode <= 70)",
connectionString = connection_string)
print("Splitting completed.")
# Compute optimal bins for numeric variables using the smbinning package on the Training set.
# Using smbinning has some limitations, such as:
# - The variable should have more than 10 unique values.
# - If no significant splits are found, it does not output bins.
# For this reason, we manually specify default bins based on an analysis of the variables' distributions or on smbinning run on a larger data set.
# We then overwrite them with smbinning whenever it outputs bins.
bins <- list()
# Default cutoffs for bins:
# EXAMPLE: If the cutoffs are (c1, c2, c3),
## Bin 1 = ]- inf, c1], Bin 2 = ]c1, c2], Bin 3 = ]c2, c3], Bin 4 = ]c3, + inf]
## c1 and c3 are NOT the minimum and maximum found in the training set.
bins$loanAmount <- c(14953, 18951, 20852, 22122, 24709, 28004)
bins$interestRate <- c(7.17, 10.84, 12.86, 14.47, 15.75, 18.05)
bins$monthlyPayment <- c(382, 429, 495, 529, 580, 649, 708, 847)
bins$annualIncome <- c(49402, 50823, 52089, 52885, 53521, 54881, 55520, 57490)
bins$dtiRatio <- c(9.01, 13.42, 15.92, 18.50, 21.49, 22.82, 24.67)
bins$lengthCreditHistory <- c(8)
bins$numTotalCreditLines <- c(1, 2)
bins$numOpenCreditLines <- c(3, 5)
bins$numOpenCreditLines1Year <- c(3, 4, 5, 6, 7, 9)
bins$revolvingBalance <- c(11912, 12645, 13799, 14345, 14785, 15360, 15883, 16361, 17374, 18877)
bins$revolvingUtilizationRate <- c(49.88, 60.01, 74.25, 81.96)
bins$numDerogatoryRec <- c(0, 1)
bins$numDelinquency2Years <- c(0)
bins$numChargeoff1year <- c(0)
bins$numInquiries6Mon <- c(0)
# Import the training set to be able to apply smbinning.
Train_df <- rxImport(Train_sql)
# Set the type of the label to numeric.
Train_df$isBad <- as.numeric(as.character(Train_df$isBad))
# Function to compute smbinning on every variable.
compute_bins <- function(name, data){
library(smbinning)
output <- smbinning(data, y = "isBad", x = name, p = 0.05)
if (class(output) == "list"){ # case where the binning was performed and returned bins.
cuts <- output$cuts
return (cuts)
}
}
# We apply it in parallel across cores with rxExec and the compute context set to Local Parallel.
## 3 cores will be used here so the code can run on servers with smaller RAM.
## You can increase numCoresToUse below in order to speed up the execution if using a larger server.
## numCoresToUse = -1 will enable the use of the maximum number of cores.
rxOptions(numCoresToUse = 3) # use 3 cores.
rxSetComputeContext('localpar')
bins_smb <- rxExec(compute_bins, name = rxElemArg(names(bins)), data = Train_df)
names(bins_smb) <- names(bins)
# Fill bins with bins obtained in bins_smb with smbinning.
## We replace the default values in bins if and only if smbinning returned a non NULL result.
for(name in names(bins)){
if (!is.null(bins_smb[[name]])){
bins[[name]] <- bins_smb[[name]]
}
}
# Save the bins to SQL for use in Production Stage.
## Open an Odbc connection with SQL Server.
OdbcModel <- RxOdbcData(table = "Bins", connectionString = connection_string)
rxOpen(OdbcModel, "w")
## Drop the Bins table if it exists.
if(rxSqlServerTableExists(OdbcModel@table, OdbcModel@connectionString)) {
rxSqlServerDropTable(OdbcModel@table, OdbcModel@connectionString)
}
## Create an empty Bins table.
rxExecuteSQLDDL(OdbcModel,
sSQLString = paste(" CREATE TABLE [", OdbcModel@table, "] (",
" [id] varchar(200) not null, ",
" [value] varbinary(max), ",
" constraint unique_id unique (id))",
sep = "")
)
## Write the model to SQL.
rxWriteObject(OdbcModel, "Bin Info", bins)
## Close the ODBC connection used.
rxClose(OdbcModel)
# Set back the compute context to SQL.
rxSetComputeContext(sql)
print("Bins computed/defined.")
# Function to bucketize numeric variables. It will be wrapped into rxDataStep.
bucketize <- function(data) {
for(name in names(b)) {
name2 <- paste(name, "Bucket", sep = "")
data[[name2]] <- as.character(as.numeric(cut(data[[name]], c(-Inf, b[[name]], Inf))))
}
return(data)
}
# Perform feature engineering on the cleaned data set.
# Output:
Merged_Features_sql <- RxSqlServerData(table = "Merged_Features", connectionString = connection_string)
# Create buckets for various numeric variables with the function Bucketize.
rxDataStep(inData = Merged_Labeled_sql,
outFile = Merged_Features_sql,
overwrite = TRUE,
transformFunc = bucketize,
transformObjects = list(
b = bins))
print("Feature Engineering Completed.")
# Convert strings to factors.
Merged_Features_sql <- RxSqlServerData(table = "Merged_Features", connectionString = connection_string, stringsAsFactors = TRUE)
## Get the column information.
column_info <- rxCreateColInfo(Merged_Features_sql, sortLevels = TRUE)
## Set the compute context to local to export the column_info list to SQl.
rxSetComputeContext('local')
## Open an Odbc connection with SQL Server.
OdbcModel <- RxOdbcData(table = "Column_Info", connectionString = connection_string)
rxOpen(OdbcModel, "w")
## Drop the Column Info table if it exists.
if(rxSqlServerTableExists(OdbcModel@table, OdbcModel@connectionString)) {
rxSqlServerDropTable(OdbcModel@table, OdbcModel@connectionString)
}
## Create an empty Column_Info table.
rxExecuteSQLDDL(OdbcModel,
sSQLString = paste(" CREATE TABLE [", OdbcModel@table, "] (",
" [id] varchar(200) not null, ",
" [value] varbinary(max), ",
" constraint unique_id2 unique (id))",
sep = "")
)
## Write the model to SQL.
rxWriteObject(OdbcModel, "Column Info", column_info)
## Close the ODBC connection used.
rxClose(OdbcModel)
# Set the compute context back to SQL.
rxSetComputeContext(sql)
# Point to the training set. It will be created on the fly when training models.
Train_sql <- RxSqlServerData(sqlQuery =
"SELECT *
FROM Merged_Features
WHERE loanId IN (SELECT loanId from Hash_Id WHERE hashCode <= 70)",
connectionString = connection_string, colInfo = column_info)
# Point to the testing set. It will be created on the fly when testing models.
Test_sql <- RxSqlServerData(sqlQuery =
"SELECT *
FROM Merged_Features
WHERE loanId NOT IN (SELECT loanId from Hash_Id WHERE hashCode <= 70)",
connectionString = connection_string, colInfo = column_info)
print("Column information received.")
# Write the formula after removing variables not used in the modeling.
## We remove the id variables, date, residentialState, term, and all the numeric variables that were later bucketed.
variables_all <- rxGetVarNames(Train_sql)
variables_to_remove <- c("loanId", "memberId", "loanStatus", "date", "residentialState", "term",
"loanAmount", "interestRate", "monthlyPayment", "annualIncome", "dtiRatio", "lengthCreditHistory",
"numTotalCreditLines", "numOpenCreditLines", "numOpenCreditLines1Year", "revolvingBalance",
"revolvingUtilizationRate", "numDerogatoryRec", "numDelinquency2Years", "numChargeoff1year",
"numInquiries6Mon")
training_variables <- variables_all[!(variables_all %in% c("isBad", variables_to_remove))]
formula <- as.formula(paste("isBad ~", paste(training_variables, collapse = "+")))
print("Formula written.")
# Train the logistic regression model.
logistic_model <- rxLogit(formula = formula,
data = Train_sql,
reportProgress = 0,
initialValues = NA)
## rxLogisticRegression function from the MicrosoftML library can be used instead.
## The regularization weights (l1Weight and l2Weight) can be modified for further optimization.
## The included selectFeatures function can select a certain number of optimal features based on a specified method.
## The number of features to select and the selection method can be further optimized.
#library('MicrosoftML')
#logistic_model <- rxLogisticRegression(formula = formula,
# data = Train_sql,
# type = "binary",
# l1Weight = 0.7,
# l2Weight = 0.7,
# mlTransforms = list(selectFeatures(formula, mode = mutualInformation(numFeaturesToKeep = 10))))
print("Training Logistic Regression done.")
# Get the coefficients of the logistic regression formula.
## NA means the variable has been dropped while building the model.
coeff <- logistic_model$coefficients
Logistic_Coeff <- data.frame(variable = names(coeff), coefficient = coeff, row.names = NULL)
## Order in decreasing order of absolute value of coefficients.
Logistic_Coeff <- Logistic_Coeff[order(abs(Logistic_Coeff$coefficient), decreasing = TRUE),]
# Write the table to SQL. Compute Context should be set to local.
rxSetComputeContext('local')
Logistic_Coeff_sql <- RxSqlServerData(table = "Logistic_Coeff", connectionString = connection_string)
rxDataStep(inData = Logistic_Coeff, outFile = Logistic_Coeff_sql, overwrite = TRUE)
print("Logistic Regression Coefficients written to SQL.")
# Save the fitted model to SQL Server.
## Open an Odbc connection with SQL Server.
OdbcModel <- RxOdbcData(table = "Model", connectionString = connection_string)
rxOpen(OdbcModel, "w")
## Drop the Model table if it exists.
if(rxSqlServerTableExists(OdbcModel@table, OdbcModel@connectionString)) {
rxSqlServerDropTable(OdbcModel@table, OdbcModel@connectionString)
}
## Create an empty Model table.
rxExecuteSQLDDL(OdbcModel,
sSQLString = paste(" CREATE TABLE [", OdbcModel@table, "] (",
" [id] varchar(200) not null, ",
" [value] varbinary(max), ",
" constraint unique_id3 unique (id))",
sep = "")
)
## Write the model to SQL.
rxWriteObject(OdbcModel, "Logistic Regression", logistic_model)
## Close the ODBC connection.
rxClose(OdbcModel)
# Set the compute context back to SQL.
rxSetComputeContext(sql)
print("Model uploaded to SQL.")
# Logistic Regression Scoring
# Make Predictions and save them to SQL.
Predictions_Logistic_sql <- RxSqlServerData(table = "Predictions_Logistic", connectionString = connection_string)
rxPredict(logistic_model,
data = Test_sql,
outData = Predictions_Logistic_sql,
overwrite = TRUE,
type = "response", # If you used rxLogisticRegression, this argument should be removed.
extraVarsToWrite = c("isBad", "loanId"))
print("Scoring done.")
# Evaluation.
## Import the prediction table and convert isBad to numeric for correct evaluation.
Predictions <- rxImport(Predictions_Logistic_sql)
Predictions$isBad <- as.numeric(as.character(Predictions$isBad))
## Change the names of the variables in the predictions table if you used rxLogisticRegression.
## Predictions <- Predictions[, c(1, 2, 5)]
## colnames(Predictions) <- c("isBad", "loanId", "isBad_Pred")
## Set the Compute Context to local for evaluation.
rxSetComputeContext('local')
print("Predictions imported.")
## KS PLOT AND STATISTIC.
# Split the data according to the observed value and get the cumulative distribution of predicted probabilities.
Predictions0 <- Predictions[Predictions$isBad==0,]$isBad_Pred
Predictions1 <- Predictions[Predictions$isBad==1,]$isBad_Pred
cdf0 <- ecdf(Predictions0)
cdf1 <- ecdf(Predictions1)
# Compute the KS statistic and the corresponding points on the KS plot.
## Create a sequence of predicted probabilities in its range of values.
minMax <- seq(min(Predictions0, Predictions1), max(Predictions0, Predictions1), length.out=length(Predictions0))
## Compute KS, i.e. the largest distance between the two cumulative distributions.
KS <- max(abs(cdf0(minMax) - cdf1(minMax)))
print(sprintf("KS = %s", KS))
## Find a predicted probability where the cumulative distributions have the biggest difference.
x0 <- minMax[which(abs(cdf0(minMax) - cdf1(minMax)) == KS )][1]
## Get the corresponding points on the plot.
y0 <- cdf0(x0)
y1 <- cdf1(x0)
# Plot the two cumulative distributions with the line between points of greatest distance.
plot(cdf0, verticals = TRUE, do.points = FALSE, col = "blue", main = sprintf("KS Plot; KS = %s", round(KS, digits = 3)), ylab = "Cumulative Distribution Functions", xlab = "Predicted Probabilities")
plot(cdf1, verticals = TRUE, do.points = FALSE, col = "green", add = TRUE)
legend(0.3, 0.8, c("isBad == 0", "isBad == 1"), lty = c(1, 1), lwd = c(2.5, 2.5), col = c("blue", "green"))
points(c(x0, x0), c(y0, y1), pch = 16, col = "red")
segments(x0, y0, x0, y1, col = "red", lty = "dotted")
## CONFUSION MATRIX AND VARIOUS METRICS.
# The cumulative distributions of predicted probabilities given observed values are the farthest apart for a score equal to x0.
# We can then use x0 as a decision threshold for example.
# Note that the choice of a decision threshold can be further optimized.
# Using the x0 point as a threshold, we compute the binary predictions to get the confusion matrix.
Predictions$isBad_Pred_Binary <- ifelse(Predictions$isBad_Pred < x0, 0, 1)
confusion <- table(Predictions$isBad, Predictions$isBad_Pred_Binary, dnn = c("Observed", "Predicted"))[c("0", "1"), c("0", "1")]
print(confusion)
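# Note: rows (Observed) and columns (Predicted) are both ordered ("0", "1"), so confusion[1, 1] counts
# observed 0 / predicted 0; the metrics below therefore treat isBad == 0 as the positive class.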
tp <- confusion[1, 1]
fn <- confusion[1, 2]
fp <- confusion[2, 1]
tn <- confusion[2, 2]
accuracy <- (tp + tn) / (tp + fn + fp + tn)
precision <- tp / (tp + fp)
recall <- tp / (tp + fn)
fscore <- 2 * (precision * recall) / (precision + recall)
# Print the computed metrics.
metrics <- c("Accuracy" = accuracy,
"Precision" = precision,
"Recall" = recall,
"F-Score" = fscore,
"Score Threshold" = x0)
print(metrics)
## ROC PLOT AND AUC.
ROC <- rxRoc(actualVarName = "isBad", predVarNames = "isBad_Pred", data = Predictions, numBreaks = 1000)
AUC <- rxAuc(ROC)
print(sprintf("AUC = %s", AUC))
plot(ROC, title = "ROC Curve for Logistic Regression")
## LIFT CHART.
library(ROCR) # prediction() and performance() below come from the ROCR package (harmless if it was already attached earlier in the script).
pred <- prediction(predictions = Predictions$isBad_Pred, labels = Predictions$isBad, label.ordering = c("0", "1"))
perf <- performance(pred, measure = "lift", x.measure = "rpp")
plot(perf, main = c("Lift Chart"))
abline(h = 1.0, col = "purple")
# Space out the scores (predicted probability of default) for interpretability with a sigmoid.
## Define the sigmoid: it is centered at 1.2*mean score to ensure a good spread of scores.
dev_test_avg_score <- mean(Predictions$isBad_Pred)
sigmoid <- function(x){
return(1/(1 + exp(-20*(x-1.2*dev_test_avg_score))))
}
## Apply the function.
Predictions$transformedScore <- sigmoid(Predictions$isBad_Pred)
## Changes can be observed with the histograms and summary statistics.
#summary(Predictions$isBad_Pred)
#hist(Predictions$isBad_Pred)
#summary(Predictions$transformedScore)
#hist(Predictions$transformedScore)
## Save the average score on the test set for the Production stage.
Scores_Average <- data.frame(avg = dev_test_avg_score)
Scores_Average_sql <- RxSqlServerData(table = "Scores_Average", connectionString = connection_string)
rxDataStep(inData = Scores_Average, outFile = Scores_Average_sql, overwrite = TRUE)
print("Scores Spaced out in [0,1]")
# Compute operational metrics.
## Bin the scores based on quantiles.
bins <- rxQuantile("transformedScore", Predictions, probs = c(seq(0, 0.99, 0.01)))
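## Set the lowest cutoff to 0 so that every transformed score falls into one of the buckets.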
bins[["0%"]] <- 0
## We consider 100 decision thresholds: the lower bound of each bin.
## Compute the expected rates of bad loans for loans with scores higher than each decision threshold.
badrate <- rep(0, length(bins))
for(i in 1:length(bins))
{
selected <- Predictions$isBad[Predictions$transformedScore >= bins[i]]
badrate[i] <- sum(selected)/length(selected)
}
## Save the data points to a data frame and load it to SQL.
Operational_Metrics <- data.frame(scorePercentile = names(bins), scoreCutoff = bins, badRate = badrate, row.names = NULL)
Operational_Metrics_sql <- RxSqlServerData(table = "Operational_Metrics", connectionString = connection_string)
rxDataStep(inData = Operational_Metrics, outFile = Operational_Metrics_sql, overwrite = TRUE)
print("Operational Metrics computed.")
# Apply the score transformation.
## Deal with the bottom 1-99 percentiles.
for (i in seq(1, (nrow(Operational_Metrics) - 1))){
rows <- which(Predictions$transformedScore <= Operational_Metrics$scoreCutoff[i + 1] &
Predictions$transformedScore > Operational_Metrics$scoreCutoff[i])
Predictions[rows, c("scorePercentile")] <- as.character(Operational_Metrics$scorePercentile[i + 1])
Predictions[rows, c("badRate")] <- Operational_Metrics$badRate[i]
Predictions[rows, c("scoreCutoff")] <- Operational_Metrics$scoreCutoff[i]
}
## Deal with the top 1% higher scores (last bucket).
rows <- which(Predictions$transformedScore > Operational_Metrics$scoreCutoff[100])
Predictions[rows, c("scorePercentile")] <- "Top 1%"
Predictions[rows, c("scoreCutoff")] <- Operational_Metrics$scoreCutoff[100]
Predictions[rows, c("badRate")] <- Operational_Metrics$badRate[100]
## Save the transformed scores to SQL.
Scores_sql <- RxSqlServerData(table = "Scores", connectionString = connection_string)
rxDataStep(inData = Predictions[, c("loanId", "transformedScore", "scorePercentile", "scoreCutoff", "badRate", "isBad")],
outFile = Scores_sql,
overwrite = TRUE)
print("Scores transformed.")
# Plot the rates of bad loans for various thresholds obtained through binning.
plot(Operational_Metrics$badRate, main = c("Bad Loans Rates Among those with Scores Higher than Decision Thresholds"), xlab = "Default Score Percentiles", ylab = "Expected Rate of Bad Loans")
## EXAMPLE:
## If the score cutoff of the 91st score percentile is 0.9834 and we read a bad rate of 0.6449,
## this means that using 0.9834 as the threshold to classify loans as bad gives a bad rate of 64.49%.
## This bad rate is equal to the number of observed bad loans over the total number of loans with a score greater than the threshold.
# Close the ODBC connection to the master database.
rxClose(outOdbcDS)
# MLP and Common Deep-Learning Tricks
Starting from an MLP as the baseline model, we introduce some common deep-learning tricks: weight initialization, activation functions, optimizers, batch normalization, dropout, and model ensembling.
```
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
print(tf.__version__)
```
## Loading the data
```
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape([x_train.shape[0], -1])
x_test = x_test.reshape([x_test.shape[0], -1])
print(x_train.shape, ' ', y_train.shape)
print(x_test.shape, ' ', y_test.shape)
```
## Baseline model
```
model = keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(784,)),
layers.Dense(64, activation='relu'),
layers.Dense(64, activation='relu'),
layers.Dense(10, activation='softmax')
])
model.compile(optimizer=keras.optimizers.Adam(),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
model.summary()
history = model.fit(x_train, y_train, batch_size=256, epochs=100, validation_split=0.3, verbose=0)
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.legend(['training', 'validation'], loc='upper left')
plt.show()
result = model.evaluate(x_test, y_test)
```
## Weight initialization
```
model = keras.Sequential([
layers.Dense(64, activation='relu', kernel_initializer='he_normal', input_shape=(784,)),
layers.Dense(64, activation='relu', kernel_initializer='he_normal'),
layers.Dense(64, activation='relu', kernel_initializer='he_normal'),
layers.Dense(10, activation='softmax')
])
model.compile(optimizer=keras.optimizers.Adam(),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
model.summary()
history = model.fit(x_train, y_train, batch_size=256, epochs=100, validation_split=0.3, verbose=0)
%matplotlib inline
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.legend(['training', 'validation'], loc='upper left')
plt.show()
result = model.evaluate(x_test, y_test)
```
## Activation functions
```
model = keras.Sequential([
layers.Dense(64, activation='sigmoid', input_shape=(784,)),
layers.Dense(64, activation='sigmoid'),
layers.Dense(64, activation='sigmoid'),
layers.Dense(10, activation='softmax')
])
model.compile(optimizer=keras.optimizers.Adam(),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
model.summary()
history = model.fit(x_train, y_train, batch_size=256, epochs=100, validation_split=0.3, verbose=0)
%matplotlib inline
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.legend(['training', 'validation'], loc='upper left')
plt.show()
result = model.evaluate(x_test, y_test)
```
## Optimizers
```
model = keras.Sequential([
layers.Dense(64, activation='sigmoid', input_shape=(784,)),
layers.Dense(64, activation='sigmoid'),
layers.Dense(64, activation='sigmoid'),
layers.Dense(10, activation='softmax')
])
model.compile(optimizer=keras.optimizers.SGD(),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
model.summary()
history = model.fit(x_train, y_train, batch_size=256, epochs=100, validation_split=0.3, verbose=0)
%matplotlib inline
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.legend(['training', 'validation'], loc='upper left')
plt.show()
result = model.evaluate(x_test, y_test)
```
## Batch normalization
```
model = keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(784,)),
layers.BatchNormalization(),
layers.Dense(64, activation='relu'),
layers.BatchNormalization(),
layers.Dense(64, activation='relu'),
layers.BatchNormalization(),
layers.Dense(10, activation='softmax')
])
model.compile(optimizer=keras.optimizers.SGD(),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
model.summary()
history = model.fit(x_train, y_train, batch_size=256, epochs=100, validation_split=0.3, verbose=0)
%matplotlib inline
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.legend(['training', 'validation'], loc='upper left')
plt.show()
result = model.evaluate(x_test, y_test)
```
## Dropout
```
model = keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(784,)),
layers.Dropout(0.2),
layers.Dense(64, activation='relu'),
layers.Dropout(0.2),
layers.Dense(64, activation='relu'),
layers.Dropout(0.2),
layers.Dense(10, activation='softmax')
])
model.compile(optimizer=keras.optimizers.SGD(),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
model.summary()
history = model.fit(x_train, y_train, batch_size=256, epochs=100, validation_split=0.3, verbose=0)
%matplotlib inline
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.legend(['training', 'validation'], loc='upper left')
plt.show()
result = model.evaluate(x_test, y_test)
```
## Model ensembling
Below, model ensembling is done using a voting scheme.
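For intuition: with `voting='soft'`, the ensemble averages the class-probability outputs of the individual models and predicts the class with the highest mean probability (hard voting would instead take a majority vote over the predicted labels). A minimal, self-contained sketch of that idea with made-up probabilities, independent of the Keras models used below:
```
import numpy as np

# Hypothetical class probabilities from three models for two samples
# (rows = samples, columns = classes); the numbers are made up for illustration.
p1 = np.array([[0.7, 0.2, 0.1], [0.1, 0.5, 0.4]])
p2 = np.array([[0.6, 0.3, 0.1], [0.2, 0.2, 0.6]])
p3 = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])

avg = (p1 + p2 + p3) / 3       # soft voting: average the predicted probabilities
pred = np.argmax(avg, axis=1)  # pick the class with the highest mean probability
print(avg)
print(pred)                    # -> [0 2]
```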
```
import numpy as np
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.metrics import accuracy_score
def mlp_model():
model = keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(784,)),
layers.Dropout(0.2),
layers.Dense(64, activation='relu'),
layers.Dropout(0.2),
layers.Dense(64, activation='relu'),
layers.Dropout(0.2),
layers.Dense(10, activation='softmax')
])
model.compile(optimizer=keras.optimizers.SGD(),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
return model
model1 = KerasClassifier(build_fn=mlp_model, epochs=100, verbose=0)
model2 = KerasClassifier(build_fn=mlp_model, epochs=100, verbose=0)
model3 = KerasClassifier(build_fn=mlp_model, epochs=100, verbose=0)
ensemble_clf = VotingClassifier(estimators=[
('model1', model1), ('model2', model2), ('model3', model3)
], voting='soft')
ensemble_clf.fit(x_train, y_train)
y_pred = ensemble_clf.predict(x_test)
print('acc: ', accuracy_score(y_pred, y_test))
```
## Using all the tricks together
```
from tensorflow.keras import layers
import numpy as np
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.metrics import accuracy_score
def mlp_model():
model = keras.Sequential([
layers.Dense(64, activation='relu', kernel_initializer='he_normal', input_shape=(784,)),
layers.BatchNormalization(),
layers.Dropout(0.2),
layers.Dense(64, activation='relu', kernel_initializer='he_normal'),
layers.BatchNormalization(),
layers.Dropout(0.2),
layers.Dense(64, activation='relu', kernel_initializer='he_normal'),
layers.BatchNormalization(),
layers.Dropout(0.2),
layers.Dense(10, activation='softmax')
])
model.compile(optimizer=keras.optimizers.SGD(),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
return model
model1 = KerasClassifier(build_fn=mlp_model, epochs=100, verbose=0)
model2 = KerasClassifier(build_fn=mlp_model, epochs=100, verbose=0)
model3 = KerasClassifier(build_fn=mlp_model, epochs=100, verbose=0)
model4 = KerasClassifier(build_fn=mlp_model, epochs=100, verbose=0)
ensemble_clf = VotingClassifier(estimators=[
('model1', model1), ('model2', model2), ('model3', model3),('model4', model4)])
ensemble_clf.fit(x_train, y_train)
y_predict = ensemble_clf.predict(x_test)
print('acc: ', accuracy_score(y_predict, y_test))
```
# Proof of concept
## Local placement and post-hoc global evaluation for adjustment with a random forest
```
import sys
print(sys.version) #Python version
import numpy as np
import pandas as pd
import networkx as nx
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import cross_val_score, train_test_split, RepeatedKFold, GridSearchCV, ParameterGrid
import multiprocessing
import warnings
warnings.filterwarnings('once')
```
## Modelling
### Topology
```
import networkx as nx
G = nx.Graph()
G.add_edges_from([('A', 'B'),('C','D'),('G','D')], latency=1)
G.add_edges_from([('D','A'),('D','E'),('B','D'),('D','E')], latency=2)
G.add_edges_from([('B','C'),('E','F')], latency=3)
G.add_edges_from([('C','F')], latency=4)
cost_map= {'A': 1.0,'D': 6.0,'C': 10.0}
costs = [cost_map.get(node, 3.0) for node in G.nodes()]
edge_labels= dict([((u,v,),d['latency']) for u,v,d in G.edges(data=True)])
node_labels = {node:node for node in G.nodes()}
pos=nx.spring_layout(G)
nx.draw_networkx_edge_labels(G,pos,edge_labels=edge_labels)
nx.draw_networkx_labels(G, pos, labels=node_labels,font_color="white")
nx.draw(G,pos, node_color = costs, node_size=1500)
plt.show()
G.degree()
max_latency = 7
nx.betweenness_centrality(G)
```
### Users
Static assignment of users to nodes. Only one user at each of these nodes:
```
wl_users = {'C': 1,'G': 1,'E': 1}
users_at_node = [wl_users.get(node, 0) for node in G.nodes()]
```
### Services
Assignment of services to nodes. This will change... for now it is just a test.
```
alloc_services = {"A":1}
alloc_services_at_node = [alloc_services.get(node, 0) for node in G.nodes()]
print(alloc_services_at_node)
#Test A: Routing services
# Compute the routes between services and users
available_services = dict([k,alloc_services_at_node[ix]] for ix,k in enumerate(G.nodes()) if alloc_services_at_node[ix]>0 )
print(available_services)
print(wl_users)
routing_service = {} #key: user , value: (service,latency)
for (node,value) in wl_users.items():
more_close = 0
node_close = None
for a_service in available_services:
path_length = nx.shortest_path_length(G,node,a_service,weight="latency")
if node_close == None:
more_close = path_length
node_close = a_service
if path_length<more_close:
more_close = path_length
node_close = a_service
routing_service[node]=(node_close,more_close)
print(routing_service) #key: user , value: (service,latency)
# Wrap the above into a function
arrayofservices = [1, 1, 1, 1, 0, 1, 0]
def get_routings(arrayofservices):
available_services = dict([k,arrayofservices[ix]] for ix,k in enumerate(G.nodes()) if arrayofservices[ix]>0 )
if len(available_services)==0: return dict()
routing_service = {} #key: user , value: (service,latency)
for (node,value) in wl_users.items():
more_close = 0
node_close = None
for a_service in available_services:
path_length = nx.shortest_path_length(G,node,a_service,weight="latency")
if node_close == None:
more_close = path_length
node_close = a_service
if path_length<more_close:
more_close = path_length
node_close = a_service
routing_service[node]=(node_close,more_close)
return routing_service
print(get_routings(arrayofservices))
```
## Global evaluation function
How we evaluate the model at the global level. For this test, something very simple (formalized right below):
- Goal: Latency == 0. All services with latency 0.
- Goal: One user with one service
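In formula form (this mirrors the `compute_eval` cell below), with $U$ the set of user nodes, $l_u$ the weighted shortest-path latency from user $u$ to its closest service, $l_{max}$ the maximum latency, and $S$ the set of distinct service nodes actually used:

$$\mathrm{eval} = \frac{1}{2}\left(\frac{1}{|U|}\sum_{u \in U}\frac{\lvert l_u - l_{max}\rvert}{l_{max}} \;+\; \frac{|S|}{|U|}\right)$$

Assuming no user latency exceeds $l_{max}$, both terms lie in $[0, 1]$ and higher is better: the first term approaches 1 as latencies approach 0, and the second approaches 1 when each user is served by a distinct service.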
```
#Average latency
avg_latency = np.array([v[1] for k,v in routing_service.items()])
avg_latency = (np.abs((avg_latency-max_latency)/max_latency)).mean()
print(avg_latency)
#Total services
services = [v[0] for k,v in routing_service.items()]
total_services = len(np.unique(services))
print(total_services)
print(len(wl_users))
total_services/len(wl_users)
#Evaluation value
def assignment_cardinality_eval(total_services,users):
return total_services/users
def global_eval(avg_latency,total_services,wl_users):
    # Parentheses so the division applies to the whole sum, matching compute_eval further down.
    return (avg_latency + assignment_cardinality_eval(total_services, len(wl_users))) / 2.0
geval = global_eval(avg_latency,total_services,wl_users)
print(geval)
# ALL in one function
print(routing_service)
def compute_eval(routing_service):
avg_latency = np.array([v[1] for k,v in routing_service.items()])
avg_latency = (np.abs((avg_latency-max_latency)/max_latency)).mean()
services = [v[0] for k,v in routing_service.items()]
total_services = len(np.unique(services))
assigment_eval = total_services/len(wl_users)
return (avg_latency+assigment_eval)/2.
print(compute_eval(routing_service))
```
## Design considerations for the RF input:
- Dimensionality problem - sample representation: what granularity is introduced into the data model? > Columns
- Evaluation goals
```
actions_labels = ["None","Migrate","Replicate","Undeploy"]
actions = np.arange(len(actions_labels)) #dummies
print(actions)
```
### The first sample
```
columns_UsersatNode = ["user_atNode_%s"%k for k in G.nodes()]
columns_ServicesAtNodeB = ["services_atNode_bAction_%s"%k for k in G.nodes()]
columns_ServicesAtNodeA = ["services_atNode_aAction_%s"%k for k in G.nodes()]
columns = columns_UsersatNode + columns_ServicesAtNodeB +["Action","OnService"]+ columns_ServicesAtNodeA + ["Fit"]
print(columns)
users_at_node
services_bAction = [alloc_services.get(node, 0) for node in G.nodes()]
print(services_bAction)
routing = get_routings(services_bAction)
eval_bActions = compute_eval(routing)
print(eval_bActions)
action = [0]
on_service = [0]
services_aAction = services_bAction # the "None" action leaves the allocation unchanged
print(services_aAction)
routing = get_routings(services_aAction)
eval_aActions = compute_eval(routing)
print(eval_aActions)
speedup = (eval_aActions/eval_bActions)-1.0
print(speedup)
sample = users_at_node+services_bAction+action+on_service+services_aAction+[speedup]
print(sample)
```
### Building a random sample
```
np.random.seed(0)
users_at_node = np.array(users_at_node)
#Current STATE
r_serv_bAction = np.random.randint(0,2,len(G.nodes()))
print(r_serv_bAction)
routing = get_routings(r_serv_bAction)
eval_bActions = compute_eval(routing)
print(eval_bActions)
#Action and on specific service
action = np.random.choice(actions,1)
print(action)
current_services = np.flatnonzero(r_serv_bAction)
if len(current_services)==0:
action = [0]
else:
on_service = np.random.choice(current_services,1)
print(actions_labels[action[0]])
print(on_service)
#State After action
#actions_labels = ["None","Migrate","Replicate","Undeploy"]
print(r_serv_bAction)
print(actions_labels[action[0]])
print(on_service)
# Copy before mutating: a plain assignment would alias the numpy array and also
# overwrite the before-action state that is stored in the sample.
if action[0]==0:#"None":
    r_serv_aAction = r_serv_bAction.copy()
elif action[0]==1:#"Migrate":
    r_serv_aAction = r_serv_bAction.copy()
    dst_service = np.random.choice(np.where(r_serv_bAction<1)[0],1)
    r_serv_aAction[on_service]=0
    r_serv_aAction[dst_service]=1
elif action[0]==2:#"Replicate":
    r_serv_aAction = r_serv_bAction.copy()
    dst_service = np.random.choice(np.where(r_serv_bAction<1)[0],1)
    r_serv_aAction[dst_service]=1
elif action[0]==3:#"Undeploy":
    r_serv_aAction = r_serv_bAction.copy()
    r_serv_aAction[on_service]=0
print(r_serv_aAction)
routing = get_routings(r_serv_aAction)
eval_aActions = compute_eval(routing)
print(eval_aActions)
print(eval_bActions)
speedup = np.array((eval_aActions/eval_bActions)-1.0)
print(speedup)
sample = np.hstack([users_at_node,r_serv_bAction,action,on_service,r_serv_aAction,speedup])
print(sample)
```
## Generating N random samples
```
import math
samples = 1000
df = pd.DataFrame(columns=columns)
for i in range(samples):
#Current STATE
r_serv_bAction = np.random.randint(0,2,len(G.nodes()))
# print("---")
# print("B ",r_serv_bAction)
routing = get_routings(r_serv_bAction)
eval_bActions = compute_eval(routing)
#Action and on specific service
action = np.random.choice(actions,1)
current_services = np.flatnonzero(r_serv_bAction)
# print(actions_labels[action[0]])
if len(current_services)==0:
action = [0]
else:
on_service = np.random.choice(current_services,1)
# print(on_service)
#Computing action
    r_serv_aAction = r_serv_bAction.copy() # copy so the chosen action does not also overwrite the stored before-action state
if action[0]==0:#"None":
pass
elif action[0]==1:#"Migrate":
options = np.where(r_serv_bAction<1)[0]
if len(options)>0:
dst_service = np.random.choice(options,1)
r_serv_aAction[dst_service]=1
r_serv_aAction[on_service]=0
elif action[0]==2:#"Replicate":
options = np.where(r_serv_bAction<1)[0]
if len(options)>0:
dst_service = np.random.choice(options,1)
r_serv_aAction[dst_service]=1
elif action[0]==3:#"Undeploy":
r_serv_aAction[on_service]=0
# print("A ",r_serv_bAction)
# print("---")
routing = get_routings(r_serv_aAction)
eval_aActions = compute_eval(routing)
speedup = (eval_aActions/eval_bActions)-1.0
if math.isnan(speedup):
speedup=-1.0
sample = np.hstack([users_at_node,r_serv_bAction,action,on_service,r_serv_aAction,speedup])
df.loc[len(df)] = sample
df.head()
# Make sure there is nothing odd in the generated samples.
# Earlier runs produced samples with a NaN Fit when all services were removed; that bug has been fixed.
is_NaN = df.isnull()
row_has_NaN = is_NaN.any(axis=1)
rows_with_NaN = df[row_has_NaN]
rows_with_NaN
```
## Random forest model
```
import sklearn
print(sklearn.__version__) # I need to update this...
#Splitting data
X_train, X_test, y_train, y_test = train_test_split(
df.drop(columns = "Fit"),
df['Fit'],
random_state = 0)
print(len(X_train))
print(len(X_test))
#Training
model = RandomForestRegressor(
n_estimators = 10,
criterion = 'squared_error',
max_depth = None,
max_features = 'auto',
oob_score = False,
n_jobs = -1,
random_state = 123
)
model.fit(X_train, y_train)
forecasting = model.predict(X = X_test)
rmse = np.sqrt(mean_squared_error(
    y_true = y_test,
    y_pred = forecasting)) # take the square root so the reported value is actually an RMSE
print("Test RMSE: %f" % rmse)
```
### A simple specific sample
We check that the model can actually advise us! Almost everything is well placed except for one pending migration from B to C.
```
#Current STATE
print(wl_users) #[a,b,c,d,e,f,g]
r_serv_bAction = [0,1,0,0,1,0,1]
routing = get_routings(r_serv_bAction)
eval_bActions = compute_eval(routing)
action = 1
on_service = 1
r_serv_aAction = [0,0,1,0,1,0,1]
routing = get_routings(r_serv_aAction)
eval_aActions = compute_eval(routing)
speedup = (eval_aActions/eval_bActions)-1.0
print(speedup)
sample1 = np.hstack([users_at_node,r_serv_bAction,action,on_service,r_serv_aAction]) #migrate ok
sample2 = np.hstack([users_at_node,r_serv_bAction,0,on_service,r_serv_bAction]) # none bad
print(sample1)
print(sample2)
dftest = pd.DataFrame(columns=columns[0:-1])
dftest.loc[len(dftest)] = sample1
dftest.loc[len(dftest)] = sample2
dftest
forecasting = model.predict(X = dftest)
print(forecasting)
#Given two actions - migrating to a better position or doing nothing - migrating is the better choice!
```
#### Importance of the predictors
```
importancia_predictores = pd.DataFrame(
{'predictor': df.drop(columns = "Fit").columns,
'importancia': model.feature_importances_}
)
importancia_predictores.sort_values('importancia', ascending=False)
```
## Test B: Including constant variables from the topology
```
df.head()
df.shape
columns = ["degree_%s"%n for n in G.nodes()]
columns += ["centrality_%s"%n for n in G.nodes()]
node_degree = list(dict(G.degree()).values())
node_centrality = list(nx.betweenness_centrality(G).values())
dfConstant = pd.DataFrame(columns=columns)
dfConstant.loc[0] = node_degree+node_centrality
dfConstant
dfj = df.merge(dfConstant, how='cross')
X_train, X_test, y_train, y_test = train_test_split(
dfj.drop(columns = "Fit"),
dfj['Fit'],
random_state = 0)
print(len(X_train))
print(len(X_test))
model = RandomForestRegressor(
n_estimators = 10,
criterion = 'squared_error',
max_depth = None,
max_features = 'auto',
oob_score = False,
n_jobs = -1,
random_state = 123
)
model.fit(X_train, y_train)
forecasting = model.predict(X = X_test)
rmse = np.sqrt(mean_squared_error(
    y_true = y_test,
    y_pred = forecasting)) # take the square root so the reported value is actually an RMSE
print("Test RMSE: %f" % rmse)
from sklearn.inspection import permutation_importance
importancia = permutation_importance(
estimator = model,
X = X_train,
y = y_train,
n_repeats = 5,
scoring = 'neg_root_mean_squared_error',
n_jobs = multiprocessing.cpu_count() - 1,
random_state = 123
)
# Store the results (mean and standard deviation) in a dataframe
df_importancia = pd.DataFrame(
{k: importancia[k] for k in ['importances_mean', 'importances_std']}
)
df_importancia['feature'] = X_train.columns
df_importancia.sort_values('importances_mean', ascending=False)
# Plot
fig, ax = plt.subplots(figsize=(5, 8))
df_importancia = df_importancia.sort_values('importances_mean', ascending=True)
ax.barh(
df_importancia['feature'],
df_importancia['importances_mean'],
xerr=df_importancia['importances_std'],
align='center',
alpha=0
)
ax.plot(
df_importancia['importances_mean'],
df_importancia['feature'],
marker="D",
linestyle="",
alpha=0.8,
color="r"
)
ax.set_title('Importance of the predictors (train)')
ax.set_xlabel('Error increase after permutation');
```
### Issues
- Related work with RF & Cloud/Fog/Edge computing
- Improve the fitness function
- Hyperparameter analysis
- More complex scenario -> version.B?
## Related work:
- IMPROVING RESPONSE TIME OF TASK OFFLOADING BY RANDOM FOREST, EXTRA-TREES AND ADABOOST CLASSIFIERS IN MOBILE FOG COMPUTING https://www.ejmanager.com/mnstemps/71/71-1590557276.pdf?t=1636034459 Naive criteria: Authentication Confidentiality Integrity Availability Capacity Speed Cost
- ...
|
github_jupyter
|
import sys
print(sys.version) #Python version
import numpy as np
import pandas as pd
import networkx as nx
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import cross_val_score, train_test_split, RepeatedKFold, GridSearchCV, ParameterGrid
import multiprocessing
import warnings
warnings.filterwarnings('once')
import networkx as nx
G = nx.Graph()
G.add_edges_from([('A', 'B'),('C','D'),('G','D')], latency=1)
G.add_edges_from([('D','A'),('D','E'),('B','D'),('D','E')], latency=2)
G.add_edges_from([('B','C'),('E','F')], latency=3)
G.add_edges_from([('C','F')], latency=4)
cost_map= {'A': 1.0,'D': 6.0,'C': 10.0}
costs = [cost_map.get(node, 3.0) for node in G.nodes()]
edge_labels= dict([((u,v,),d['latency']) for u,v,d in G.edges(data=True)])
node_labels = {node:node for node in G.nodes()}
pos=nx.spring_layout(G)
nx.draw_networkx_edge_labels(G,pos,edge_labels=edge_labels)
nx.draw_networkx_labels(G, pos, labels=node_labels,font_color="white")
nx.draw(G,pos, node_color = costs, node_size=1500)
plt.show()
G.degree()
max_latency = 7
nx.betweenness_centrality(G)
wl_users = {'C': 1,'G': 1,'E': 1}
users_at_node = [wl_users.get(node, 0) for node in G.nodes()]
alloc_services = {"A":1}
alloc_services_at_node = [alloc_services.get(node, 0) for node in G.nodes()]
print(alloc_services_at_node)
#Test A: Routing services
# Calculamos la ruta de los servicios - usuarios
available_services = dict([k,alloc_services_at_node[ix]] for ix,k in enumerate(G.nodes()) if alloc_services_at_node[ix]>0 )
print(available_services)
print(wl_users)
routing_service = {} #key: user , value: (service,latency)
for (node,value) in wl_users.items():
more_close = 0
node_close = None
for a_service in available_services:
path_length = nx.shortest_path_length(G,node,a_service,weight="latency")
if node_close == None:
more_close = path_length
node_close = a_service
if path_length<more_close:
more_close = path_length
node_close = a_service
routing_service[node]=(node_close,more_close)
print(routing_service) #key: user , value: (service,latency)
#Generamos una funcion de lo anterior
arrayofservices = [1, 1, 1, 1, 0, 1, 0]
def get_routings(arrayofservices):
available_services = dict([k,arrayofservices[ix]] for ix,k in enumerate(G.nodes()) if arrayofservices[ix]>0 )
if len(available_services)==0: return dict()
routing_service = {} #key: user , value: (service,latency)
for (node,value) in wl_users.items():
more_close = 0
node_close = None
for a_service in available_services:
path_length = nx.shortest_path_length(G,node,a_service,weight="latency")
if node_close == None:
more_close = path_length
node_close = a_service
if path_length<more_close:
more_close = path_length
node_close = a_service
routing_service[node]=(node_close,more_close)
return routing_service
print(get_routings(arrayofservices))
#Average latency
avg_latency = np.array([v[1] for k,v in routing_service.items()])
avg_latency = (np.abs((avg_latency-max_latency)/max_latency)).mean()
print(avg_latency)
#Total services
services = [v[0] for k,v in routing_service.items()]
total_services = len(np.unique(services))
print(total_services)
print(len(wl_users))
total_services/len(wl_users)
#Evaluation value
def assignment_cardinality_eval(total_services,users):
return total_services/users
def global_eval(avg_latency,total_services,wl_users):
return avg_latency+assignment_cardinality_eval(total_services,len(wl_users))/2.0
geval = global_eval(avg_latency,total_services,wl_users)
print(geval)
# ALL in one function
print(routing_service)
def compute_eval(routing_service):
avg_latency = np.array([v[1] for k,v in routing_service.items()])
avg_latency = (np.abs((avg_latency-max_latency)/max_latency)).mean()
services = [v[0] for k,v in routing_service.items()]
total_services = len(np.unique(services))
assigment_eval = total_services/len(wl_users)
return (avg_latency+assigment_eval)/2.
print(compute_eval(routing_service))
actions_labels = ["None","Migrate","Replicate","Undeploy"]
actions = np.arange(len(actions_labels)) #dummies
print(actions)
columns_UsersatNode = ["user_atNode_%s"%k for k in G.nodes()]
columns_ServicesAtNodeB = ["services_atNode_bAction_%s"%k for k in G.nodes()]
columns_ServicesAtNodeA = ["services_atNode_aAction_%s"%k for k in G.nodes()]
columns = columns_UsersatNode + columns_ServicesAtNodeB +["Action","OnService"]+ columns_ServicesAtNodeA + ["Fit"]
print(columns)
users_at_node
services_bAction = [alloc_services.get(node, 0) for node in G.nodes()]
print(services_bAction)
routing = get_routings(services_bAction)
eval_bActions = compute_eval(routing)
print(eval_bActions)
action = [0]
on_service = [0]
services_aAction = services_bAction #its None actions
print(services_aAction)
routing = get_routings(services_aAction)
eval_aActions = compute_eval(routing)
print(eval_aActions)
speedup = (eval_aActions/eval_bActions)-1.0
print(speedup)
sample = users_at_node+services_bAction+action+on_service+services_aAction+[speedup]
print(sample)
np.random.seed(0)
users_at_node = np.array(users_at_node)
#Current STATE
r_serv_bAction = np.random.randint(0,2,len(G.nodes()))
print(r_serv_bAction)
routing = get_routings(r_serv_bAction)
eval_bActions = compute_eval(routing)
print(eval_bActions)
#Action and on specific service
action = np.random.choice(actions,1)
print(action)
current_services = np.flatnonzero(r_serv_bAction)
if len(current_services)==0:
action = [0]
else:
on_service = np.random.choice(current_services,1)
print(actions_labels[action[0]])
print(on_service)
#State After action
#actions_labels = ["None","Migrate","Replicate","Undeploy"]
print(r_serv_bAction)
print(actions_labels[action[0]])
print(on_service)
if action[0]==0:#"None":
r_serv_aAction = r_serv_bAction
elif action[0]==1:#"Migrate":
r_serv_aAction = r_serv_bAction
dst_service = np.random.choice(np.where(r_serv_bAction<1)[0],1)
r_serv_aAction[on_service]=0
r_serv_aAction[dst_service]=1
elif action[0]==2:#"Replicate":
r_serv_aAction = r_serv_bAction
dst_service = np.random.choice(np.where(r_serv_bAction<1)[0],1)
r_serv_aAction[dst_service]=1
elif action[0]==3:#"Undeploy":
r_serv_aAction = r_serv_bAction
r_serv_aAction[on_service]=0
print(r_serv_aAction)
routing = get_routings(r_serv_aAction)
eval_aActions = compute_eval(routing)
print(eval_aActions)
print(eval_bActions)
speedup = np.array((eval_aActions/eval_bActions)-1.0)
print(speedup)
sample = np.hstack([users_at_node,r_serv_bAction,action,on_service,r_serv_aAction,speedup])
print(sample)
import math
samples = 1000
df = pd.DataFrame(columns=columns)
for i in range(samples):
#Current STATE
r_serv_bAction = np.random.randint(0,2,len(G.nodes()))
# print("---")
# print("B ",r_serv_bAction)
routing = get_routings(r_serv_bAction)
eval_bActions = compute_eval(routing)
#Action and on specific service
action = np.random.choice(actions,1)
current_services = np.flatnonzero(r_serv_bAction)
# print(actions_labels[action[0]])
if len(current_services)==0:
action = [0]
else:
on_service = np.random.choice(current_services,1)
# print(on_service)
#Computing action
r_serv_aAction = r_serv_bAction
if action[0]==0:#"None":
pass
elif action[0]==1:#"Migrate":
options = np.where(r_serv_bAction<1)[0]
if len(options)>0:
dst_service = np.random.choice(options,1)
r_serv_aAction[dst_service]=1
r_serv_aAction[on_service]=0
elif action[0]==2:#"Replicate":
options = np.where(r_serv_bAction<1)[0]
if len(options)>0:
dst_service = np.random.choice(options,1)
r_serv_aAction[dst_service]=1
elif action[0]==3:#"Undeploy":
r_serv_aAction[on_service]=0
# print("A ",r_serv_bAction)
# print("---")
routing = get_routings(r_serv_aAction)
eval_aActions = compute_eval(routing)
speedup = (eval_aActions/eval_bActions)-1.0
if math.isnan(speedup):
speedup=-1.0
sample = np.hstack([users_at_node,r_serv_bAction,action,on_service,r_serv_aAction,speedup])
df.loc[len(df)] = sample
df.head()
# Nos aseguramos que no haya nada raro.
# Previas pruebas: hay samples with Fit. con NaN !! cuando se borran todos los servicios. Bug corregido.
is_NaN = df.isnull()
row_has_NaN = is_NaN.any(axis=1)
rows_with_NaN = df[row_has_NaN]
rows_with_NaN
import sklearn
print(sklearn.__version__) #Tengo que actualizar...
#Spliting data
X_train, X_test, y_train, y_test = train_test_split(
df.drop(columns = "Fit"),
df['Fit'],
random_state = 0)
print(len(X_train))
print(len(X_test))
#Training
model = RandomForestRegressor(
n_estimators = 10,
criterion = 'squared_error',
max_depth = None,
max_features = 'auto',
oob_score = False,
n_jobs = -1,
random_state = 123
)
model.fit(X_train, y_train)
forecasting = model.predict(X = X_test)
rmse = mean_squared_error(
y_true = y_test,
y_pred = forecasting)
print("El error (rmse) de test es: %f"%rmse)
#Current STATE
print(wl_users) #[a,b,c,d,e,f,g]
r_serv_bAction = [0,1,0,0,1,0,1]
routing = get_routings(r_serv_bAction)
eval_bActions = compute_eval(routing)
action = 1
on_service = 1
r_serv_aAction = [0,0,1,0,1,0,1]
routing = get_routings(r_serv_aAction)
eval_aActions = compute_eval(routing)
speedup = (eval_aActions/eval_bActions)-1.0
print(speedup)
sample1 = np.hstack([users_at_node,r_serv_bAction,action,on_service,r_serv_aAction]) #migrate ok
sample2 = np.hstack([users_at_node,r_serv_bAction,0,on_service,r_serv_bAction]) # none bad
print(sample1)
print(sample2)
dftest = pd.DataFrame(columns=columns[0:-1])
dftest.loc[len(dftest)] = sample1
dftest.loc[len(dftest)] = sample2
dftest
forecasting = model.predict(X = dftest)
print(forecasting)
#Given two actions, migrating to a better position or doing nothing, migrating is better!
importancia_predictores = pd.DataFrame(
{'predictor': df.drop(columns = "Fit").columns,
'importancia': model.feature_importances_}
)
importancia_predictores.sort_values('importancia', ascending=False)
df.head()
df.shape
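# Append static graph features (node degree and betweenness centrality) as constant columns,
# cross-joined so that every sample carries the same topology information.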
columns = ["degree_%s"%n for n in G.nodes()]
columns += ["centrality_%s"%n for n in G.nodes()]
node_degree = list(dict(G.degree()).values())
node_centrality = list(nx.betweenness_centrality(G).values())
dfConstant = pd.DataFrame(columns=columns)
dfConstant.loc[0] = node_degree+node_centrality
dfConstant
dfj = df.merge(dfConstant, how='cross')
X_train, X_test, y_train, y_test = train_test_split(
dfj.drop(columns = "Fit"),
dfj['Fit'],
random_state = 0)
print(len(X_train))
print(len(X_test))
model = RandomForestRegressor(
n_estimators = 10,
criterion = 'squared_error',
max_depth = None,
max_features = 'auto',
oob_score = False,
n_jobs = -1,
random_state = 123
)
model.fit(X_train, y_train)
forecasting = model.predict(X = X_test)
rmse = np.sqrt(mean_squared_error(
        y_true  = y_test,
        y_pred  = forecasting))  # square root of the MSE, so this really is the RMSE
print("The test error (rmse) is: %f"%rmse)
from sklearn.inspection import permutation_importance
importancia = permutation_importance(
estimator = model,
X = X_train,
y = y_train,
n_repeats = 5,
scoring = 'neg_root_mean_squared_error',
n_jobs = multiprocessing.cpu_count() - 1,
random_state = 123
)
# Store the results (mean and standard deviation) in a dataframe
df_importancia = pd.DataFrame(
{k: importancia[k] for k in ['importances_mean', 'importances_std']}
)
df_importancia['feature'] = X_train.columns
df_importancia.sort_values('importances_mean', ascending=False)
# Plot
fig, ax = plt.subplots(figsize=(5, 8))
df_importancia = df_importancia.sort_values('importances_mean', ascending=True)
ax.barh(
df_importancia['feature'],
df_importancia['importances_mean'],
xerr=df_importancia['importances_std'],
align='center',
alpha=0
)
ax.plot(
df_importancia['importances_mean'],
df_importancia['feature'],
marker="D",
linestyle="",
alpha=0.8,
color="r"
)
ax.set_title('Predictor importance (train)')
ax.set_xlabel('Increase in error after permutation');
<div>
<img src="..\Week 01\img\R_logo.svg" width="100"/>
</div>
<div style="line-height:600%;">
<font color=#1363E1 face="Britannic" size=10>
<div align=center>Variables</div>
</font>
</div>
<div style="line-height:300%;">
<font color=#9A0909 face="Britannic" size=6>
<div align=left>Variable Name</div>
</font>
</div>
A `variable` provides us with `named storage` that our programs can manipulate. A variable in R can store an atomic vector, a group of atomic vectors or a combination of many R objects.
1. A valid variable name consists of `letters`, `number`s and the `dot` or `underline` characters.
2. The variable name starts with a `letter` or the `dot not followed by a number`.
<div style="line-height:100%;">
<font color=black face="Britannic" size=5>
<div align=left>Example:</div>
</font>
</div>
```
# valid: Has letters, numbers, dot and underscore
var_name.2 = 1
print(var_name.2)
# Invalid: Has the character '%'. Only dot(.) and underscore allowed
var_name% = 1
print(var_name%)
# Invalid: Starts with a number
2var_name = 1
print(2var_name)
# Valid: Can start with a dot(.) but the dot(.)should not be followed by a number
.var_name = 1
print(.var_name)
var.name = 2
print(var.name)
# Invalid: The starting dot is followed by a number making it invalid
.2var_name = 1
print(.2var_name)
# Invalid: Starts with _ which is not valid
_var_name = 1
print(_var_name)
# Invalid: R is case-sensitive
Name = 1
print(name)
```
<div style="line-height:300%;">
<font color=#9A0909 face="Britannic" size=6>
<div align=left>Variable Assignment</div>
</font>
</div>
Variables can be `assigned values` using the `leftward`, `rightward` and `equal to` operators. The values of the variables can be printed using the `print()` or `cat()` function. **The cat() function combines multiple items into a continuous print output**.
---
<div style="line-height:100%;">
<font color=black face="Britannic" size=5>
<div align=left>Assignment Using Equal Operator:</div>
</font>
</div>
```
# Assignment using equal operator.
var_1 = c(0, 1, 2, 3)
print(var_1)
cat("var_1 is ", var_1 ,"\n")
```
---
<div style="line-height:100%;">
<font color=black face="Britannic" size=5>
<div align=left>Assignment Using Leftward Operator:</div>
</font>
</div>
```
# Assignment using leftward operator.
var_2 <- c(TRUE, 1)
print(var_2)
cat ("var_2 is ", var_2 ,"\n")
```
---
<div style="line-height:100%;">
<font color=black face="Britannic" size=5>
<div align=left>Assignment Using Rightward Operator:</div>
</font>
</div>
```
# Assignment using rightward operator.
c(TRUE, 1, "a") -> var_3
print(var_3)
cat ("var_3 is ", var_3 ,"\n")
```
>Note − The vectors `c(TRUE, 1)` and `c(TRUE, 1, "a")` have a mix of **logical**, **numeric** and **character** values.
<div style="line-height:300%;">
<font color=#9A0909 face="Britannic" size=6>
<div align=left>Data Type of a Variable</div>
</font>
</div>
In R, a variable itself is not declared with any data type; rather it `gets the data type` of the `R-object assigned` to it. So R is called a **dynamically typed language**, which means that we can change the data type of the same variable again and again while using it in a program.
```
var_x <- "Hello"
cat("The class of var_x is ",class(var_x),"\n")
var_x <- 34.5
cat(" Now the class of var_x is ",class(var_x),"\n")
var_x <- 27L
cat(" Next the class of var_x becomes ",class(var_x),"\n")
```
<div style="line-height:300%;">
<font color=#9A0909 face="Britannic" size=6>
<div align=left>Finding Variables</div>
</font>
</div>
To know all the variables currently available in the `workspace` we use the `ls() function`. Also the ls() function can use patterns to match the variable names.
```
print(ls())
```
>Note − This is a sample output; it depends on which variables are declared in your environment. The `ls() function` can use patterns to match the variable names.
```
# List the variables starting with the pattern "var".
print(ls(pattern = "var"))
```
The variables `starting with dot(.)` are `hidden`; they can be listed by passing the `all.names = TRUE` argument to the `ls() function`.
```
.x <- 2
print(.x)
print(ls())
print(ls(all.names = TRUE))
```
<div style="line-height:300%;">
<font color=#9A0909 face="Britannic" size=6>
<div align=left>Deleting Variables</div>
</font>
</div>
Variables can be deleted by using the `rm() function`. Below we delete the variable var_1. On printing the value of the variable, an error is thrown.
```
var_1 = 1
print("print var_1:")
print(var_1)
print("Remove var_1")
rm(var_1)
print("print var_1")
print(var_1)
```
All the variables can be deleted by using the `rm()` and `ls()` functions together.
```
rm(list = ls())
print(ls())
a = 1
b = 2
c = 3
rm(b, c)
print(a)
print(b)
print(c)
```
<div style="line-height:300%;">
<font color=#9A0909 face="Sriracha" size=6>
<div align=left>R Reserved Words</div>
</font>
</div>
<div style="line-height:200%;">
<font color=black face="Candara" size=4>
<div align=left>
Reserved words in R programming are a set of words that have special meaning and cannot be used as an identifier (variable name, function name etc).
Here is a list of reserved words in the R’s parser.</div>
</font>
</div>
<div>
<img src="..\Week 02\img\res_words.png" width="500"/>
</div>
<div style="line-height:200%;">
<font color=black face="Candara" size=4>
<div align=left>
This list can be viewed by typing <mark>help(reserved)</mark> or <mark>?reserved</mark> at the R command prompt as follows.</div>
</font>
</div>
```
?reserved
```
<div style="line-height:200%;">
<font color=black face="Candara" size=4>
<div align=left>
Among these words, <mark>if, else, repeat, while, function, for, in, next and break</mark> are used for conditions, loops and user defined functions. They form the basic building blocks of programming in R.
<mark>TRUE and FALSE</mark> are the logical constants in R.
<mark>NULL</mark> represents the absence of a value or an undefined value.
<mark>Inf</mark> is for “Infinity”, for example when 1 is divided by 0 whereas NaN is for “Not a Number”, for example when 0 is divided by 0.
<mark>NA</mark> stands for “Not Available” and is used to represent missing values.
<ins>R is a case-sensitive language, which means that TRUE and True are not the same. While the first one is a reserved word denoting a logical constant in R, the latter can be used as a variable name.</ins>
</div>
</font>
</div>
```
TRUE <- 1   # Error: TRUE is a reserved word and cannot be used as a variable name
True <- 1   # Works: 'True' is not reserved because R is case sensitive
print(True)
```
<div style="line-height:300%;">
<font color=#9A0909 face="Sriracha" size=6>
<div align=left>Constants in R</div>
</font>
</div>
<div style="line-height:200%;">
<font color=black face="Candara" size=4>
<div align=left>
Constants, as the name suggests, are entities whose value cannot be altered. Basic types of constant are numeric constants and character constants.</div>
</font>
</div>
<div style="line-height:200%;">
<font color=#076F9A face="Candara" size=4>
<div align=left>
1. Numeric Constants
</div>
</font>
</div>
<div style="line-height:200%;">
<font color=black face="Candara" size=4>
<div align=left>
All numbers fall under this category. They can be of type <mark>integer</mark>, <mark>double</mark> or <mark>complex</mark>.
It can be checked with the <mark>typeof()</mark> function.
Numeric constants followed by <mark>L</mark> are regarded as <mark>integer</mark> and those followed by <mark>i</mark> are regarded as complex.</div>
</font>
</div>
```
class(5)
typeof(5)
typeof(5L)
typeof(5i)
```
<div style="line-height:200%;">
<font color=#076F9A face="Candara" size=4>
<div align=left>
2. Character Constants
</div>
</font>
</div>
<div style="line-height:200%;">
<font color=black face="Candara" size=4>
<div align=left>
Character constants can be represented using either single quotes <mark>(')</mark> or double quotes <mark>(")</mark> as delimiters.</div>
</font>
</div>
```
name <- "pooya"
typeof(name)
class("5")
typeof("5")
```
<div style="line-height:200%;">
<font color=#076F9A face="Candara" size=4>
<div align=left>
3. Built-in Constants
</div>
</font>
</div>
<div style="line-height:200%;">
<font color=black face="Candara" size=4>
<div align=left>
Some of the built-in constants defined in R, along with their values, are shown below.</div>
</font>
</div>
```
LETTERS
letters
pi
month.name
month.abb
```
<div style="line-height:200%;">
<font color=black face="Candara" size=4>
<div align=left>
But it is not good to rely on these, as they are implemented as variables whose values can be changed.</div>
</font>
</div>
```
rm(pi)
cat("'pi' Before Assignment:", pi, "\n")
pi <- 5
cat("'pi' After Assignment:", pi, "\n")
rm(pi)
cat("'pi' After Removal:", pi, "\n")
```
# Mass on a spring
The computation itself comes from Chabay and Sherwood's exercises VP07 and VP09 http://www.compadre.org/portal/items/detail.cfm?ID=5692#tabs. They have great learning activities in the assignments as well.
# Model the spring force
Let's call the current length of the spring $L$ and the relaxed length of the spring $L_0$. Then the magnitude of the spring force $F_{sp}$ is proportional to the stretch $\left(L-L_0\right)$: $$F_{sp}=k_s \left(L-L_0\right).$$ A more convenient way to express the force vector is in terms of a unit vector in the direction of $\vec{L}$. Recall that a unit vector is calculated as $\hat{L}=\vec{L}/|\vec{L}|$ and a convenient way to code this in Python is `Lhat = norm(L)`. Therefore, a vector expression for the spring force can be written in either of these two ways:
$$\vec{F}=-k_s \left(L-L_0\right)\hat{L} = k_s\left(L-L_0\right)\left(-\hat{L}\right).$$
```
## constants and data
g = 9.8
L0 = 0.26
ks = 1.8
dt = .02
## objects (origin is at ceiling)
ceiling = box(pos=vector(0,0,0), length=0.2, height=0.01, width=0.2)
ball = sphere(radius=0.025,color=color.orange, make_trail = True)
spring = helix(pos=ceiling.pos, color=color.cyan, thickness=.003, coils=40, radius=0.010)
# Set the ball position and make the spring axis be a vector from the ceiling.pos to the ball
ball.pos=vector(0.2,-0.3,0)
spring.axis = ball.pos-ceiling.pos
## initial values
ball.velocity = vector(.2,.5,.5)
ball.mass = 0.03
#Define the force of gravity, which is a constant vector pointing in the -y direction.
## improve the display
scene.autoscale = False ## turn off automatic camera zoom
scene.center = vector(0,-L0,0) ## move camera down
scene.waitfor('click') ## wait for a mouse click
## set up a graph
graph1=graph(title='y vs. t')
ygraph = gcurve(gdisplay=graph1,color=ball.color)
graph2=graph(title='v vs. t')
vgraph = gcurve(gdisplay=graph2,color=color.blue)
```
# Add time dependence
Add time dependence using the Euler-Cromer method we have used in the past: update force first, then velocity, then position.
```
## calculation loop
t = 0
while t <100:
# pause long enough to make this real-time
sleep(dt)
#Update net force on ball: gravity + spring
L =
Lhat =
s =
Fspring =
Fnet =
#Update velocity
ball.velocity =
#Update position (and re-draw spring)
ball.pos =
spring.axis =
t = t + dt
# update the graphs to show the y-component of position and velocity
ygraph.plot(pos=(t, ball.pos.y))
vgraph.plot(pos=(t, ball.velocity.y))
```
## Extend the model
1. Complete the above code. Create motion in 3D, rather than in a plane, to generate a 3D Lissajous figure.
2. Determine the correct initial velocity and ball.pos to make a "circular pendulum", in which the ball simply traces out a circle without bouncing vertically. Let the spring trace out a cone with an angle of 30 degrees from the vertical. Do any necessary calculations in python, but do not change anything in your loop!
3. You may find it most convenient to create a copy of your code for this next part. Add air resistance, using the relationship $$F_{drag}=\frac{1}{2}\rho C_d A v^2\,,$$ where for a sphere $C_d=0.5$. At what air density do you start noticing air drag? Use an initial condition that results in oscillations. Start with an air density of $\rho=1.2$ and increase it until the effects of drag are noticeable. (One possible way to code the drag force is sketched below.)
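A minimal sketch of how the drag force could be added inside the loop, assuming the constants `rho` and `Cd` are defined and using the ball's cross-sectional area for `A` (this is only one possible approach, not the assignment solution):
```
## Possible drag force for part 3 (assumes rho and Cd are defined as constants)
A = pi*ball.radius**2                    # cross-sectional area of the sphere
# drag has magnitude 0.5*rho*Cd*A*v**2 and points opposite the velocity
Fdrag = -0.5*rho*Cd*A*mag(ball.velocity)**2*norm(ball.velocity)
Fnet = Fnet + Fdrag                      # add drag to the other forces before updating velocity
```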
```
import numpy as np
import pandas as pd
import scipy as sp
import scipy.optimize   # make sp.optimize available explicitly
import scipy.integrate  # make sp.integrate available explicitly
import sklearn as sl
import seaborn as sns; sns.set()
import matplotlib as mpl
from sklearn.linear_model import LinearRegression
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import axes3d
from matplotlib import cm
%matplotlib inline
```
# Assignment 3: Find the regression
You are given data $x$ and $y$ as shown below and must answer four questions based on them. Assume you have a model such that $y=f(x)$, but you do not know $f$.
```
df = pd.read_pickle('ex1.gz')
sns.scatterplot(x='x',y='y',data=df)
plt.show()
df
```
## (A) Slope and intercept
Determine the slope of the data in the interval $[0,1.5]$ and the value of the intercept with the $y$ axis, that is, $f(0)=?$. What is the value of $r^2$?
```
k = df[(df.x >= 0) & (df.x <= 1.5)]
k
x1= k['x'].values.reshape(-1,1)
x2= k['y'].values.reshape(-1,1)
modelo = LinearRegression()
modelo.fit(x1,x2)
intercepto = modelo.intercept_
m = modelo.coef_
r2 = modelo.score(x1,x2)
print("Intercepto: ", intercepto)
print("Pendiente: ", m)
print("R^2: ", r2)
```
## (B) Polynomial regression
Suppose you want to perform the following polynomial regression,
$$y=\beta_1+\beta_2x+\beta_3x^2+\beta_4x^3+\beta_5x^4+\beta_6x^5.$$
Set up the cost function that lets you compute the coefficients and find $\beta_1,\dots,\beta_6$. What is the $r^2$?
Compute $f(0)$ and compare with the previous results.
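For reference, the cost function minimized in the cell below is the mean squared error written in matrix form (here $X$ denotes the matrix of powers of $x$ and $m$ the number of samples, notation introduced only for this note):
$$J(\beta)=\frac{1}{m}\left(X\beta-y\right)^{T}\left(X\beta-y\right).$$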
```
def L(x,A,b):
m,n = A.shape
X = np.matrix(x).T
DeltaB=(A*X-b)
return (DeltaB.T*DeltaB)[0,0]/m
Y = df.loc[:, ['y']]
Y
X = df.loc[:, ['x']].rename(columns={'x': 'x1'})
X.insert(0, 'x0', 1)
X['x2'] = X['x1']*X['x1']
X['x3'] = X['x1']**3
X['x4'] = X['x1']**4
X['x5'] = X['x1']**5
Xi = X.to_numpy()
Yi = Y.to_numpy()
op = sp.optimize.minimize(fun=L,x0=np.zeros(Xi.shape[1]), args = (Xi,Yi), tol=1e-10)
print("El valor para los coeficientes es:",op['x'])
print("El valor para f(0):",op['x'][0])
y = df["y"]
b = np.linspace(0,4,100)
def f(a,b,c,d,e,f,x):
return a*x**5 + b*x**4 + c*x**3 + d*x**2 + e*x + f
p = f(op['x'][5],op['x'][4],op['x'][3],op['x'][2],op['x'][1],op['x'][0],b)
r2 = 1-np.sum((p-y)**2)/np.sum((y-y.mean())**2)
r2
print("Es posible apreciar un resultado similar al metodo de la polinomial exacta, evidenciando que ambos metodos poseen una buena precision con solo algunas variaciones en cifras decimales")
```
## (C) Exact polynomial regression
It turns out that a polynomial regression can be done exactly. How? Suppose that instead of treating your problem as having $1$ variable ($x$) you treat it as having $n+1$, where $n$ is the order of the polynomial to fit. That is, your new variables will be $\{x_0,\,x_1,\,x_2,\,x_3,\dots,\,x_n\}$, defining $x_j=x^j$. Then, following the same procedure as for the multidimensional linear regression we carried out in the real-estate data exercise, you can find the values of the coefficients $\beta_1,\dots,\beta_6$. Find these values and compare with the results of section **(B)**.
Compute $f(0)$ and compare with the previous results.
> If you are wondering whether this is possible, the answer is yes. In fact, it can be extended to any set of functions, with $x_j=f_j(x)$, that forms a "linearly independent" set (I am getting ahead of *Fourier*!). For those who want to explore some mathematical curiosities: when $n+1$ equals the number of points or values of $x$ (all of them different), the matrix is always invertible and turns out to be the inverse of a Vandermonde matrix.
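In matrix form, the exact solution computed in the next cell is the usual least-squares normal equation, where $X$ is the matrix whose columns are the powers $x^j$:
$$\hat{\beta}=\left(X^{T}X\right)^{-1}X^{T}y.$$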
```
rt = np.linalg.inv(Xi.T @ Xi) @ Xi.T @ Yi
b0, b1, b2, b3, b4, b5 = rt
coefs = str(b0) +','+ str(b1) + ',' + str(b2) + ',' + str(b3) + ',' + str(b4) + ',' + str(b5)
print(f"los coeficientes son = {coefs}")
print(f"El valor de f(0) es :", rt[0])
print("Se confirma como el valor para f(0) resulta muy preciso al ser comparado con valor de la regresión polinomica y a su vez resulta ser exacto si analizamos lo esperado por la grafica ")
```
## (D) Regression to a theoretical model
Suppose your theoretical model is the following:
$$y=\frac{a}{\left[(x-b)^2+c\right]^\gamma}.$$
Find $a$, $b$, $c$ and $\gamma$.
Compute $f(0)$ and compare with the previous results.
```
def f(i,x):
return (i[0])/((x-i[1])**2 + i[2])**i[3]
def L(i2,x,y):
dy = f(i2,x) - y
return np.dot(dy,dy)/len(y)
x = df["x"]
op = sp.optimize.minimize(fun=L, x0=np.array([0,0,1,0]), args = (x,y), method='L-BFGS-B', tol=1e-8)
print("Los valores de a,b,c y omega son",op['x'])
print("El valor de f(0) es:", f(op.x,0))
print("Con respecto a los dos anteriores metodos utilizados, este nos arrojo un valor de 0.2987 evidenciando menor presicion y exactitud, por lo que podriamos decir que este metodo es el menos optimo")
```
# Assignment 4
Using the methods seen in class, solve the following two questions.
## (A) Integrals
* $\int_{0}^{1}x^{-1/2}\,\text{d}x$
* $\int_{0}^{\infty}e^{-x}\ln{x}\,\text{d}x$
* $\int_{0}^{\infty}\frac{\sin{x}}{x}\,\text{d}x$
```
x0 = 0.0000001
x1 = 1
xi = 0.0000001
xf =100
n = 1000001
def f1(x):
return x**(-1/2)
def f2(x):
return np.exp(-x)*np.log(x)
def f3(x):
return np.sin(x)/x
def integral(ini, fin, n, f1):
x, delta_x = np.linspace( ini, fin, num=n-1 , retstep=True )
return (delta_x/3)*( f1(x[0]) + 2*np.sum(f1(x[2:len(x)-1:2])) + 4*np.sum(f1(x[1::2])) + f1(x[-1]) )
f1_int = integral(x0, x1, n, f1)
print(f"The value of the first integral is: {f1_int}")
f2_int = integral(xi, xf, n, f2)
print(f"The value of the second integral is: {f2_int}")
f3_int = integral(xi, xf, n, f3)
print(f"The value of the third integral is: {f3_int}")
```
## (B) Fourier
Compute the fast Fourier transform of the function from **Assignment 3 (D)** on the interval $[0,4]$ (maximum $k$ of $2\pi n/L$ for $n=25$). Fit the Fourier transform to the data of **Assignment 3** using the exact regression method of **Assignment 3 (C)** and compare with the previous result. For both exercises, interpolate and plot to compare.
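For reference, the coefficients computed in the next cell follow the pattern below (with $L=4$ and $k_j = 2\pi j/L$), evaluated numerically with Simpson's rule; the series is then reconstructed as $\sum_j\left[a_j\cos(k_j x)+b_j\sin(k_j x)\right]$:
$$a_j=\int_{0}^{4} y(x)\cos\left(k_j x\right)\,\text{d}x,\qquad b_j=\int_{0}^{4} y(x)\sin\left(k_j x\right)\,\text{d}x.$$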
```
n = 25
global x, y
def a(j):
r = 2*np.pi*j/4
y2 = y*np.cos(r*x)
return sp.integrate.simpson(y2, x)
def b(j):
r = 2*np.pi*j/4
y2 = y*np.sin(r*x)
return sp.integrate.simpson(y2, x)
a0 = np. array([a(j) for j in range(n)])
b0 = np. array([b(j) for j in range(n)])
x_lim = np.linspace(0, 4, 10000)
r = np. array([2*np.pi*j/4 for j in range(n)])
y_lim = np.sum([(a0[j]*np.cos(r[j]*x_lim) + b0[j]*np.sin(r[j]*x_lim)) for j in range(n)], axis=0)
plt.plot(x_lim, (a0[0]*np.cos(r[0]*x_lim) + b0[0]*np.sin(r[0]*x_lim)), c="r", linewidth = 2.0)
plt.plot(x_lim, (a0[1]*np.cos(r[1]*x_lim) + b0[1]*np.sin(r[1]*x_lim)), c="g", linewidth = 2.0 )
plt.plot(x_lim, (a0[2]*np.cos(r[2]*x_lim) + b0[2]*np.sin(r[2]*x_lim)), c="b", linewidth = 2.0 )
plt.xlabel('x')
plt.ylabel('y')
plt.show()
plt.plot(x_lim, y_lim, c = "r", linewidth = 2.0)
plt.show()
```
<a href="https://colab.research.google.com/github/thehimalayanleo/Private-Machine-Learning/blob/master/Fed_Averaging_Pytorch.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import torch
import torch.optim as optim
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.datasets as datasets
import torchvision.transforms as transforms
import numpy as np
import copy
from torch.utils.data import DataLoader, Dataset
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 32, kernel_size=5)
self.conv2 = nn.Conv2d(32, 64, kernel_size=5)
self.fc = nn.Linear(1024, 10)
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
#x = x.view(-1, 784)
#print(x.shape)
x = self.pool(F.relu(self.conv1(x)))
#print(x.shape)
x = self.pool(F.relu(self.conv2(x)))
#print(x.shape)
x = x.view(-1, 1024)
#print(x.shape)
x = F.relu(self.fc(x))
#print(x.shape)
return F.log_softmax(x, dim=1)
num_epochs = 10
lr = 0.01
batch_size = 64
batch_size_test = 1000
log_interval = 10
use_gpu = 0
device = torch.device('cuda:{}'.format(use_gpu) if torch.cuda.is_available() and use_gpu != -1 else 'cpu')
random_seed = 1
torch.backends.cudnn.enabled = False
torch.manual_seed(random_seed)
test_loader = torch.utils.data.DataLoader(
torchvision.datasets.MNIST('../data/', train=False, download=True,
transform=torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))
])),
batch_size=batch_size_test, shuffle=False)
net = Net()
optimizer = optim.SGD(net.parameters(), lr=lr)
test_losses = []  # history of test-set losses recorded by test()
def test():
net.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
output = net(data)
#print(target.shape)
#print(output.shape)
test_loss+=F.nll_loss(output, target).item()
pred = output.data.max(1, keepdim=True)[1]
#print(pred.shape)
correct += pred.eq(target.data.view_as(pred)).sum()
test_loss/=len(test_loader.dataset)
test_losses.append(test_loss)
print('Test Set: Loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)'.format(test_loss, correct, len(test_loader.dataset),
100.*correct/len(test_loader.dataset)))
## Splitting dataset using indices
class DatasetSplit(Dataset):
def __init__(self, dataset, indices):
self.dataset = dataset
self.indices = list(indices)
def __len__(self):
return len(self.indices)
def __getitem__(self, item):
image, label = self.dataset[self.indices[item]]
return image, label
class LocalUpdate:
def __init__(self, dataset=None, indices=None):
#self.args = args
self.train_loader = DataLoader(DatasetSplit(dataset, indices), batch_size=batch_size, shuffle=True)
def train(self, net):
#net = Net()
epoch_losses = []
optimizer = optim.SGD(net.parameters(), lr=lr, momentum=0.5)
net.train()
for iters in range(num_epochs):
batch_loss = []
for batch_indx, (data, target) in enumerate(self.train_loader):
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = net(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
if batch_indx % log_interval == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\t Loss: {:.6f}'.format(iters, batch_indx*len(data), len(self.train_loader.dataset),
100.*batch_indx/len(self.train_loader), loss.item()))
batch_loss.append(loss.item())
epoch_losses.append(sum(batch_loss)/len(batch_loss))
return net.state_dict(), sum(epoch_losses)/len(epoch_losses)
def iid_dataset(dataset, num_users):
dict_users = {}
all_indices = [indx for indx in range(len(dataset))]
num_items = int(len(dataset)/num_users)
for user in range(num_users):
dict_users[user] = set(np.random.choice(all_indices, num_items, replace=False))
all_indices = list(set(all_indices)-dict_users[user])
return dict_users
dataset_train = datasets.MNIST('../data/mnist/', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))]))
num_users = 10
dict_users_mnist_iid = iid_dataset(dataset_train, num_users)
## Store global weights
global_net = Net().to(device)
global_weight = global_net.state_dict()
frac = 0.1
loss_training = []
for iter in range(num_epochs):
local_weights, local_loss = [], []
users_max = max(int(frac*num_users), 1)
user_indices = np.random.choice(range(num_users), users_max, replace=False)
for indx in user_indices:
local_update = LocalUpdate(dataset=dataset_train, indices=dict_users_mnist_iid[indx])
weight, loss = local_update.train(net=copy.deepcopy(global_net).to(device))
local_weights.append(copy.deepcopy(weight))
local_loss.append(copy.deepcopy(loss))
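  # FedAvg aggregation: element-wise average of the selected clients' model weights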
weight_average = copy.deepcopy(local_weights[0])
for key in weight_average.keys():
for indx in range(1, len(local_weights)):
weight_average[key] += local_weights[indx][key]
weight_average[key] = torch.div(weight_average[key], len(local_weights))
global_weight = weight_average
global_net.load_state_dict(global_weight)
loss_average = sum(local_loss)/len(local_loss)
print("Iteration {:3d}, Average Loss {:.2f}".format(iter, loss_average))
loss_training.append(loss_average)
import matplotlib
import matplotlib.pyplot as plt
plt.figure()
plt.grid()
plt.xlabel('Global Epochs')
plt.ylabel('Loss')
plt.plot(range(len(loss_training)), loss_training)
plt.savefig('first-fed-averaging.png')
def testing_script(global_net, test_loader):
global_net.eval()
test_loss, correct = 0, 0
for indx, (data, target) in enumerate(test_loader):
data, target = data.to(device), target.to(device)
output = global_net(data)
test_loss+=F.nll_loss(output, target, reduction='sum').item()
pred = output.data.max(1, keepdim=True)[1]
correct += pred.eq(target.data.view_as(pred)).long().cpu().sum()
test_loss/=len(test_loader.dataset)
accuracy = 100.00*correct/len(test_loader.dataset)
print('Test Set: Loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)'.format(test_loss, correct, len(test_loader.dataset),
accuracy))
testing_script(global_net, test_loader)
```
# Training a model on the UD Corpus
This notebook looks at how to train a model using the Universal Dependencies Corpus.
We will learn how to (1) download the UD Corpus, (2) train a tokenizer and a tagger model on a specific language and then (3) pack it all up in a zip model that we'll use locally.
This notebook is run on Ubuntu 18.04, with Python 3 installed. Assume we are working in the folder ``/work``. Also, let's assume that NLP-Cube is installed locally in ``/work/NLP-Cube``. If you do not have NLP-Cube installed locally (**not** using ``pip3 install nlpcube``, but directly cloning the github repo), please first follow the [local install guide](./2.%20Advanced%20usage%20-%20NLP-Cube%20local%20installation.ipynb).
## 1. Download the UD Corpus
Let's download the Universal Dependencies Corpus. Please see [universaldependencies.org](http://www.universaldependencies.org) for more info. At the time of writing, the latest version is 2.2. For other versions please see the UD website for updated download links. Now let's download v2.2:
```
! cd /work; curl --remote-name-all https://lindat.mff.cuni.cz/repository/xmlui/bitstream/handle/11234/1-2837/ud-treebanks-v2.2.tgz
```
This command should download the .tgz version of the UD corpus to ``/work/ud-treebanks-v2.2.tgz``. Now, we unzip it:
```
! tar -xzf /work/ud-treebanks-v2.2.tgz -C /work
```
This command extracted the UD Corpus to ``/work/ud-treebanks-v2.2``. We'll use ``UD_English-ParTUT`` for training because it's a smaller dataset. A look at its contents reveals what we need to train and test our model: the train, dev and test datasets, both in raw text (.txt files) and conllu format.
```
! ls -lh /work/ud-treebanks-v2.2/UD_English-ParTUT
```
## 2. Train a model
Next, let's train a model. We'll put everything in its own folder, say ``my_model-1.0``. We create the folder:
```
! mkdir /work/my_model-1.0
```
We also need an embeddings file to use in training. For this example we'll use FastText's wiki vector embeddings for English named ``wiki.en.vec``, downloaded from [here](https://github.com/facebookresearch/fastText/blob/master/pretrained-vectors.md) that we've put in ``/work/wiki.en.vec``.
We're ready to start training. Because training might take a long time, please open up a ``screen`` or put the train process in background in case your console disconnects for any reason.
We'll train first a tokenizer with default parameters and then a tagger with custom parameters.
### 2.1. Training a default tokenizer
Let's train our model! We simply cd to ``NLP-Cube`` and run a one-liner in a shell:
```
python3 /work/NLP-Cube/cube/main.py --train=tokenizer --train-file=/work/ud-treebanks-v2.2/UD_English-ParTUT/en_partut-ud-train.conllu --dev-file=/work/ud-treebanks-v2.2/UD_English-ParTUT/en_partut-ud-dev.conllu --raw-train-file=/work/ud-treebanks-v2.2/UD_English-ParTUT/en_partut-ud-train.txt --raw-dev-file=/work/ud-treebanks-v2.2/UD_English-ParTUT/en_partut-ud-dev.txt --embeddings /work/wiki.en.vec --store /work/my_model-1.0/tokenizer --batch-size 1000 --set-mem 8000 --autobatch --patience 1
```
Let's look at the switches in detail ( you can see the full help with the ``--help`` switch):
```
--train=TRAIN select which model to train: tagger, parser,
lemmatizer, tokenizer, mt, compound, ner
```
Because we want to train a tokenizer model, we'll pass ``--train=tokenizer``.
```
--train-file=TRAIN_FILE location of the train dataset
--dev-file=DEV_FILE location of the dev dataset
```
Here we pass the path to the train and dev **.conllu** files. In this case, the train file is ``/work/ud-treebanks-v2.2/UD_English-ParTUT/en_partut-ud-train.conllu``
Because we train a tokenizer that has to learn to transform raw text (seen as a string) into tokens (given in the .conllu format), we need to specify the --raw-train-file and --raw-dev-file as well. Note that these switches are only relevant for the tokenizer; all other tasks (lemmatizer, tagger, etc.) need only the .conllu files.
```
--raw-train-file=RAW_TRAIN_FILE location of the raw train file
--raw-dev-file=RAW_DEV_FILE location of the raw dev file
```
Next, we tell NLP-Cube where to find the embeddings file:
```
--embeddings=EMBEDDINGS location of the pre-computed word embeddings file
```
We then tell NLP-Cube where to store the trained model:
```
--store=OUTPUT_BASE output base for model location
```
Please note that ``--store`` is not a folder path, but rather a **prefix**. For example, ``--store /work/my_model-1.0/tokenizer`` will create in the ``my_model-1.0`` folder several files that begin with _tokenizer_, such as _tokenizer.encodings_, _tokenizer-tok.bestAcc_, etc.
Other switches include ``--patience``, which specifies the number of epochs without improvement on the dev set after which training stops (early stopping condition). In this example we'll set the patience to 1 epoch; for normal training we recommend setting patience anywhere from 20 to 100 epochs. We recommend using a **larger ``--batch-size`` of 1000** and ``--autobatch``ing to speed up training. For autobatching we need to reserve memory in advance with ``--set-mem`` (given as an int: 4000 = 4000 MB). If there is a GPU available, place the training on it with the ``--use-gpu`` flag. We'll see the ``--config`` flag when training a tagger with custom configuration. For now, not specifying a config will create a model with default network parameters.
```
--patience=ITTERS no improvement stopping condition
--config=CONFIG configuration file to load
--batch-size=BATCH_SIZE
--set-mem=MEMORY preallocate memory for batch training (default 2048)
--autobatch turn on/off dynet autobatching
--use-gpu turn on/off GPU support
```
Let's train the model and redirect all stdout messages to a log file with ``&> /work/my_model-1.0/tokenizer.log``. **Please ensure your current working dir is ``/work/NLP-Cube``, otherwise some relative imports won't load correctly**.
```
python3 /work/NLP-Cube/cube/main.py --train=tokenizer --train-file=/work/ud-treebanks-v2.2/UD_English-ParTUT/en_partut-ud-train.conllu --dev-file=/work/ud-treebanks-v2.2/UD_English-ParTUT/en_partut-ud-dev.conllu --raw-train-file=/work/ud-treebanks-v2.2/UD_English-ParTUT/en_partut-ud-train.txt --raw-dev-file=/work/ud-treebanks-v2.2/UD_English-ParTUT/en_partut-ud-dev.txt --embeddings /work/wiki.en.vec --store /work/my_model-1.0/tokenizer --batch-size 1000 --set-mem 8000 --autobatch --patience 1 &> /work/my_model-1.0/tokenizer.log
```
Training might take a long time. Check the log to see how training progresses. If successful, the last lines of the log file will look like:
```
Starting epoch 5
shuffling training data... done
training... 5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80 85 90 95 100 avg_loss=7.148099940487884 execution_time=461.953825712204
evaluating on devset... token_accuracy=99.65092779717068 , sentence_accuracy=100.0
Storing /work/my_model-1.0/tokenizer-ss.last
Storing /work/my_model-1.0/tokenizer-tok.last
Training is done with devset sentence tok = 99.74264705882354 and sentence = 100.0
```
This means that we have had a successful training run with a sentence tokenization accuracy of 99.74% and sentence segmentation accuracy of 100%.
A few files have been created in 'my_model-1.0'; the important ones are _tokenizer.log_ with the training progress, _tokenizer-tok.bestAcc_ which is the tokenization model, _tokenizer-ss.bestAcc_ which is the sentence segmentation model, _tokenizer.conf_, and the _tokenizer.encodings_ which contains word lists relevant to the models. We'll see how to use these files later on.
## 2.2. Training a custom tagger
Now let's train a tagger using a custom config. In ``NLP-Cube/examples`` there are a number of default conf files. The _tagger.conf_ looks like:
```
[TaggerConfig]
aux_softmax_layer = 2
input_dropout_prob = 0.33
input_size = 100
layer_dropouts = [0.5, 0.5, 0.5]
layers = [200, 200]
presoftmax_mlp_dropouts = [0.5]
presoftmax_mlp_layers = [500]
```
In this tutorial we won't go into details about the structure of the tagger, but let's say that we want to change the ``layers`` parameter, and instead of 2 BiLSTMs of size 200 we want to use 3 BiLSTMs of size 100. Copy the __tagger.conf__ file from the ``examples/`` folder to ``my_model-v1.0`` and change the layers line to ``layers = [100, 100, 100]``. Feel free to experiment with other values.
Now, let's run the tagger:
```
python3 /work/NLP-Cube/cube/main.py --train=tagger --train-file=/work/ud-treebanks-v2.2/UD_English-EWT/en_ewt-ud-train.conllu --dev-file=/work/ud-treebanks-v2.2/UD_English-EWT/en_ewt-ud-dev.conllu --embeddings /work/wiki.en.vec --store /work/my_model-1.0/tagger --patience 1 --config /work/my_model-1.0/tagger.conf --batch-size 1000 &> /work/my_model-1.0/tagger.log
```
Note the ``-config /work/my_model-1.0/tagger.conf`` parameter. For our tokenizer we didn't pass this parameter so NLP-Cube has created a default _tokenizer.conf_ file for us. Because now we specify the conf file, the tagger will read it and adjust its internal structure accordingly. Also, for this example we only want to specify the ``--batch-size`` and let NLP-Cube manage memory automatically. Training should finish with the log file ending with:
```
Training is done with devset accuracy=( UPOS=0.9549900596421471 , XPOS=0.950934393638171 , ATTRS=0.9617097415506958 )
```
Just like with the tokenizer, we have a number of _tagger*_ important files. Because we'll use a script that will automatically package them into a single model, we don't have to worry about them. Also, we can use them immediately, as shown in the next tutorial.
### Note about file-naming:
Please note that the ``--store`` prefix needs to end with ``/tokenizer`` when training a tokenizer, ``/tagger`` when training a tagger, ``/compound`` when training a compound word expander, ``/lemmatizer`` for lemmatization and ``/parsing`` for training a parser.
We need this convention ...
---
The next tutorial shows how to [use a locally trained model](./4.%20Advanced%20usage%20-%20Use%20a%20locally%20trained%20model.ipynb).
|
github_jupyter
|
! cd /work; curl --remote-name-all https://lindat.mff.cuni.cz/repository/xmlui/bitstream/handle/11234/1-2837/ud-treebanks-v2.2.tgz
! tar -xzf /work/ud-treebanks-v2.2.tgz -C /work
! ls -lh /work/ud-treebanks-v2.2/UD_English-ParTUT
! mkdir /work/my_model-1.0
python3 /work/NLP-Cube/cube/main.py --train=tokenizer --train-file=/work/ud-treebanks-v2.2/UD_English-ParTUT/en_partut-ud-train.conllu --dev-file=/work/ud-treebanks-v2.2/UD_English-ParTUT/en_partut-ud-dev.conllu --raw-train-file=/work/ud-treebanks-v2.2/UD_English-ParTUT/en_partut-ud-train.txt --raw-dev-file=/work/ud-treebanks-v2.2/UD_English-ParTUT/en_partut-ud-dev.txt --embeddings /work/wiki.en.vec --store /work/my_model-1.0/tokenizer --batch-size 1000 --set-mem 8000 --autobatch --patience 1
--train=TRAIN select which model to train: tagger, parser,
lemmatizer, tokenizer, mt, compound, ner
--train-file=TRAIN_FILE location of the train dataset
--dev-file=DEV_FILE location of the dev dataset
--raw-train-file=RAW_TRAIN_FILE location of the raw train file
--raw-dev-file=RAW_DEV_FILE location of the raw dev file
--embeddings=EMBEDDINGS location of the pre-computed word embeddings file
--store=OUTPUT_BASE output base for model location
--patience=ITTERS no improvement stopping condition
--config=CONFIG configuration file to load
--batch-size=BATCH_SIZE
--set-mem=MEMORY preallocate memory for batch training (default 2048)
--autobatch turn on/off dynet autobatching
--use-gpu turn on/off GPU support
python3 /work/NLP-Cube/cube/main.py --train=tokenizer --train-file=/work/ud-treebanks-v2.2/UD_English-ParTUT/en_partut-ud-train.conllu --dev-file=/work/ud-treebanks-v2.2/UD_English-ParTUT/en_partut-ud-dev.conllu --raw-train-file=/work/ud-treebanks-v2.2/UD_English-ParTUT/en_partut-ud-train.txt --raw-dev-file=/work/ud-treebanks-v2.2/UD_English-ParTUT/en_partut-ud-dev.txt --embeddings /work/wiki.en.vec --store /work/my_model-1.0/tokenizer --batch-size 1000 --set-mem 8000 --autobatch --patience 1 &> /work/my_model-1.0/tokenizer.log
Starting epoch 5
shuffling training data... done
training... 5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80 85 90 95 100 avg_loss=7.148099940487884 execution_time=461.953825712204
evaluating on devset... token_accuracy=99.65092779717068 , sentence_accuracy=100.0
Storing /work/my_model-1.0/tokenizer-ss.last
Storing /work/my_model-1.0/tokenizer-tok.last
Training is done with devset sentence tok = 99.74264705882354 and sentence = 100.0
[TaggerConfig]
aux_softmax_layer = 2
input_dropout_prob = 0.33
input_size = 100
layer_dropouts = [0.5, 0.5, 0.5]
layers = [200, 200]
presoftmax_mlp_dropouts = [0.5]
presoftmax_mlp_layers = [500]
python3 /work/NLP-Cube/cube/main.py --train=tagger --train-file=/work/ud-treebanks-v2.2/UD_English-EWT/en_ewt-ud-train.conllu --dev-file=/work/corpus/ud-treebanks-v2.2/UD_English-EWT/en_ewt-ud-dev.conllu --embeddings /work/wiki.en.vec --store /work/my_model-1.0/tagger --patience 1 --config /work/my_model-1.0/tagger.conf --batch-size 1000 &> /work/my_model-1.0/tagger.log
Training is done with devset accuracy=( UPOS=0.9549900596421471 , XPOS=0.950934393638171 , ATTRS=0.9617097415506958 )
| 0.471223 | 0.942242 |
# Chapter 2 Housing Example
## Data
```
import sys
sys.path.append('../src/')
from fetch_housing_data import fetch_housing_data,load_housing_data
from CombinedAttrAdders import CombinedAttributesAdder
fetch_housing_data()
housing = load_housing_data()
housing.head()
```
## Take a look
```
housing.info()
housing.ocean_proximity.value_counts()
housing.describe()
%matplotlib inline
import matplotlib.pyplot as plt
housing.hist(bins=50,figsize=(20,15))
plt.show()
```
## Create Test Set
Since the dataset is not very large, we need a stratified shuffle split so that the test set is representative of the income categories.
```
from sklearn.model_selection import train_test_split
train_set,test_set = train_test_split(housing,test_size=0.2,random_state=42)
import pandas as pd
import numpy as np
housing['income_cat'] = pd.cut(housing.median_income,
bins = [0,1.5,3,4.5,6,np.inf],
labels = np.arange(1,6,1))
housing['income_cat'].hist()
from sklearn.model_selection import StratifiedShuffleSplit
split = StratifiedShuffleSplit(n_splits=1,test_size=0.2, random_state=42)
for train_index,test_index in split.split(housing,housing.income_cat):
strat_train_set = housing.loc[train_index]
strat_test_set = housing.loc[test_index]
strat_test_set.income_cat.value_counts()/len(strat_test_set)
for set_ in (strat_train_set, strat_test_set):
set_.drop("income_cat", axis=1, inplace=True)
```
## Exploring the Data
Create a copy so that we do not harm the training set.
```
housing = strat_train_set.copy()
housing.plot(kind='scatter',x='longitude',y='latitude', alpha=0.1)
housing.plot(kind='scatter',x='longitude',y='latitude', alpha=0.4, s=housing.population/100,label='population',
figsize=(10,7),c='median_house_value',cmap=plt.get_cmap("jet"),colorbar=True)
plt.legend()
```
## Looking for Correlations
Since the dataset is not too large, you can use the `.corr()` method.
```
corr_matrix = housing.corr()
most_correlated_attr = corr_matrix.median_house_value.sort_values(ascending=False).head(4).index
most_correlated_attr
from pandas.plotting import scatter_matrix
scatter_matrix(housing[most_correlated_attr],figsize=(12,8))
housing.plot(kind='scatter',x='median_income',y='median_house_value',alpha=.1)
```
## Experimenting with Attribute Combinations
The total number of rooms in a district is not very useful if you don’t know how many households there are.
```
housing["rooms_per_household"] = housing["total_rooms"]/housing["households"]
housing["bedrooms_per_room"] = housing["total_bedrooms"]/housing["total_rooms"]
housing["population_per_household"]=housing["population"]/housing["households"]
corr_matrix = housing.corr()
corr_matrix['median_house_value'].sort_values(ascending=False)
```
The new bedrooms_per_room attribute is much more correlated with the median house value than the total number of rooms or bedrooms.
## Prepare the Data for Machine Learning Algorithms
```
housing = strat_train_set.drop("median_house_value", axis=1)
housing_labels = strat_train_set["median_house_value"].copy()
```
### Data Cleaning
Handle missing values by filling them with the median of each attribute.
```
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(strategy = 'median')
# but imputer can not handle categorical vars
housing_num = housing.drop('ocean_proximity',axis=1)
X=imputer.fit_transform(housing_num)
housing_tr = pd.DataFrame(X,index=housing_num.index.values,columns=housing_num.columns.values)
housing_tr.info()
```
### Handling Text and Categorical Attributes
```
housing_cat = housing[['ocean_proximity']]
housing_cat.head()
```
You can use the `OrdinalEncoder` class or the `OneHotEncoder` class.
Since there is no meaningful ordering between these categories, we use the one-hot encoder here; a quick sketch of the ordinal alternative follows the next two cells.
```
from sklearn.preprocessing import OneHotEncoder
encoder=OneHotEncoder()
housing_cat1hot=encoder.fit_transform(housing_cat)
housing_cat1hot
```
Some further methods are:
```
housing_cat1hot.toarray()
encoder.categories_
```
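For comparison, here is a quick sketch of the `OrdinalEncoder` alternative mentioned above (not used further in this workflow):
```
from sklearn.preprocessing import OrdinalEncoder
ordinal_encoder = OrdinalEncoder()
# Each category is mapped to an integer (0, 1, 2, ...), which implies an ordering
housing_cat_encoded = ordinal_encoder.fit_transform(housing_cat)
housing_cat_encoded[:5], ordinal_encoder.categories_
```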
### Feature Scaling
There are two common ways to get all attributes to have the same scale: min-max scaling and standardization.
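As a quick sketch of these two options, applied here to the imputed numerical frame `housing_tr` from earlier (in this workflow the actual scaling is done inside the pipeline built in the next section):
```
from sklearn.preprocessing import MinMaxScaler, StandardScaler
# Min-max scaling squeezes each attribute into the 0-1 range
housing_minmax = MinMaxScaler().fit_transform(housing_tr)
# Standardization gives each attribute zero mean and unit variance
housing_std = StandardScaler().fit_transform(housing_tr)
housing_minmax.min(), housing_minmax.max(), housing_std.mean().round(3), housing_std.std().round(3)
```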
## Transformation pipeline
```
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
num_pipeline = Pipeline([
('imputer',SimpleImputer(strategy='median')),
('attribs_adder', CombinedAttributesAdder()),
('std_scaler', StandardScaler()),
])
housing_num_tr = num_pipeline.fit_transform(housing_num)
housing_num_tr
from sklearn.compose import ColumnTransformer
num_attribs = list(housing_num)
cat_attribs = ["ocean_proximity"]
full_pipeline = ColumnTransformer([
("num", num_pipeline, num_attribs),
("cat", OneHotEncoder(), cat_attribs),
])
housing_prepared = full_pipeline.fit_transform(housing)
housing_prepared
```
## Select and Train a Model
### Training and Evaluating on the Training Set
```
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(housing_prepared, housing_labels)
from sklearn.metrics import mean_squared_error
housing_preds = lin_reg.predict(housing_prepared)
lin_mse=mean_squared_error(housing_labels,housing_preds)
lin_rmse = np.sqrt(lin_mse)
lin_rmse
```
This means the typical prediction error is about $68,628, which is not satisfying: the model is underfitting the data. There are three main options:
- reduce the constraints on the model (not applicable here, since the model is not regularized)
- try a more complex model
- feed the model more data or better features
```
from sklearn.tree import DecisionTreeRegressor
tree_reg = DecisionTreeRegressor()
tree_reg.fit(housing_prepared, housing_labels)
housing_preds = tree_reg.predict(housing_prepared)
tree_mse=mean_squared_error(housing_labels,housing_preds)
tree_rmse = np.sqrt(tree_mse)
tree_rmse
```
The model seems perfect (zero training error), but to be sure we should cross-validate it.
### Better Evaluation Using Cross-Validation
```
from sklearn.model_selection import cross_val_score
scores = cross_val_score(tree_reg, housing_prepared, housing_labels, scoring="neg_mean_squared_error", cv=10)
tree_rmse_scores = np.sqrt(-scores)
tree_rmse_scores
tree_rmse_scores.mean()
tree_rmse_scores.std()
```
That’s right: the Decision Tree model is overfitting so badly that it performs worse than the Linear Regression model. Let's try random forests.
```
from sklearn.ensemble import RandomForestRegressor
forest_reg = RandomForestRegressor()
forest_reg.fit(housing_prepared,housing_labels)
scores = cross_val_score(forest_reg, housing_prepared, housing_labels, scoring="neg_mean_squared_error", cv=10)
forest_rmse_scores = np.sqrt(-scores)
forest_rmse_scores
forest_rmse_scores.mean()
forest_rmse_scores.std()
```
The goal is to shortlist a few (two to five) promising models.
- You should save every model you experiment with so that you can come back easily to any model you want:
```
import joblib
joblib.dump(my_model, "my_model.pkl")
# and later...
my_model_loaded = joblib.load("my_model.pkl")
```
## Fine-Tune Your Model
Let’s assume that you now have a shortlist of promising models. You now need to fine-tune them. Let’s look at a few ways you can do that.
### Grid Search
```
from sklearn.model_selection import GridSearchCV
param_grid = [
{'n_estimators': [3, 10, 30], 'max_features': [2, 4, 6, 8]},
{'bootstrap': [False], 'n_estimators': [3, 10], 'max_features': [2, 3, 4]},
]
forest_reg = RandomForestRegressor()
grid_search = GridSearchCV(forest_reg, param_grid, cv=5, scoring='neg_mean_squared_error',return_train_score=True)
grid_search.fit(housing_prepared, housing_labels)
grid_search.best_params_
grid_search.best_estimator_
cvres = grid_search.cv_results_
for mean_score, params in zip(cvres["mean_test_score"], cvres["params"]):
print(np.sqrt(-mean_score), params)
```
### Randomized Search
When the hyperparameter search space is large, it is often preferable to use `RandomizedSearchCV` instead.
```
from sklearn.model_selection import RandomizedSearchCV
```
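A minimal sketch of how it could be applied to the same forest, reusing the import above; the distributions below are illustrative choices, not tuned values:
```
from scipy.stats import randint
param_distribs = {
    'n_estimators': randint(low=1, high=200),
    'max_features': randint(low=1, high=8),
}
rnd_search = RandomizedSearchCV(RandomForestRegressor(random_state=42),
                                param_distributions=param_distribs, n_iter=10, cv=5,
                                scoring='neg_mean_squared_error', random_state=42)
rnd_search.fit(housing_prepared, housing_labels)
rnd_search.best_params_
```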
### Analyze the Best Models and Their Errors
```
feature_importances = grid_search.best_estimator_.feature_importances_
feature_importances
```
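To make these raw scores easier to read, you can pair them with attribute names recovered from the pipeline. The sketch below assumes `CombinedAttributesAdder` appends three derived attributes in the order shown; adjust `extra_attribs` to match your transformer.
```
# Pair importance scores with attribute names: numerical + derived + one-hot categories
extra_attribs = ["rooms_per_hhold", "pop_per_hhold", "bedrooms_per_room"]  # assumed names/order
cat_encoder = full_pipeline.named_transformers_["cat"]
cat_one_hot_attribs = list(cat_encoder.categories_[0])
attributes = num_attribs + extra_attribs + cat_one_hot_attribs
sorted(zip(feature_importances, attributes), reverse=True)
```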
### Evaluate Your System on the Test Set
```
final_model = grid_search.best_estimator_
X_test = strat_test_set.drop("median_house_value", axis=1)
y_test = strat_test_set["median_house_value"].copy()
X_test_prepared = full_pipeline.transform(X_test)
final_predictions = final_model.predict(X_test_prepared)
final_mse = mean_squared_error(y_test, final_predictions)
final_rmse = np.sqrt(final_mse)
final_rmse
from scipy import stats
confidence = .95
squared_errors = (final_predictions-y_test)**2
np.sqrt(stats.t.interval(confidence,len(squared_errors)-1,
loc=squared_errors.mean(),
scale=stats.sem(squared_errors)))
```
# Exercises
1. Try a Support Vector Machine regressor (sklearn.svm.SVR) with various hyperparameters, such as kernel="linear" (with various values for the C hyperparameter) or kernel="rbf" (with various values for the C and gamma hyperparameters). Don’t worry about what these hyperparameters mean for now. How does the best SVR predictor perform?
```
# X = housing_prepared
# y = housing_labels
from sklearn.svm import SVR
svr_reg = SVR()
svr_reg.fit(housing_prepared,housing_labels)
mse_svr = mean_squared_error(y_true=housing_labels,
y_pred=svr_reg.predict(housing_prepared))
rmse_svr = np.sqrt(mse_svr)
rmse_svr
parameters = [
    {'kernel':['linear'],'C':[1,10,20]},
    {'kernel':['rbf'],'C':[1,10,20],'gamma':[1,10,20]}
]
# use SVR (not SVC), since this is a regression task
grid_search = GridSearchCV(estimator=SVR(),param_grid=parameters,cv=10,scoring='neg_mean_squared_error',return_train_score=True)
grid_search.fit(housing_prepared, housing_labels)
grid_search.best_params_
```
```
import keras
keras.__version__
```
# Text generation with LSTM
This notebook contains the code samples found in Chapter 8, Section 1 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.
----
[...]
## Implementing character-level LSTM text generation
Let's put these ideas in practice in a Keras implementation. The first thing we need is a lot of text data that we can use to learn a
language model. You could use any sufficiently large text file or set of text files -- Wikipedia, the Lord of the Rings, etc. In this
example we will use some of the writings of Nietzsche, the late-19th century German philosopher (translated to English). The language model
we will learn will thus be specifically a model of Nietzsche's writing style and topics of choice, rather than a more generic model of the
English language.
## Preparing the data
Let's start by downloading the corpus and converting it to lowercase:
```
import keras
import numpy as np
path = keras.utils.get_file(
'nietzsche.txt',
origin='https://s3.amazonaws.com/text-datasets/nietzsche.txt')
text = open(path).read().lower()
print('Corpus length:', len(text))
```
Next, we will extract partially-overlapping sequences of length `maxlen`, one-hot encode them and pack them in a 3D Numpy array `x` of
shape `(sequences, maxlen, unique_characters)`. Simultaneously, we prepare an array `y` containing the corresponding targets: the one-hot
encoded characters that come right after each extracted sequence.
```
# Length of extracted character sequences
maxlen = 60
# We sample a new sequence every `step` characters
step = 3
# This holds our extracted sequences
sentences = []
# This holds the targets (the follow-up characters)
next_chars = []
for i in range(0, len(text) - maxlen, step):
sentences.append(text[i: i + maxlen])
next_chars.append(text[i + maxlen])
print('Number of sequences:', len(sentences))
# List of unique characters in the corpus
chars = sorted(list(set(text)))
print('Unique characters:', len(chars))
# Dictionary mapping unique characters to their index in `chars`
char_indices = dict((char, chars.index(char)) for char in chars)
# Next, one-hot encode the characters into binary arrays.
print('Vectorization...')
x = np.zeros((len(sentences), maxlen, len(chars)), dtype=bool)
y = np.zeros((len(sentences), len(chars)), dtype=bool)
for i, sentence in enumerate(sentences):
for t, char in enumerate(sentence):
x[i, t, char_indices[char]] = 1
y[i, char_indices[next_chars[i]]] = 1
```
## Building the network
Our network is a single `LSTM` layer followed by a `Dense` classifier and softmax over all possible characters. But let us note that
recurrent neural networks are not the only way to do sequence data generation; 1D convnets also have proven extremely successful at it in
recent times.
```
from keras import layers
model = keras.models.Sequential()
model.add(layers.LSTM(128, input_shape=(maxlen, len(chars))))
model.add(layers.Dense(len(chars), activation='softmax'))
```
Since our targets are one-hot encoded, we will use `categorical_crossentropy` as the loss to train the model:
```
optimizer = keras.optimizers.RMSprop(lr=0.01)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
```
## Training the language model and sampling from it
Given a trained model and a seed text snippet, we generate new text by repeatedly:
* 1) Drawing from the model a probability distribution over the next character given the text available so far
* 2) Reweighting the distribution to a certain "temperature"
* 3) Sampling the next character at random according to the reweighted distribution
* 4) Adding the new character at the end of the available text
This is the code we use to reweight the original probability distribution coming out of the model,
and draw a character index from it (the "sampling function"):
```
def sample(preds, temperature=1.0):
preds = np.asarray(preds).astype('float64')
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
```
Finally, this is the loop where we repeatedly train the model and generate text. We start generating text using a range of different temperatures
after every epoch. This allows us to see how the generated text evolves as the model starts converging, as well as the impact of
temperature in the sampling strategy.
```
import random
import sys
for epoch in range(1, 60):
print('epoch', epoch)
# Fit the model for 1 epoch on the available training data
model.fit(x, y,
batch_size=128,
epochs=1)
# Select a text seed at random
start_index = random.randint(0, len(text) - maxlen - 1)
generated_text = text[start_index: start_index + maxlen]
print('--- Generating with seed: "' + generated_text + '"')
for temperature in [0.2, 0.5, 1.0, 1.2]:
print('------ temperature:', temperature)
sys.stdout.write(generated_text)
# We generate 400 characters
for i in range(400):
sampled = np.zeros((1, maxlen, len(chars)))
for t, char in enumerate(generated_text):
sampled[0, t, char_indices[char]] = 1.
preds = model.predict(sampled, verbose=0)[0]
next_index = sample(preds, temperature)
next_char = chars[next_index]
generated_text += next_char
generated_text = generated_text[1:]
sys.stdout.write(next_char)
sys.stdout.flush()
print()
```
As you can see, a low temperature results in extremely repetitive and predictable text, but where local structure is highly realistic: in
particular, all words (a word being a local pattern of characters) are real English words. With higher temperatures, the generated text
becomes more interesting, surprising, even creative; it may sometimes invent completely new words that sound somewhat plausible (such as
"eterned" or "troveration"). With a high temperature, the local structure starts breaking down and most words look like semi-random strings
of characters. Without a doubt, here 0.5 is the most interesting temperature for text generation in this specific setup. Always experiment
with multiple sampling strategies! A clever balance between learned structure and randomness is what makes generation interesting.
Note that by training a bigger model, longer, on more data, you can achieve generated samples that will look much more coherent and
realistic than ours. But of course, don't expect to ever generate any meaningful text, other than by random chance: all we are doing is
sampling data from a statistical model of which characters come after which characters. Language is a communication channel, and there is
a distinction between what communications are about, and the statistical structure of the messages in which communications are encoded. To
evidence this distinction, here is a thought experiment: what if human language did a better job at compressing communications, much like
our computers do with most of our digital communications? Then language would be no less meaningful, yet it would lack any intrinsic
statistical structure, thus making it impossible to learn a language model like we just did.
## Take aways
* We can generate discrete sequence data by training a model to predict the next token(s) given previous tokens.
* In the case of text, such a model is called a "language model" and could be based on either words or characters.
* Sampling the next token requires balance between adhering to what the model judges likely, and introducing randomness.
* One way to handle this is the notion of _softmax temperature_. Always experiment with different temperatures to find the "right" one; a small numerical sketch follows below.
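As that sketch, here is the same reweighting used in the `sample` function above, applied to a made-up three-character distribution:
```
import numpy as np

def reweight(preds, temperature):
    # log / renormalize, exactly as in sample() above
    preds = np.log(np.asarray(preds, dtype='float64')) / temperature
    exp_preds = np.exp(preds)
    return exp_preds / np.sum(exp_preds)

original = [0.5, 0.3, 0.2]  # made-up next-character probabilities
for t in [0.2, 0.5, 1.0, 1.2]:
    print(t, reweight(original, t).round(3))
# Low temperatures sharpen the distribution toward the most likely character;
# high temperatures flatten it toward uniform.
```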
**Chapter 7 – Ensemble Learning and Random Forests**
_This notebook contains all the sample code and exercise solutions for Chapter 7._
<table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/rickiepark/handson-ml2/blob/master/07_ensemble_learning_and_random_forests.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
# Setup
First, let's import a few modules, make Matplotlib plot figures inline, and prepare a function to save the figures. We also check that Python 3.5 or later is installed (the code may work with Python 2.x, but it is deprecated, so using Python 3 is strongly recommended), as well as Scikit-Learn ≥ 0.20.
```
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# Common imports
import numpy as np
import os
# To make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "ensembles"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("그림 저장:", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
```
# Voting Classifiers
```
from scipy.stats import binom
1-binom.cdf(499, 1000, 0.51), 1-binom.cdf(4999, 10000, 0.51)
'''
A single coin that comes up heads 51% of the time can be seen as a predictor that is right 51% of the time.
Can we do better than this single coin?
Suppose we gather 1,000 such coins and predict "heads" whenever a majority of them (a majority vote) come up heads.
The probability that heads is in the majority is then 0.7467502275561786: by pooling 1,000 coins,
the ability to predict heads rises to about 75%.
At the same time, by the law of large numbers, the overall proportion of heads still stays around 51%.
Don't be confused here: whether you toss 1,000 or 1,000,000 coins, only about 51% of the tosses will be heads.
But the probability that more than half of the coins show heads is about 75% (including the cases where the outcome is around 51%).
In other words, the probability that a majority of the classifiers votes "heads" is about 75%, which is a far
more reliable figure than the 51% of a single classifier.
Gathering classifiers that are only slightly better than random guessing (50%) can therefore give a much better prediction rate.
'''
heads_proba = 0.51
coin_tosses = (np.random.rand(10000, 10) < heads_proba).astype(np.int32)
cumulative_heads_ratio = np.cumsum(coin_tosses, axis=0) / np.arange(1, 10001).reshape(-1, 1)
plt.figure(figsize=(8,3.5))
plt.plot(cumulative_heads_ratio)
plt.plot([0, 10000], [0.51, 0.51], "k--", linewidth=2, label="51%")
plt.plot([0, 10000], [0.5, 0.5], "k-", label="50%")
plt.xlabel("Number of coin tosses")
plt.ylabel("Heads ratio")
plt.legend(loc="lower right")
plt.axis([0, 10000, 0.42, 0.58])
save_fig("law_of_large_numbers_plot")
plt.show()
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=500, noise=0.30, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
```
**Note**: We set `solver="lbfgs"`, `n_estimators=100`, and `gamma="scale"`, since these will become the default values in upcoming Scikit-Learn versions.
```
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
log_clf = LogisticRegression(solver="lbfgs", random_state=42)
rnd_clf = RandomForestClassifier(n_estimators=100, random_state=42)
svm_clf = SVC(gamma="scale", random_state=42)
voting_clf = VotingClassifier(
estimators=[('lr', log_clf), ('rf', rnd_clf), ('svc', svm_clf)],
voting='hard')
voting_clf.fit(X_train, y_train)
from sklearn.metrics import accuracy_score
for clf in (log_clf, rnd_clf, svm_clf, voting_clf):
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(clf.__class__.__name__, accuracy_score(y_test, y_pred))
```
**Note**: Since Scikit-Learn algorithms are updated from time to time, the results in this notebook may differ slightly from those in the book.
Soft voting:
```
log_clf = LogisticRegression(solver="lbfgs", random_state=42)
rnd_clf = RandomForestClassifier(n_estimators=100, random_state=42)
svm_clf = SVC(gamma="scale", probability=True, random_state=42)
voting_clf = VotingClassifier(
estimators=[('lr', log_clf), ('rf', rnd_clf), ('svc', svm_clf)],
voting='soft')
voting_clf.fit(X_train, y_train)
from sklearn.metrics import accuracy_score
for clf in (log_clf, rnd_clf, svm_clf, voting_clf):
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(clf.__class__.__name__, accuracy_score(y_test, y_pred))
```
# Bagging Ensembles
```
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
bag_clf = BaggingClassifier(
DecisionTreeClassifier(), n_estimators=500,
max_samples=100, bootstrap=True, random_state=42)
bag_clf.fit(X_train, y_train)
y_pred = bag_clf.predict(X_test)
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, y_pred))
tree_clf = DecisionTreeClassifier(random_state=42)
tree_clf.fit(X_train, y_train)
y_pred_tree = tree_clf.predict(X_test)
print(accuracy_score(y_test, y_pred_tree))
from matplotlib.colors import ListedColormap
def plot_decision_boundary(clf, X, y, axes=[-1.5, 2.45, -1, 1.5], alpha=0.5, contour=True):
x1s = np.linspace(axes[0], axes[1], 100)
x2s = np.linspace(axes[2], axes[3], 100)
x1, x2 = np.meshgrid(x1s, x2s)
X_new = np.c_[x1.ravel(), x2.ravel()]
y_pred = clf.predict(X_new).reshape(x1.shape)
custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0'])
plt.contourf(x1, x2, y_pred, alpha=0.3, cmap=custom_cmap)
if contour:
custom_cmap2 = ListedColormap(['#7d7d58','#4c4c7f','#507d50'])
plt.contour(x1, x2, y_pred, cmap=custom_cmap2, alpha=0.8)
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo", alpha=alpha)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs", alpha=alpha)
plt.axis(axes)
plt.xlabel(r"$x_1$", fontsize=18)
plt.ylabel(r"$x_2$", fontsize=18, rotation=0)
fix, axes = plt.subplots(ncols=2, figsize=(10,4), sharey=True)
plt.sca(axes[0])
plot_decision_boundary(tree_clf, X, y)
plt.title("Decision Tree", fontsize=14)
plt.sca(axes[1])
plot_decision_boundary(bag_clf, X, y)
plt.title("Decision Trees with Bagging", fontsize=14)
plt.ylabel("")
save_fig("decision_tree_without_and_with_bagging_plot")
plt.show()
```
# Random Forests
```
bag_clf = BaggingClassifier(
DecisionTreeClassifier(max_features="sqrt", max_leaf_nodes=16),
n_estimators=500, random_state=42)
bag_clf.fit(X_train, y_train)
y_pred = bag_clf.predict(X_test)
from sklearn.ensemble import RandomForestClassifier
rnd_clf = RandomForestClassifier(n_estimators=500, max_leaf_nodes=16, random_state=42)
rnd_clf.fit(X_train, y_train)
y_pred_rf = rnd_clf.predict(X_test)
np.sum(y_pred == y_pred_rf) / len(y_pred) # the predictions are almost identical
# Feature importances for the iris dataset
from sklearn.datasets import load_iris
iris = load_iris()
rnd_clf = RandomForestClassifier(n_estimators=500, random_state=42)
rnd_clf.fit(iris["data"], iris["target"])
for name, score in zip(iris["feature_names"], rnd_clf.feature_importances_):
print(name, score)
rnd_clf.feature_importances_
plt.figure(figsize=(6, 4))
for i in range(15):
tree_clf = DecisionTreeClassifier(max_leaf_nodes=16, random_state=42 + i)
indices_with_replacement = np.random.randint(0, len(X_train), len(X_train))
tree_clf.fit(X[indices_with_replacement], y[indices_with_replacement])
plot_decision_boundary(tree_clf, X, y, axes=[-1.5, 2.45, -1, 1.5], alpha=0.02, contour=False)
plt.show()
```
## Out-of-Bag (OOB) Evaluation
```
bag_clf = BaggingClassifier(
DecisionTreeClassifier(), n_estimators=500,
bootstrap=True, oob_score=True, random_state=40)
bag_clf.fit(X_train, y_train)
bag_clf.oob_score_
bag_clf.oob_decision_function_
from sklearn.metrics import accuracy_score
y_pred = bag_clf.predict(X_test)
accuracy_score(y_test, y_pred)
```
## Feature Importance
```
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1)
mnist.target = mnist.target.astype(np.uint8)
rnd_clf = RandomForestClassifier(n_estimators=100, random_state=42)
rnd_clf.fit(mnist["data"], mnist["target"])
def plot_digit(data):
image = data.reshape(28, 28)
plt.imshow(image, cmap = mpl.cm.hot,
interpolation="nearest")
plt.axis("off")
plot_digit(rnd_clf.feature_importances_)
cbar = plt.colorbar(ticks=[rnd_clf.feature_importances_.min(), rnd_clf.feature_importances_.max()])
cbar.ax.set_yticklabels(['Not important', 'Very important'])
save_fig("mnist_feature_importance_plot")
plt.show()
```
# AdaBoost
```
from sklearn.ensemble import AdaBoostClassifier
ada_clf = AdaBoostClassifier(
DecisionTreeClassifier(max_depth=1), n_estimators=200,
algorithm="SAMME.R", learning_rate=0.5, random_state=42)
ada_clf.fit(X_train, y_train)
plot_decision_boundary(ada_clf, X, y)
m = len(X_train)
fix, axes = plt.subplots(ncols=2, figsize=(10,4), sharey=True)
for subplot, learning_rate in ((0, 1), (1, 0.5)):
sample_weights = np.ones(m) / m
plt.sca(axes[subplot])
for i in range(5):
svm_clf = SVC(kernel="rbf", C=0.2, gamma=0.6, random_state=42)
svm_clf.fit(X_train, y_train, sample_weight=sample_weights * m)
y_pred = svm_clf.predict(X_train)
r = sample_weights[y_pred != y_train].sum() / sample_weights.sum() # equation 7-1
alpha = learning_rate * np.log((1 - r) / r) # equation 7-2
sample_weights[y_pred != y_train] *= np.exp(alpha) # equation 7-3
sample_weights /= sample_weights.sum() # normalization step
plot_decision_boundary(svm_clf, X, y, alpha=0.2)
plt.title("learning_rate = {}".format(learning_rate), fontsize=16)
if subplot == 0:
plt.text(-0.7, -0.65, "1", fontsize=14)
plt.text(-0.6, -0.10, "2", fontsize=14)
plt.text(-0.5, 0.10, "3", fontsize=14)
plt.text(-0.4, 0.55, "4", fontsize=14)
plt.text(-0.3, 0.90, "5", fontsize=14)
else:
plt.ylabel("")
save_fig("boosting_plot")
plt.show()
```
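For reference, the three commented lines inside the loop above implement the AdaBoost update rules, restated here in the notation of the code:
* $r = \dfrac{\sum_{i:\,\hat{y}^{(i)} \neq y^{(i)}} w^{(i)}}{\sum_{i} w^{(i)}}$ — the weighted error rate of the predictor (equation 7-1)
* $\alpha = \eta \, \log\dfrac{1 - r}{r}$ — the predictor weight, with $\eta$ the `learning_rate` (equation 7-2)
* $w^{(i)} \leftarrow w^{(i)}\,\exp(\alpha)$ for every misclassified instance, after which all weights are normalized so that $\sum_i w^{(i)} = 1$ (equation 7-3)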
# Gradient Boosting
```
np.random.seed(42)
X = np.random.rand(100, 1) - 0.5
y = 3*X[:, 0]**2 + 0.05 * np.random.randn(100)
from sklearn.tree import DecisionTreeRegressor
tree_reg1 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg1.fit(X, y)
y2 = y - tree_reg1.predict(X)
tree_reg2 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg2.fit(X, y2)
y3 = y2 - tree_reg2.predict(X)
tree_reg3 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg3.fit(X, y3)
X_new = np.array([[0.8]])
y_pred = sum(tree.predict(X_new) for tree in (tree_reg1, tree_reg2, tree_reg3))
y_pred
def plot_predictions(regressors, X, y, axes, label=None, style="r-", data_style="b.", data_label=None):
x1 = np.linspace(axes[0], axes[1], 500)
y_pred = sum(regressor.predict(x1.reshape(-1, 1)) for regressor in regressors)
plt.plot(X[:, 0], y, data_style, label=data_label)
plt.plot(x1, y_pred, style, linewidth=2, label=label)
if label or data_label:
plt.legend(loc="upper center", fontsize=16)
plt.axis(axes)
plt.figure(figsize=(11,11))
plt.subplot(321)
plot_predictions([tree_reg1], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h_1(x_1)$", style="g-", data_label="Training set")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.title("Residuals and tree predictions", fontsize=16)
plt.subplot(322)
plot_predictions([tree_reg1], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1)$", data_label="Training set")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.title("Ensemble predictions", fontsize=16)
plt.subplot(323)
plot_predictions([tree_reg2], X, y2, axes=[-0.5, 0.5, -0.5, 0.5], label="$h_2(x_1)$", style="g-", data_style="k+", data_label="Residuals")
plt.ylabel("$y - h_1(x_1)$", fontsize=16)
plt.subplot(324)
plot_predictions([tree_reg1, tree_reg2], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1) + h_2(x_1)$")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.subplot(325)
plot_predictions([tree_reg3], X, y3, axes=[-0.5, 0.5, -0.5, 0.5], label="$h_3(x_1)$", style="g-", data_style="k+")
plt.ylabel("$y - h_1(x_1) - h_2(x_1)$", fontsize=16)
plt.xlabel("$x_1$", fontsize=16)
plt.subplot(326)
plot_predictions([tree_reg1, tree_reg2, tree_reg3], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1) + h_2(x_1) + h_3(x_1)$")
plt.xlabel("$x_1$", fontsize=16)
plt.ylabel("$y$", fontsize=16, rotation=0)
save_fig("gradient_boosting_plot")
plt.show()
from sklearn.ensemble import GradientBoostingRegressor
gbrt = GradientBoostingRegressor(max_depth=2, n_estimators=3, learning_rate=1.0, random_state=42)
gbrt.fit(X, y)
gbrt_slow = GradientBoostingRegressor(max_depth=2, n_estimators=200, learning_rate=0.1, random_state=42)
gbrt_slow.fit(X, y)
fix, axes = plt.subplots(ncols=2, figsize=(10,4), sharey=True)
plt.sca(axes[0])
plot_predictions([gbrt], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="Ensemble predictions")
plt.title("learning_rate={}, n_estimators={}".format(gbrt.learning_rate, gbrt.n_estimators), fontsize=14)
plt.xlabel("$x_1$", fontsize=16)
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.sca(axes[1])
plot_predictions([gbrt_slow], X, y, axes=[-0.5, 0.5, -0.1, 0.8])
plt.title("learning_rate={}, n_estimators={}".format(gbrt_slow.learning_rate, gbrt_slow.n_estimators), fontsize=14)
plt.xlabel("$x_1$", fontsize=16)
save_fig("gbrt_learning_rate_plot")
plt.show()
```
## Gradient Boosting with Early Stopping
```
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=49)
gbrt = GradientBoostingRegressor(max_depth=2, n_estimators=120, random_state=42)
gbrt.fit(X_train, y_train)
# staged_predict returns an iterator over the predictions made by the ensemble
# at each stage of training (one tree, two trees, ...).
# staged_predict(X) : Predict regression target at each stage for X.
# Computing mean_squared_error(y_val, y_pred) for the predictions on X_val at each stage
# lets us find the stage where the validation MSE stops decreasing and starts rising.
errors = [mean_squared_error(y_val, y_pred)
for y_pred in gbrt.staged_predict(X_val)]
bst_n_estimators = np.argmin(errors) + 1
gbrt_best = GradientBoostingRegressor(max_depth=2, n_estimators=bst_n_estimators, random_state=42)
gbrt_best.fit(X_train, y_train)
min_error = np.min(errors)
plt.figure(figsize=(10, 4))
plt.subplot(121)
plt.plot(errors, "b.-")
plt.plot([bst_n_estimators, bst_n_estimators], [0, min_error], "k--")
plt.plot([0, 120], [min_error, min_error], "k--")
plt.plot(bst_n_estimators, min_error, "ko")
plt.text(bst_n_estimators, min_error*1.2, "Minimum", ha="center", fontsize=14)
plt.axis([0, 120, 0, 0.01])
plt.xlabel("Number of trees")
plt.ylabel("Error", fontsize=16)
plt.title("Validation error", fontsize=14)
plt.subplot(122)
plot_predictions([gbrt_best], X, y, axes=[-0.5, 0.5, -0.1, 0.8])
plt.title("Best model (%d trees)" % bst_n_estimators, fontsize=14)
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.xlabel("$x_1$", fontsize=16)
save_fig("early_stopping_gbrt_plot")
plt.show()
gbrt = GradientBoostingRegressor(max_depth=2, warm_start=True, random_state=42)
min_val_error = float("inf")
error_going_up = 0
for n_estimators in range(1, 120):
gbrt.n_estimators = n_estimators
gbrt.fit(X_train, y_train)
y_pred = gbrt.predict(X_val)
val_error = mean_squared_error(y_val, y_pred)
if val_error < min_val_error:
min_val_error = val_error
error_going_up = 0
else:
error_going_up += 1
if error_going_up == 5:
break # early stopping
print(gbrt.n_estimators)
print("Minimum validation MSE:", min_val_error)
```
## Using XGBoost
```
try:
    import xgboost
except ImportError as ex:
    print("Error: the xgboost library is not installed.")
    xgboost = None
if xgboost is not None:  # not in the book
    xgb_reg = xgboost.XGBRegressor(random_state=42)
    xgb_reg.fit(X_train, y_train)
    y_pred = xgb_reg.predict(X_val)
    val_error = mean_squared_error(y_val, y_pred)  # not in the book
    print("Validation MSE:", val_error)            # not in the book
if xgboost is not None:  # not in the book
    xgb_reg.fit(X_train, y_train,
                eval_set=[(X_val, y_val)], early_stopping_rounds=2)
    # Trains for up to num_boost_round iterations, stopping early if there is no
    # improvement on the eval set for early_stopping_rounds consecutive rounds;
    # early_stopping_rounds requires an eval_set to be specified
    # https://hwi-doc.tistory.com/entry/%EC%9D%B4%ED%95%B4%ED%95%98%EA%B3%A0-%EC%82%AC%EC%9A%A9%ED%95%98%EC%9E%90-XGBoost
    y_pred = xgb_reg.predict(X_val)
    val_error = mean_squared_error(y_val, y_pred)  # not in the book
    print("Validation MSE:", val_error)            # not in the book
%timeit xgboost.XGBRegressor().fit(X_train, y_train) if xgboost is not None else None
%timeit GradientBoostingRegressor().fit(X_train, y_train)
```
# Exercise Solutions
## 1. to 7.
See Appendix A.
## 8. Voting Classifier
Exercise: _Load the MNIST data and split it into a training set, a validation set, and a test set (e.g., use 40,000 instances for training, 10,000 for validation, and 10,000 for testing)._
The MNIST dataset was loaded earlier.
```
from sklearn.model_selection import train_test_split
X_train_val, X_test, y_train_val, y_test = train_test_split(
mnist.data, mnist.target, test_size=10000, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
X_train_val, y_train_val, test_size=10000, random_state=42)
```
Exercise: _Then train various classifiers, such as a Random Forest classifier, an Extra-Trees classifier, and an SVM._
```
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.svm import LinearSVC
from sklearn.neural_network import MLPClassifier
random_forest_clf = RandomForestClassifier(n_estimators=100, random_state=42)
extra_trees_clf = ExtraTreesClassifier(n_estimators=100, random_state=42)
svm_clf = LinearSVC(max_iter=100, tol=20, random_state=42)
mlp_clf = MLPClassifier(random_state=42)
estimators = [random_forest_clf, extra_trees_clf, svm_clf, mlp_clf]
for estimator in estimators:
print("Training the", estimator)
estimator.fit(X_train, y_train)
[estimator.score(X_val, y_val) for estimator in estimators]
```
The linear SVM performs far worse than the other classifiers. However, it might still help the voting classifier's performance, so let's keep it for now.
Exercise: _Next, try to combine them into an ensemble that outperforms each individual classifier on the validation set, using soft or hard voting._
```
from sklearn.ensemble import VotingClassifier
named_estimators = [
("random_forest_clf", random_forest_clf),
("extra_trees_clf", extra_trees_clf),
("svm_clf", svm_clf),
("mlp_clf", mlp_clf),
]
voting_clf = VotingClassifier(named_estimators)
# voting='hard' is the default.
voting_clf.fit(X_train, y_train)
voting_clf.score(X_val, y_val)
[estimator.score(X_val, y_val) for estimator in voting_clf.estimators_]
```
Let's remove the SVM model to see if performance improves. You can exclude an estimator by setting it to `None` with `set_params()`, like this:
```
voting_clf.set_params(svm_clf=None)
```
The list of estimators is updated:
```
voting_clf.estimators
```
However, the list of _trained_ estimators is not updated:
```
voting_clf.estimators_
```
So we can either retrain the `VotingClassifier`, or just remove the SVM model from the list of trained estimators:
```
del voting_clf.estimators_[2]
```
Let's evaluate the `VotingClassifier` again:
```
voting_clf.score(X_val, y_val)
```
Much better! The SVM model was hurting performance. Now let's try a soft voting classifier. We don't need to retrain the classifiers; we can simply set `voting` to `"soft"`:
```
voting_clf.voting = "soft"
voting_clf.score(X_val, y_val)
```
In this case, hard voting wins.
_Once you have found an ensemble, try it on the test set. How much better does it perform compared to the individual classifiers?_
```
voting_clf.voting = "hard"
voting_clf.score(X_test, y_test)
[estimator.score(X_test, y_test) for estimator in voting_clf.estimators_]
```
Here the voting classifier only very slightly reduces the error rate of the best model.
## 9. Stacking Ensemble
Exercise: _Run the individual classifiers from the previous exercise to make predictions on the validation set, and create a new training set with the resulting predictions: each training instance is a vector containing the set of predictions from all your classifiers for an image, and the target is the image's class. Train a classifier on this new training set._
```
X_val_predictions = np.empty((len(X_val), len(estimators)), dtype=np.float32)
for index, estimator in enumerate(estimators):
X_val_predictions[:, index] = estimator.predict(X_val)
X_val_predictions
rnd_forest_blender = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=42)
rnd_forest_blender.fit(X_val_predictions, y_val)
rnd_forest_blender.oob_score_
```
You could fine-tune this blender or try other types of blenders (e.g., an `MLPClassifier`), then select the best one using cross-validation, as always; a minimal sketch of that comparison is shown below.
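The sketch simply cross-validates two candidate blenders on the validation-set predictions; the specific candidates and `cv` value are illustrative choices:
```
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

for blender in (RandomForestClassifier(n_estimators=200, random_state=42),
                MLPClassifier(random_state=42)):
    scores = cross_val_score(blender, X_val_predictions, y_val, cv=3)
    print(blender.__class__.__name__, scores.mean())
```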
Exercise: _Congratulations, you have just trained a blender, and together with the classifiers it forms a stacking ensemble! Now evaluate the ensemble on the test set. For each image in the test set, make predictions with all your classifiers, then feed those predictions to the blender to get the ensemble's predictions. How does it compare to the voting classifier you trained earlier?_
```
X_test_predictions = np.empty((len(X_test), len(estimators)), dtype=np.float32)
for index, estimator in enumerate(estimators):
X_test_predictions[:, index] = estimator.predict(X_test)
y_pred = rnd_forest_blender.predict(X_test_predictions)
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_pred)
```
This stacking ensemble does not perform quite as well as the voting classifier we trained earlier, and it is not quite as good as the best individual classifier.
|
github_jupyter
|
# 파이썬 ≥3.5 필수
import sys
assert sys.version_info >= (3, 5)
# 사이킷런 ≥0.20 필수
import sklearn
assert sklearn.__version__ >= "0.20"
# 공통 모듈 임포트
import numpy as np
import os
# 노트북 실행 결과를 동일하게 유지하기 위해
np.random.seed(42)
# 깔끔한 그래프 출력을 위해
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# 그림을 저장할 위치
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "ensembles"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("그림 저장:", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
from scipy.stats import binom
1-binom.cdf(499, 1000, 0.51), 1-binom.cdf(4999, 10000, 0.51)
'''
앞면이 나올 확률이 51%인 동전 하나는 그 하나로서 앞면 예측률이 51%라고 할 수 있다.
이 동전의 예측률을 높일 순 없을까?
이 동전 1000개를 모아서 앞면이 다수(다수결 투표라고 한다면)가 나오면 앞면 이라고 한다고 하자.
이때 앞면이 다수가 될 확률은 0.7467502275561786이다. 즉, 1000개를 모으면 앞면으로 예측할 수 있는
능력이 약 75%까지 올라간다는 것을 알 수 있다.
또한 결과적으로 앞면이 우세할 확률은 큰수의 법칙에 따라 51%다.
여기서 헷갈리지 말아야하는 것이 1000개든 1000000개든 결과적으로 던지면 51% 정도만 앞면이 나올 것이다.
그러나 각 동전들이 절반 이상 앞면이 나올 확률은 75%라는 것이다. (결과가 51%일 확률도 포함해서)
이것은 각 분류기의 절반 이상이 앞면이 나온다고 할 확률이 75%란 뜻이고 하나일 때의 51%보다 훨씬
신뢰할 수 있는 수치로 올라감을 의미한다.
찍는 것(50%)보다 약간 좋은 성능의 분류기를 모으면 훨씬 더 좋은 예측률을 갖을 수 있는 것이다.
'''
heads_proba = 0.51
coin_tosses = (np.random.rand(10000, 10) < heads_proba).astype(np.int32)
cumulative_heads_ratio = np.cumsum(coin_tosses, axis=0) / np.arange(1, 10001).reshape(-1, 1)
plt.figure(figsize=(8,3.5))
plt.plot(cumulative_heads_ratio)
plt.plot([0, 10000], [0.51, 0.51], "k--", linewidth=2, label="51%")
plt.plot([0, 10000], [0.5, 0.5], "k-", label="50%")
plt.xlabel("Number of coin tosses")
plt.ylabel("Heads ratio")
plt.legend(loc="lower right")
plt.axis([0, 10000, 0.42, 0.58])
save_fig("law_of_large_numbers_plot")
plt.show()
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=500, noise=0.30, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
log_clf = LogisticRegression(solver="lbfgs", random_state=42)
rnd_clf = RandomForestClassifier(n_estimators=100, random_state=42)
svm_clf = SVC(gamma="scale", random_state=42)
voting_clf = VotingClassifier(
estimators=[('lr', log_clf), ('rf', rnd_clf), ('svc', svm_clf)],
voting='hard')
voting_clf.fit(X_train, y_train)
from sklearn.metrics import accuracy_score
for clf in (log_clf, rnd_clf, svm_clf, voting_clf):
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(clf.__class__.__name__, accuracy_score(y_test, y_pred))
log_clf = LogisticRegression(solver="lbfgs", random_state=42)
rnd_clf = RandomForestClassifier(n_estimators=100, random_state=42)
svm_clf = SVC(gamma="scale", probability=True, random_state=42)
voting_clf = VotingClassifier(
estimators=[('lr', log_clf), ('rf', rnd_clf), ('svc', svm_clf)],
voting='soft')
voting_clf.fit(X_train, y_train)
from sklearn.metrics import accuracy_score
for clf in (log_clf, rnd_clf, svm_clf, voting_clf):
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(clf.__class__.__name__, accuracy_score(y_test, y_pred))
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
bag_clf = BaggingClassifier(
DecisionTreeClassifier(), n_estimators=500,
max_samples=100, bootstrap=True, random_state=42)
bag_clf.fit(X_train, y_train)
y_pred = bag_clf.predict(X_test)
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, y_pred))
tree_clf = DecisionTreeClassifier(random_state=42)
tree_clf.fit(X_train, y_train)
y_pred_tree = tree_clf.predict(X_test)
print(accuracy_score(y_test, y_pred_tree))
from matplotlib.colors import ListedColormap
def plot_decision_boundary(clf, X, y, axes=[-1.5, 2.45, -1, 1.5], alpha=0.5, contour=True):
x1s = np.linspace(axes[0], axes[1], 100)
x2s = np.linspace(axes[2], axes[3], 100)
x1, x2 = np.meshgrid(x1s, x2s)
X_new = np.c_[x1.ravel(), x2.ravel()]
y_pred = clf.predict(X_new).reshape(x1.shape)
custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0'])
plt.contourf(x1, x2, y_pred, alpha=0.3, cmap=custom_cmap)
if contour:
custom_cmap2 = ListedColormap(['#7d7d58','#4c4c7f','#507d50'])
plt.contour(x1, x2, y_pred, cmap=custom_cmap2, alpha=0.8)
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo", alpha=alpha)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs", alpha=alpha)
plt.axis(axes)
plt.xlabel(r"$x_1$", fontsize=18)
plt.ylabel(r"$x_2$", fontsize=18, rotation=0)
fix, axes = plt.subplots(ncols=2, figsize=(10,4), sharey=True)
plt.sca(axes[0])
plot_decision_boundary(tree_clf, X, y)
plt.title("Decision Tree", fontsize=14)
plt.sca(axes[1])
plot_decision_boundary(bag_clf, X, y)
plt.title("Decision Trees with Bagging", fontsize=14)
plt.ylabel("")
save_fig("decision_tree_without_and_with_bagging_plot")
plt.show()
bag_clf = BaggingClassifier(
DecisionTreeClassifier(max_features="sqrt", max_leaf_nodes=16),
n_estimators=500, random_state=42)
bag_clf.fit(X_train, y_train)
y_pred = bag_clf.predict(X_test)
from sklearn.ensemble import RandomForestClassifier
rnd_clf = RandomForestClassifier(n_estimators=500, max_leaf_nodes=16, random_state=42)
rnd_clf.fit(X_train, y_train)
y_pred_rf = rnd_clf.predict(X_test)
np.sum(y_pred == y_pred_rf) / len(y_pred) # 거의 에측이 동일합니다.
# iris 데이터셋의 특성 중요도
from sklearn.datasets import load_iris
iris = load_iris()
rnd_clf = RandomForestClassifier(n_estimators=500, random_state=42)
rnd_clf.fit(iris["data"], iris["target"])
for name, score in zip(iris["feature_names"], rnd_clf.feature_importances_):
print(name, score)
rnd_clf.feature_importances_
plt.figure(figsize=(6, 4))
for i in range(15):
tree_clf = DecisionTreeClassifier(max_leaf_nodes=16, random_state=42 + i)
indices_with_replacement = np.random.randint(0, len(X_train), len(X_train))
tree_clf.fit(X[indices_with_replacement], y[indices_with_replacement])
plot_decision_boundary(tree_clf, X, y, axes=[-1.5, 2.45, -1, 1.5], alpha=0.02, contour=False)
plt.show()
bag_clf = BaggingClassifier(
DecisionTreeClassifier(), n_estimators=500,
bootstrap=True, oob_score=True, random_state=40)
bag_clf.fit(X_train, y_train)
bag_clf.oob_score_
bag_clf.oob_decision_function_
from sklearn.metrics import accuracy_score
y_pred = bag_clf.predict(X_test)
accuracy_score(y_test, y_pred)
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1)
mnist.target = mnist.target.astype(np.uint8)
rnd_clf = RandomForestClassifier(n_estimators=100, random_state=42)
rnd_clf.fit(mnist["data"], mnist["target"])
def plot_digit(data):
image = data.reshape(28, 28)
plt.imshow(image, cmap = mpl.cm.hot,
interpolation="nearest")
plt.axis("off")
plot_digit(rnd_clf.feature_importances_)
cbar = plt.colorbar(ticks=[rnd_clf.feature_importances_.min(), rnd_clf.feature_importances_.max()])
cbar.ax.set_yticklabels(['Not important', 'Very important'])
save_fig("mnist_feature_importance_plot")
plt.show()
from sklearn.ensemble import AdaBoostClassifier
ada_clf = AdaBoostClassifier(
DecisionTreeClassifier(max_depth=1), n_estimators=200,
algorithm="SAMME.R", learning_rate=0.5, random_state=42)
ada_clf.fit(X_train, y_train)
plot_decision_boundary(ada_clf, X, y)
m = len(X_train)
fix, axes = plt.subplots(ncols=2, figsize=(10,4), sharey=True)
for subplot, learning_rate in ((0, 1), (1, 0.5)):
sample_weights = np.ones(m) / m
plt.sca(axes[subplot])
for i in range(5):
svm_clf = SVC(kernel="rbf", C=0.2, gamma=0.6, random_state=42)
svm_clf.fit(X_train, y_train, sample_weight=sample_weights * m)
y_pred = svm_clf.predict(X_train)
r = sample_weights[y_pred != y_train].sum() / sample_weights.sum() # equation 7-1
alpha = learning_rate * np.log((1 - r) / r) # equation 7-2
sample_weights[y_pred != y_train] *= np.exp(alpha) # equation 7-3
sample_weights /= sample_weights.sum() # normalization step
plot_decision_boundary(svm_clf, X, y, alpha=0.2)
plt.title("learning_rate = {}".format(learning_rate), fontsize=16)
if subplot == 0:
plt.text(-0.7, -0.65, "1", fontsize=14)
plt.text(-0.6, -0.10, "2", fontsize=14)
plt.text(-0.5, 0.10, "3", fontsize=14)
plt.text(-0.4, 0.55, "4", fontsize=14)
plt.text(-0.3, 0.90, "5", fontsize=14)
else:
plt.ylabel("")
save_fig("boosting_plot")
plt.show()
np.random.seed(42)
X = np.random.rand(100, 1) - 0.5
y = 3*X[:, 0]**2 + 0.05 * np.random.randn(100)
from sklearn.tree import DecisionTreeRegressor
tree_reg1 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg1.fit(X, y)
y2 = y - tree_reg1.predict(X)
tree_reg2 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg2.fit(X, y2)
y3 = y2 - tree_reg2.predict(X)
tree_reg3 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg3.fit(X, y3)
X_new = np.array([[0.8]])
y_pred = sum(tree.predict(X_new) for tree in (tree_reg1, tree_reg2, tree_reg3))
y_pred
def plot_predictions(regressors, X, y, axes, label=None, style="r-", data_style="b.", data_label=None):
x1 = np.linspace(axes[0], axes[1], 500)
y_pred = sum(regressor.predict(x1.reshape(-1, 1)) for regressor in regressors)
plt.plot(X[:, 0], y, data_style, label=data_label)
plt.plot(x1, y_pred, style, linewidth=2, label=label)
if label or data_label:
plt.legend(loc="upper center", fontsize=16)
plt.axis(axes)
plt.figure(figsize=(11,11))
plt.subplot(321)
plot_predictions([tree_reg1], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h_1(x_1)$", style="g-", data_label="Training set")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.title("Residuals and tree predictions", fontsize=16)
plt.subplot(322)
plot_predictions([tree_reg1], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1)$", data_label="Training set")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.title("Ensemble predictions", fontsize=16)
plt.subplot(323)
plot_predictions([tree_reg2], X, y2, axes=[-0.5, 0.5, -0.5, 0.5], label="$h_2(x_1)$", style="g-", data_style="k+", data_label="Residuals")
plt.ylabel("$y - h_1(x_1)$", fontsize=16)
plt.subplot(324)
plot_predictions([tree_reg1, tree_reg2], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1) + h_2(x_1)$")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.subplot(325)
plot_predictions([tree_reg3], X, y3, axes=[-0.5, 0.5, -0.5, 0.5], label="$h_3(x_1)$", style="g-", data_style="k+")
plt.ylabel("$y - h_1(x_1) - h_2(x_1)$", fontsize=16)
plt.xlabel("$x_1$", fontsize=16)
plt.subplot(326)
plot_predictions([tree_reg1, tree_reg2, tree_reg3], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1) + h_2(x_1) + h_3(x_1)$")
plt.xlabel("$x_1$", fontsize=16)
plt.ylabel("$y$", fontsize=16, rotation=0)
save_fig("gradient_boosting_plot")
plt.show()
from sklearn.ensemble import GradientBoostingRegressor
gbrt = GradientBoostingRegressor(max_depth=2, n_estimators=3, learning_rate=1.0, random_state=42)
gbrt.fit(X, y)
gbrt_slow = GradientBoostingRegressor(max_depth=2, n_estimators=200, learning_rate=0.1, random_state=42)
gbrt_slow.fit(X, y)
fix, axes = plt.subplots(ncols=2, figsize=(10,4), sharey=True)
plt.sca(axes[0])
plot_predictions([gbrt], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="Ensemble predictions")
plt.title("learning_rate={}, n_estimators={}".format(gbrt.learning_rate, gbrt.n_estimators), fontsize=14)
plt.xlabel("$x_1$", fontsize=16)
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.sca(axes[1])
plot_predictions([gbrt_slow], X, y, axes=[-0.5, 0.5, -0.1, 0.8])
plt.title("learning_rate={}, n_estimators={}".format(gbrt_slow.learning_rate, gbrt_slow.n_estimators), fontsize=14)
plt.xlabel("$x_1$", fontsize=16)
save_fig("gbrt_learning_rate_plot")
plt.show()
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=49)
gbrt = GradientBoostingRegressor(max_depth=2, n_estimators=120, random_state=42)
gbrt.fit(X_train, y_train)
# staged_predict는 훈련의 각 단계(트리 한 개, 두 개...)에서 앙상블에 의해 만들어진
# 예측기를 순회하는 반복자(iterator)를 반환합니다.
# staged_predict(X) : Predict regression target at each stage for X.
# 각 훈련 단계에서의 X_val에 따른 y_pred를 구한 다음 mean_squared_error(y_val, y_pred)를 계산하면
# mse가 내려가다가 올라가는 훈련 단계를 구할 수 있다!!!
errors = [mean_squared_error(y_val, y_pred)
for y_pred in gbrt.staged_predict(X_val)]
bst_n_estimators = np.argmin(errors) + 1
gbrt_best = GradientBoostingRegressor(max_depth=2, n_estimators=bst_n_estimators, random_state=42)
gbrt_best.fit(X_train, y_train)
min_error = np.min(errors)
plt.figure(figsize=(10, 4))
plt.subplot(121)
plt.plot(errors, "b.-")
plt.plot([bst_n_estimators, bst_n_estimators], [0, min_error], "k--")
plt.plot([0, 120], [min_error, min_error], "k--")
plt.plot(bst_n_estimators, min_error, "ko")
plt.text(bst_n_estimators, min_error*1.2, "Minimum", ha="center", fontsize=14)
plt.axis([0, 120, 0, 0.01])
plt.xlabel("Number of trees")
plt.ylabel("Error", fontsize=16)
plt.title("Validation error", fontsize=14)
plt.subplot(122)
plot_predictions([gbrt_best], X, y, axes=[-0.5, 0.5, -0.1, 0.8])
plt.title("Best model (%d trees)" % bst_n_estimators, fontsize=14)
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.xlabel("$x_1$", fontsize=16)
save_fig("early_stopping_gbrt_plot")
plt.show()
gbrt = GradientBoostingRegressor(max_depth=2, warm_start=True, random_state=42)
min_val_error = float("inf")
error_going_up = 0
for n_estimators in range(1, 120):
gbrt.n_estimators = n_estimators
gbrt.fit(X_train, y_train)
y_pred = gbrt.predict(X_val)
val_error = mean_squared_error(y_val, y_pred)
if val_error < min_val_error:
min_val_error = val_error
error_going_up = 0
else:
error_going_up += 1
if error_going_up == 5:
break # early stopping
print(gbrt.n_estimators)
print("Minimum validation MSE:", min_val_error)
try:
import xgboost
except ImportError as ex:
print("에러: xgboost 라이브러리 설치되지 않았습니다.")
xgboost = None
if xgboost is not None:  # not in the book
xgb_reg = xgboost.XGBRegressor(random_state=42)
xgb_reg.fit(X_train, y_train)
y_pred = xgb_reg.predict(X_val)
    val_error = mean_squared_error(y_val, y_pred)  # not in the book
    print("Validation MSE:", val_error)  # not in the book
if xgboost is not None:  # not in the book
xgb_reg.fit(X_train, y_train,
eval_set=[(X_val, y_val)], early_stopping_rounds=2)
    # Boosting runs for up to num_boost_round iterations, but stops early if the
    # validation score does not improve for early_stopping_rounds consecutive rounds.
    # Using early_stopping_rounds requires specifying an eval_set.
    # https://hwi-doc.tistory.com/entry/%EC%9D%B4%ED%95%B4%ED%95%98%EA%B3%A0-%EC%82%AC%EC%9A%A9%ED%95%98%EC%9E%90-XGBoost
y_pred = xgb_reg.predict(X_val)
    val_error = mean_squared_error(y_val, y_pred)  # not in the book
    print("Validation MSE:", val_error)  # not in the book
%timeit xgboost.XGBRegressor().fit(X_train, y_train) if xgboost is not None else None
%timeit GradientBoostingRegressor().fit(X_train, y_train)
from sklearn.model_selection import train_test_split
X_train_val, X_test, y_train_val, y_test = train_test_split(
mnist.data, mnist.target, test_size=10000, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
X_train_val, y_train_val, test_size=10000, random_state=42)
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.svm import LinearSVC
from sklearn.neural_network import MLPClassifier
random_forest_clf = RandomForestClassifier(n_estimators=100, random_state=42)
extra_trees_clf = ExtraTreesClassifier(n_estimators=100, random_state=42)
svm_clf = LinearSVC(max_iter=100, tol=20, random_state=42)
mlp_clf = MLPClassifier(random_state=42)
estimators = [random_forest_clf, extra_trees_clf, svm_clf, mlp_clf]
for estimator in estimators:
print("Training the", estimator)
estimator.fit(X_train, y_train)
[estimator.score(X_val, y_val) for estimator in estimators]
from sklearn.ensemble import VotingClassifier
named_estimators = [
("random_forest_clf", random_forest_clf),
("extra_trees_clf", extra_trees_clf),
("svm_clf", svm_clf),
("mlp_clf", mlp_clf),
]
voting_clf = VotingClassifier(named_estimators)
# voting='hard' is the default.
voting_clf.fit(X_train, y_train)
voting_clf.score(X_val, y_val)
[estimator.score(X_val, y_val) for estimator in voting_clf.estimators_]
voting_clf.set_params(svm_clf=None)
voting_clf.estimators
voting_clf.estimators_
del voting_clf.estimators_[2]
voting_clf.score(X_val, y_val)
voting_clf.voting = "soft"
voting_clf.score(X_val, y_val)
voting_clf.voting = "hard"
voting_clf.score(X_test, y_test)
[estimator.score(X_test, y_test) for estimator in voting_clf.estimators_]
X_val_predictions = np.empty((len(X_val), len(estimators)), dtype=np.float32)
for index, estimator in enumerate(estimators):
X_val_predictions[:, index] = estimator.predict(X_val)
X_val_predictions
rnd_forest_blender = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=42)
rnd_forest_blender.fit(X_val_predictions, y_val)
rnd_forest_blender.oob_score_
X_test_predictions = np.empty((len(X_test), len(estimators)), dtype=np.float32)
for index, estimator in enumerate(estimators):
X_test_predictions[:, index] = estimator.predict(X_test)
y_pred = rnd_forest_blender.predict(X_test_predictions)
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_pred)
# Data Visualization
## Specifications
This workflow should produce three publication-quality visualizations:
1. Daily-mean radiative fluxes at top of atmosphere and surface from ERA5 for 22 September 2020.
2. Intrinsic atmospheric radiative properties (reflectivity, absorptivity, and transmissivity) based on the Stephens et al. (2015) model.
3. Potential surface-reflected outgoing solar radiation, actual surface-reflected outgoing solar radiation, and their difference (gain).
## Preliminaries
### Requirements
* A Google Cloud project with Cloud Storage enabled ([Create new account](https://cloud.google.com/))
* Python packages. See `environments` directory for platform and notebook specific environment files.
### Imports
```
from utils import check_environment
check_environment("visualize")
import logging
import os
import cartopy.crs as ccrs
from cartopy.mpl.ticker import LongitudeFormatter, LatitudeFormatter
from google.cloud import storage
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib import colors
import numpy as np
import xarray as xr
```
### Setup
```
data_dir = "../assets"
# Xarray configuration
xr.set_options(keep_attrs=True)
# Logging configuration
logging.basicConfig(filename="visualize.log", filemode="w", level=logging.INFO)
```
## Functions
```
def get_data_gcs(bucket_name, file_name, file_path="."):
"""Download a dataset for a single date from Google Cloud Storage.
Args:
bucket_name: Google Cloud Storage bucket to download from.
file_name: name of file to download from gcs.
file_path: local path to download the file.
Returns:
Nothing; downloads data from Google Cloud Storage.
"""
if os.path.exists(os.path.join(file_path, file_name)):
logging.info(f"{file_name} already exists locally; skipping GCS download.")
else:
client = storage.Client()
bucket = client.get_bucket(bucket_name)
blob = bucket.blob(file_name)
blob.download_to_filename(filename=os.path.join(file_path, file_name))
def plot_geodata(ax, lats, lons, data, levels, vmax, cmap, title):
"""Visualize geographic data."""
lon_formatter = LongitudeFormatter(zero_direction_label=True)
lat_formatter = LatitudeFormatter()
ax.contourf(lons, lats, data, levels, vmin=0, vmax=vmax,
cmap=cmap, transform=ccrs.PlateCarree())
ax.set_title(title)
ax.coastlines("50m", linewidth=0.75)
ax.set_xticks([-180, -120, -60, 0, 60, 120, 180], crs=ccrs.PlateCarree())
ax.set_yticks([-90, -60, -30, 0, 30, 60, 90], crs=ccrs.PlateCarree())
ax.xaxis.set_major_formatter(lon_formatter)
ax.yaxis.set_major_formatter(lat_formatter)
```
## Workflow
### Download Data from GCS
```
get_data_gcs("era5-single-level-daily", "20200922.nc", data_dir)
get_data_gcs("rom-input", "rom_analysis.nc", data_dir)
```
### Figure 1: Boundary Fluxes
1. Top of atmosphere incoming solar radiation
1. Surface downwelling solar radiation
1. Surface upwelling solar radiation
1. Top of atmosphere outgoing solar radiation
```
era5_daily = xr.open_dataset(os.path.join(data_dir, "20200922.nc"))
lons = era5_daily.longitude.data
lats = era5_daily.latitude.data
tisr = era5_daily.tisr.isel(time=0).data
ssrd = era5_daily.ssrd.isel(time=0).data
tosr = era5_daily.tosr.isel(time=0).data
ssru = era5_daily.ssru.isel(time=0).data
fig, axs = plt.subplots(4, 1, figsize=(11,14),
subplot_kw=dict(projection=ccrs.PlateCarree()))
cmap = cm.viridis
ax1 = axs[0]
ax2 = axs[1]
ax3 = axs[2]
ax4 = axs[3]
plot_geodata(ax1, lats, lons, tisr, 40, 400, cmap, "Top of Atmosphere - Incoming")
plot_geodata(ax2, lats, lons, ssrd, 40, 400, cmap, "Surface - Downwelling")
plot_geodata(ax3, lats, lons, ssru, 40, 400, cmap, "Surface - Upwelling")
plot_geodata(ax4, lats, lons, tosr, 40, 400, cmap, "Top of Atmosphere - Outgoing")
fig.tight_layout(pad=2.0)
fig.suptitle("22 September 2020",
y=0.999, fontweight="bold")
cbar = fig.colorbar(cm.ScalarMappable(norm=colors.Normalize(vmin=0, vmax=400),
cmap=cmap),
ax=axs, orientation='horizontal', fraction=.1, shrink=0.45,
pad=0.03, extend="max")
cbar.ax.xaxis.set_ticks(np.arange(0, 401, 50), minor=True)
cbar.set_label("Daily-Mean Solar Radiation ($\mathrm{W m^{-2}}$)")
plt.savefig(os.path.join(data_dir, "RER1_Fig1.png"),
dpi=300, bbox_inches="tight", facecolor="white")
```
### Figure 2: Intrinsic Atmospheric Radiative Properties
1. Reflectivity
2. Absorptivity
3. Transmissivity
```
rom_analysis = xr.open_dataset(os.path.join(data_dir, "rom_analysis.nc"))
lons = rom_analysis.longitude.data
lats = rom_analysis.latitude.data
reflectivity = rom_analysis.r.data
transmissivity = rom_analysis.t.data
absorptivity = rom_analysis.a.data
{"transmittance (min, max)": [transmissivity.min(), transmissivity.max()],
"absorptance (min, max)": [absorptivity.min(), absorptivity.max()],
"reflectance (min, max)": [reflectivity.min(), reflectivity.max()]}
fig, axs = plt.subplots(3, 1, figsize=(11,10),
subplot_kw=dict(projection=ccrs.PlateCarree()))
cmap = cm.plasma
ax1 = axs[0]
ax2 = axs[1]
ax3 = axs[2]
plot_geodata(ax1, lats, lons, reflectivity, 10, 1, cmap, "Reflectance")
plot_geodata(ax2, lats, lons, absorptivity, 10, 1, cmap, "Absorptance")
plot_geodata(ax3, lats, lons, transmissivity, 10, 1, cmap, "Transmittance")
fig.tight_layout(pad=2.0)
fig.suptitle("Atmospheric Radiative Properties",
y=0.999, fontweight="bold")
cbar = fig.colorbar(cm.ScalarMappable(norm=colors.BoundaryNorm(boundaries=[0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1],
ncolors=cmap.N),
cmap=cmap),
ax=axs, orientation='horizontal', fraction=.1, shrink=0.415,
pad=0.04)
cbar.ax.xaxis.set_ticks(np.arange(0, 1.01, 0.05), minor=True)
cbar.set_label("Coefficient")
plt.savefig(os.path.join(data_dir, "RER1_Fig2.png"),
dpi=300, bbox_inches="tight",
facecolor='white')
```
### Figure 3: Reflectivity Optimization Map
1. Potential surface-reflected outgoing solar radiation
2. Actual surface-reflected outgoing solar radiation
3. Difference (gain)
```
lons = rom_analysis.longitude.data
lats = rom_analysis.latitude.data
psrosr = rom_analysis.psrosr.data
srosr = rom_analysis.srosr.data
diff = (rom_analysis.psrosr.data - rom_analysis.srosr.data)
fig, axs = plt.subplots(3, 1, figsize=(11,10),
subplot_kw=dict(projection=ccrs.PlateCarree()))
cmap = cm.viridis
ax1 = axs[0]
ax2 = axs[1]
ax3 = axs[2]
plot_geodata(ax1, lats, lons, psrosr, 25, 250, cmap, "Potential")
plot_geodata(ax2, lats, lons, srosr, 25, 250, cmap, "Actual")
plot_geodata(ax3, lats, lons, diff, 25, 250, cmap, "Difference")
fig.tight_layout(pad=2.0)
fig.suptitle("Surface-Reflected Solar Radiation",
y=0.999, fontweight="bold")
cbar = fig.colorbar(cm.ScalarMappable(norm=colors.Normalize(vmin=0, vmax=250),
cmap=cmap),
ax=axs, orientation='horizontal', fraction=.1, shrink=0.41,
pad=0.04, extend="max")
cbar.ax.xaxis.set_ticks(np.arange(0, 250, 50), minor=True)
cbar.set_label("Daily-Mean Surface-Reflected Solar Radiation ($\mathrm{W m^{-2}}$)")
plt.savefig(os.path.join(data_dir, "RER1_Fig3.png"),
dpi=300, bbox_inches="tight",
facecolor='white')
```
## Discussion
In this notebook we developed some publication-quality figures for a Reflective Earth report. This concludes our analysis for now. Additional material can be contributed to existing notebooks or new notebooks via a pull request.
The results of our analysis suggest geographic regions that have the greatest potential to reduce Earth's energy imbalance with surface reflectivity increases. They are primarily in the tropics and subtropics. Some notable areas include the Andes, Southern Africa, coastal West Africa, Southwest North America, and Australia.
One limitation of our analysis is that we restricted our focus to the annual-mean energy budget. A practical reason to increase reflectivity near human habitations is to reduce heat and hence increase thermal comfort. From this perspective, reflectivity increases may help offset summertime heat. An analysis across the annual cycle could help identify areas with seasonal reflectivity potential.
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
```
# ResNet34 inference
```
import albumentations
import gc
import numpy as np
import pandas as pd
import pretrainedmodels
import torch
import torch.nn as nn
import torch.nn.functional as F
from PIL import Image
from pathlib import Path
from torch.utils.data import DataLoader
from tqdm import tqdm
```
## Constants
```
TEST_BATCH_SIZE = 64
IMG_HEIGHT = 137
IMG_WIDTH = 236
MODEL_MEAN = (0.485,0.465,0.406)
MODEL_STD = (0.229,0.224,0.225)
DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
DATA_PATH = Path('../input')
!ls {DATA_PATH}
```
## Dataset
```
class BengaliDatasetTest:
def __init__(self, df, img_height, img_width, mean, std):
self.image_ids = df['image_id'].values
self.image_array = df.drop('image_id', axis=1).values
self.aug = albumentations.Compose([
albumentations.Resize(img_height, img_width, always_apply=True),
albumentations.Normalize(mean, std, always_apply=True),
])
def __len__(self):
return len(self.image_ids)
def __getitem__(self, idx):
image_id = self.image_ids[idx]
image = self.image_array[idx]
image = image.reshape(137, 236).astype('float')
image = Image.fromarray(image).convert('RGB')
# Apply augmentation
image = self.aug(image=np.array(image))['image']
image = np.transpose(image, (2, 0, 1)).astype(np.float32)
return {
'image_id': image_id,
'image': torch.tensor(image, dtype=torch.float)
}
```
## Model
```
class ResNet34(nn.Module):
def __init__(self, pretrained):
super(ResNet34, self).__init__()
if pretrained:
self.model = pretrainedmodels.__dict__['resnet34'](pretrained='imagenet')
else:
self.model = pretrainedmodels.__dict__['resnet34'](pretrained=None)
self.l0 = nn.Linear(512, 168)
self.l1 = nn.Linear(512, 11)
self.l2 = nn.Linear(512, 7)
def forward(self, x):
batch_size = x.shape[0]
x = self.model.features(x)
x = F.adaptive_avg_pool2d(x, 1).reshape(batch_size, -1)
out0 = self.l0(x)
out1 = self.l1(x)
out2 = self.l2(x)
return out0, out1, out2
model = ResNet34(pretrained=False)
```
## Inference
```
df = pd.read_feather(DATA_PATH / 'train_image_data_0.feather'); df.head()
df_train = pd.read_csv(DATA_PATH / 'train.csv'); df_train.head()
labels = {
'grapheme_root': df_train.loc[:50209,'grapheme_root'].values,
'vowel_diacritic': df_train.loc[:50209, 'vowel_diacritic'].values,
'consonant_diacritic': df_train.loc[:50209, 'consonant_diacritic'].values,
'image_id': df_train.loc[:50209, 'image_id'].values
}
del df_train
gc.collect()
def get_predictions(model, df):
g_logits, v_logits, c_logits, image_id_list = [], [], [], []
dataset = BengaliDatasetTest(
df=df,
img_height=IMG_HEIGHT,
img_width=IMG_WIDTH,
mean=MODEL_MEAN,
std=MODEL_STD)
dataloader = DataLoader(
dataset=dataset,
batch_size=TEST_BATCH_SIZE,
shuffle=False)
model.eval()
with torch.no_grad():
for d in tqdm(dataloader):
image_ids = d['image_id']
images = d['image']
images = images.to(DEVICE)
g, v, c = model(images)
for idx, image_id in enumerate(image_ids):
image_id_list.append(image_id)
g_logits.append(g[idx].cpu().detach().numpy())
v_logits.append(v[idx].cpu().detach().numpy())
c_logits.append(c[idx].cpu().detach().numpy())
return g_logits, v_logits, c_logits, image_id_list
```
### Blend
```
g_logits_arr, v_logits_arr, c_logits_arr = [], [], []
image_ids = []
for fold_idx in range(3, 5):
model.load_state_dict(torch.load(
f'../src/weights/resnet34_fold{fold_idx}.pth'))
model.to(DEVICE)
g_logits, v_logits, c_logits, image_id_list = get_predictions(model, df)
g_logits_arr.append(g_logits)
v_logits_arr.append(v_logits)
c_logits_arr.append(c_logits)
    if not image_ids:  # collect the image ids once, on the first fold processed
        image_ids.extend(image_id_list)
g_preds = np.argmax(np.mean(np.array(g_logits_arr), axis=0), axis=1)
v_preds = np.argmax(np.mean(np.array(v_logits_arr), axis=0), axis=1)
c_preds = np.argmax(np.mean(np.array(c_logits_arr), axis=0), axis=1)
total = 3 * len(g_preds)
correct = (g_preds == labels['grapheme_root']).sum()
correct += (v_preds == labels['vowel_diacritic']).sum()
correct += (c_preds == labels['consonant_diacritic']).sum()
correct / total
torch.cuda.empty_cache()
gc.collect()
```
```
import pandas as pd
import numpy as np
import seaborn as sb
import matplotlib.pyplot as plt
df = pd.read_csv("COVIDiSTRESS June 17.csv",encoding='latin-1')
df.head()
df=df.drop(columns=['Dem_Expat','Country',
'Unnamed: 0',
'neu',
'ext',
'ope',
'agr',
'con',
'Duration..in.seconds.',
'UserLanguage',
'Scale_PSS10_UCLA_1',
'Scale_PSS10_UCLA_2',
'Scale_PSS10_UCLA_3',
'Scale_PSS10_UCLA_4',
'Scale_PSS10_UCLA_5',
'Scale_PSS10_UCLA_6',
'Scale_PSS10_UCLA_7',
'Scale_PSS10_UCLA_8',
'Scale_PSS10_UCLA_9',
'Scale_PSS10_UCLA_10',
'Scale_SLON_1',
'Scale_SLON_2',
'Scale_SLON_3',
'OECD_people_1',
'OECD_people_2',
'OECD_insititutions_1',
'OECD_insititutions_2',
'OECD_insititutions_3',
'OECD_insititutions_4',
'OECD_insititutions_5',
'OECD_insititutions_6',
'Corona_concerns_1',
'Corona_concerns_2',
'Corona_concerns_3',
'Corona_concerns_4',
'Corona_concerns_5',
'Trust_countrymeasure',
'Compliance_1',
'Compliance_2',
'Compliance_3',
'Compliance_4',
'Compliance_5',
'Compliance_6',
'born_92',
'experience_war',
'experience_war_TXT',
'war_injury',
'loss_during_war',
'time_spent_in_war',
'time_spent_in_war_TXT',
'Scale_UCLA_TRI_1',
'Scale_UCLA_TRI_2',
'Scale_UCLA_TRI_3',
'Scale_UCLA_TRI_4',
'PS_PTSD_1',
'PS_PTSD_2',
'PS_PTSD_3',
'PS_PTSD_4',
'PS_PTSD_5',
'BFF_15_1',
'BFF_15_2',
'BFF_15_3',
'BFF_15_4',
'BFF_15_5',
'BFF_15_6',
'BFF_15_7',
'BFF_15_8',
'BFF_15_9',
'BFF_15_10',
'BFF_15_11',
'BFF_15_12',
'BFF_15_13',
'BFF_15_14',
'BFF_15_15',
'Expl_Distress_1',
'Expl_Distress_2',
'Expl_Distress_3',
'Expl_Distress_4',
'Expl_Distress_5',
'Expl_Distress_6',
'Expl_Distress_7',
'Expl_Distress_8',
'Expl_Distress_9',
'Expl_Distress_10',
'Expl_Distress_11',
'Expl_Distress_12',
'Expl_Distress_13',
'Expl_Distress_14',
'Expl_Distress_15',
'Expl_Distress_16',
'Expl_Distress_17',
'Expl_Distress_18',
'Expl_Distress_19',
'Expl_Distress_20',
'Expl_Distress_21',
'Expl_Distress_22',
'Expl_Distress_23',
'Expl_Distress_24',
'Expl_Distress_txt',
'SPS_1',
'SPS_2',
'SPS_3',
'SPS_4',
'SPS_5',
'SPS_6',
'SPS_7',
'SPS_8',
'SPS_9',
'SPS_10',
'Expl_Coping_1',
'Expl_Coping_2',
'Expl_Coping_3',
'Expl_Coping_4',
'Expl_Coping_5',
'Expl_Coping_6',
'Expl_Coping_7',
'Expl_Coping_8',
'Expl_Coping_9',
'Expl_Coping_10',
'Expl_Coping_11',
'Expl_Coping_12',
'Expl_Coping_13',
'Expl_Coping_14',
'Expl_Coping_15',
'Expl_Coping_16',
'Expl_coping_txt',
'Expl_media_1',
'Expl_media_2',
'Expl_media_3',
'Expl_media_4',
'Expl_media_5',
'Expl_media_6',
'Final_open'])
for col_name in df.columns:
print(col_name)
df['Dem_maritalstatus'].value_counts().plot(kind="bar")
df['Dem_riskgroup'].value_counts().plot(kind="bar")
df['Dem_gender'].value_counts().plot(kind="bar")
df['Dem_edu'].value_counts().plot(kind="bar")
df=df[df.Dem_edu!="Up to 12 years of school"]
df=df[df.Dem_edu!="Up to 6 years of school"]
df=df[df.Dem_edu!="Up to 9 years of school"]
df=df[df.Dem_edu!="Uninformative Response"]
df=df[df.Dem_employment=="Full time employed"]
df=df[df.Dem_maritalstatus!="Uninformative response"]
df=df.drop(columns=["Dem_employment"])
df=df.drop(columns=["AD_gain","AD_loss", "AD_check", "Dem_state", "Dem_edu_mom"])
df=df.drop(columns=["Scale_UCLA_TRI_avg"])
df=df.drop(columns=["RecordedDate"])
df=df.drop(columns=["Dem_islolation","Dem_isolation_adults", "Dem_isolation_kids"])
df=df.drop(columns=["Dem_dependents","Dem_riskgroup"])
df=df.dropna()
print ("(Rows, Columns) :", df.shape)
df.head()
# Import the encoder from sklearn
from sklearn.preprocessing import OneHotEncoder
ohe = OneHotEncoder()
df.info()
df_num = df[["PSS10_avg","SLON3_avg","SPS_avg"]]
df['Burnout?'] = (((df["PSS10_avg"]) <3) & (df["SPS_avg"].astype(float) < 3) & (df["SLON3_avg"].astype(float)<=3))
df.head()
## df['Burnout?'] = (((df["PSS10_avg"]) <=2.500000) & (df["SPS_avg"].astype(float) <= 2.333333) & (df["SLON3_avg"].astype(float)<=5.1))
df["Burnout?"].value_counts()
df.info()
# OneHotEncoding of categorical predictors (not the response)
df_cat = df[['Dem_edu','Dem_gender','Dem_maritalstatus']]
ohe.fit(df_cat)
df_cat_ohe = pd.DataFrame(ohe.transform(df_cat).toarray(),
columns=ohe.get_feature_names(df_cat.columns))
# Check the encoded variables
df_cat_ohe.info()
# Combining Numeric features with the OHE Categorical features
df_num = df[['PSS10_avg','SLON3_avg','SPS_avg','Dem_age']]
df_res = df['Burnout?']
df_ohe = pd.concat([df_num, df_cat_ohe, df_res],
sort = False, axis = 1).reindex(index=df_num.index)
# Check the final dataframe
df_ohe.info()
for col in df_ohe.columns:
df_ohe[col] = np.where(df_ohe[col].isnull(),"Unknown",df_ohe[col])
    # Get the indexes of rows where the value is "Unknown"
indexNames = df_ohe[df_ohe[col] == "Unknown"].index
# Delete these row indexes from dataFrame
df_ohe.drop(indexNames , inplace=True)
# Import essential models and functions from sklearn
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.tree import plot_tree
# Extract Response and Predictors
y = pd.DataFrame(df_ohe['Burnout?'])
X = pd.DataFrame(df_ohe.drop('Burnout?', axis = 1))
# Split the Dataset into Train and Test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3)
pd.isnull(X_train).sum() > 0
X_train = X_train.astype("float64")
X_test = X_test.astype("float64")
y_test= y_test.astype(int)
y_train= y_train.astype(int)
y_test.head()
# Decision Tree using Train Data
dectree = DecisionTreeClassifier(max_depth = 4)
dectree.fit(X_train, y_train)
# Plot the trained Decision Tree
f = plt.figure(figsize=(24,24))
plot_tree(dectree, filled=True, rounded=True,
feature_names=X_train.columns,
class_names=["No","Yes"])
# Predict the Response corresponding to Predictors
y_train_pred = dectree.predict(X_train)
# Print the Classification Accuracy
print("Train Data")
print("Accuracy :\t", dectree.score(X_train, y_train))
print()
# Print the Accuracy Measures from the Confusion Matrix
cmTrain = confusion_matrix(y_train, y_train_pred)
tpTrain = cmTrain[1][1] # True Positives : Burnout (1) predicted Burnout (1)
fpTrain = cmTrain[0][1] # False Positives : No burnout (0) predicted Burnout (1)
tnTrain = cmTrain[0][0] # True Negatives : No burnout (0) predicted No burnout (0)
fnTrain = cmTrain[1][0] # False Negatives : Burnout (1) predicted No burnout (0)
print("TPR Train :\t", (tpTrain/(tpTrain + fnTrain)))
print("TNR Train :\t", (tnTrain/(tnTrain + fpTrain)))
print()
print("FPR Train :\t", (fpTrain/(tnTrain + fpTrain)))
print("FNR Train :\t", (fnTrain/(tpTrain + fnTrain)))
# Plot the two-way Confusion Matrix
sb.heatmap(confusion_matrix(y_train, y_train_pred),
annot = True, fmt=".0f", annot_kws={"size": 18})
# Import the required metric from sklearn
from sklearn.metrics import confusion_matrix
# Predict the Response corresponding to Predictors
y_test_pred = dectree.predict(X_test)
# Print the Classification Accuracy
print("Test Data")
print("Accuracy :\t", dectree.score(X_test, y_test))
print()
# Print the Accuracy Measures from the Confusion Matrix
cmTest = confusion_matrix(y_test, y_test_pred)
tpTest = cmTest[1][1] # True Positives : Burnout (1) predicted Burnout (1)
fpTest = cmTest[0][1] # False Positives : No burnout (0) predicted Burnout (1)
tnTest = cmTest[0][0] # True Negatives : No burnout (0) predicted No burnout (0)
fnTest = cmTest[1][0] # False Negatives : Burnout (1) predicted No burnout (0)
print("TPR Test :\t", (tpTest/(tpTest + fnTest)))
print("TNR Test :\t", (tnTest/(tnTest + fpTest)))
print()
print("FPR Test :\t", (fpTest/(fpTest + tnTest)))
print("FNR Test :\t", (fnTest/(fnTest + tpTest)))
# Plot the two-way Confusion Matrix
sb.heatmap(confusion_matrix(y_test, y_test_pred),
annot = True, fmt=".0f", annot_kws={"size": 18})
# Import essential models and functions from sklearn
from sklearn.model_selection import train_test_split
# Extract Response and Predictors
y = pd.DataFrame(df_ohe['Burnout?'])
X = pd.DataFrame(df_ohe.drop('Burnout?', axis = 1))
X=X.astype("float64")
y=y.astype(int)
# Split the Dataset into Train and Test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.5)
# Import GridSearch for hyperparameter tuning using Cross-Validation (CV)
from sklearn.model_selection import GridSearchCV
# RandomForestClassifier is needed by the grid search below
from sklearn.ensemble import RandomForestClassifier
# Define the Hyper-parameter Grid to search on, in case of Random Forest
param_grid = {'n_estimators': np.arange(100,901,150),
'max_depth': np.arange(5, 11)}
# Create the Hyper-parameter Grid
hpGrid = GridSearchCV(RandomForestClassifier(),
param_grid,
cv = 5,
scoring = 'accuracy')
# Train the models using Cross-Validation
hpGrid.fit(X_train, y_train["Burnout?"].ravel())
# Fetch the best Model or the best set of Hyper-parameters
print(hpGrid.best_estimator_)
# Print the score (accuracy) of the best Model after CV
print(np.abs(hpGrid.best_score_))
# Import RandomForestClassifier model from Scikit-Learn
from sklearn.ensemble import RandomForestClassifier
# Create the Random Forest object
rforest = RandomForestClassifier(n_estimators = 550, # n_estimators denote number of trees
max_depth = 9) # set the maximum depth of each tree
# Fit Random Forest on Train Data
rforest.fit(X_train, y_train["Burnout?"].ravel())
# Import confusion_matrix from Scikit-Learn
from sklearn.metrics import confusion_matrix
# Predict the Response corresponding to Predictors
y_train_pred = rforest.predict(X_train)
# Print the Classification Accuracy
print("Train Data")
print("Accuracy :\t", rforest.score(X_train, y_train))
print()
# Print the Accuracy Measures from the Confusion Matrix
cmTrain = confusion_matrix(y_train, y_train_pred)
tpTrain = cmTrain[1][1] # True Positives : Burnout (1) predicted Burnout (1)
fpTrain = cmTrain[0][1] # False Positives : No burnout (0) predicted Burnout (1)
tnTrain = cmTrain[0][0] # True Negatives : No burnout (0) predicted No burnout (0)
fnTrain = cmTrain[1][0] # False Negatives : Burnout (1) predicted No burnout (0)
print("TPR Train :\t", (tpTrain/(tpTrain + fnTrain)))
print("TNR Train :\t", (tnTrain/(tnTrain + fpTrain)))
print()
print("FPR Train :\t", (fpTrain/(tnTrain + fpTrain)))
print("FNR Train :\t", (fnTrain/(tpTrain + fnTrain)))
# Plot the two-way Confusion Matrix
sb.heatmap(confusion_matrix(y_train, y_train_pred),
annot = True, fmt=".0f", annot_kws={"size": 18})
# Import the required metric from sklearn
from sklearn.metrics import confusion_matrix
# Predict the Response corresponding to Predictors
y_test_pred = rforest.predict(X_test)
# Print the Classification Accuracy
print("Test Data")
print("Accuracy :\t", rforest.score(X_test, y_test))
print()
# Print the Accuracy Measures from the Confusion Matrix
cmTest = confusion_matrix(y_test, y_test_pred)
tpTest = cmTest[1][1] # True Positives : Burnout (1) predicted Burnout (1)
fpTest = cmTest[0][1] # False Positives : No burnout (0) predicted Burnout (1)
tnTest = cmTest[0][0] # True Negatives : No burnout (0) predicted No burnout (0)
fnTest = cmTest[1][0] # False Negatives : Burnout (1) predicted No burnout (0)
print("TPR Test :\t", (tpTest/(tpTest + fnTest)))
print("TNR Test :\t", (tnTest/(tnTest + fpTest)))
print()
print("FPR Test :\t", (fpTest/(fpTest + tnTest)))
print("FNR Test :\t", (fnTest/(fnTest + tpTest)))
# Plot the two-way Confusion Matrix
sb.heatmap(confusion_matrix(y_test, y_test_pred),
annot = True, fmt=".0f", annot_kws={"size": 18})
```
<a href="https://colab.research.google.com/github/davidbro-in/natural-language-processing/blob/main/3_custom_embedding_using_gensim.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<a href="https://www.inove.com.ar"><img src="https://github.com/hernancontigiani/ceia_memorias_especializacion/raw/master/Figures/logoFIUBA.jpg" width="500" align="center"></a>
# Natural language processing
## Custom embeddings with Gensim
### Objective
The goal is to use documents / a corpus to create word embeddings based on that context. A text corpus will be used to generate the embeddings, so the resulting vectors will be shaped by how the words are used in that corpus.
```
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import multiprocessing
from gensim.models import Word2Vec
```
### Data
We will use the Communist Manifesto as the dataset.
```
!wget https://www.gutenberg.org/cache/epub/31193/pg31193.txt
# Build the dataset, using line breaks to separate the sentences/docs
df = pd.read_csv('/content/pg31193.txt', sep='/n', header=None)
df.head()
print("Cantidad de documentos:", df.shape[0])
```
### 1 - Preprocessing
```
from keras.preprocessing.text import text_to_word_sequence
sentence_tokens = []
# Iterate over all the rows and turn each sentence
# into a sequence of words (this could also be done with NLTK or spaCy)
for _, row in df[:None].iterrows():
sentence_tokens.append(text_to_word_sequence(row[0]))
# Let's take a quick look
sentence_tokens[:2]
```
### 2 - Create the vectors (word2vec)
```
from gensim.models.callbacks import CallbackAny2Vec
# By default, gensim does not report the "loss" at each epoch during training
# We override the callback so we can access this information
class callback(CallbackAny2Vec):
"""
Callback to print loss after each epoch
"""
def __init__(self):
self.epoch = 0
def on_epoch_end(self, model):
loss = model.get_latest_training_loss()
if self.epoch == 0:
print('Loss after epoch {}: {}'.format(self.epoch, loss))
else:
print('Loss after epoch {}: {}'.format(self.epoch, loss- self.loss_previous_step))
self.epoch += 1
self.loss_previous_step = loss
# Create the vector-generating model
# In this case we use the skip-gram architecture
w2v_model = Word2Vec(min_count=5, # minimum word frequency for a word to enter the vocabulary
                     window=2, # number of words before and after the predicted word
                     size=300, # dimensionality of the vectors
                     negative=20, # number of negative samples... 0 means none are used
                     workers=1, # increase this value if you have more cores
                     sg=1) # model 0:CBOW 1:skip-gram
# Build the vocabulary from the tokens
w2v_model.build_vocab(sentence_tokens)
# Number of rows/docs found in the corpus
print("Number of docs in the corpus:", w2v_model.corpus_count)
# Number of distinct words found in the corpus
print("Number of distinct words in the corpus:", len(w2v_model.wv.vocab))
```
### 3 - Train the generator model
```
# Train the vector-generating model
# using our custom callback
w2v_model.train(sentence_tokens,
total_examples=w2v_model.corpus_count,
epochs=20,
compute_loss = True,
callbacks=[callback()]
)
```
### 4 - Try it out
```
# Words MOST related to...:
w2v_model.wv.most_similar(positive=["capital"], topn=10)
# Words MOST related to...:
w2v_model.wv.most_similar(positive=["right"], topn=10)
# Words MOST related to...:
w2v_model.wv.most_similar(positive=["power"], topn=10)
# Words MOST related to...:
w2v_model.wv.most_similar(positive=["state"], topn=5)
# Words most dissimilar to a given word (passed as a negative example):
w2v_model.wv.most_similar(negative=["work"])
```
### 5 - Visualize how the vectors cluster
```
from sklearn.decomposition import IncrementalPCA
from sklearn.manifold import TSNE
import numpy as np
def reduce_dimensions(model):
num_dimensions = 2
vectors = np.asarray(model.wv.vectors)
labels = np.asarray(model.wv.index2word)
tsne = TSNE(n_components=num_dimensions, random_state=0)
vectors = tsne.fit_transform(vectors)
x_vals = [v[0] for v in vectors]
y_vals = [v[1] for v in vectors]
return x_vals, y_vals, labels
# Plot the embeddings in 2D
import plotly.graph_objects as go
import plotly.express as px
x_vals, y_vals, labels = reduce_dimensions(w2v_model)
MAX_WORDS=200
fig = px.scatter(x=x_vals[:MAX_WORDS], y=y_vals[:MAX_WORDS], text=labels[:MAX_WORDS])
fig.show(renderer="colab") # needed for plotly in Colab
```
# Applications
```
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as la
```
## Polynomial Interpolation
[Polynomial interpolation](https://en.wikipedia.org/wiki/Polynomial_interpolation) finds the unique polynomial of degree $n$ which passes through $n+1$ points in the $xy$-plane. For example, two points in the $xy$-plane determine a line and three points determine a parabola.
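For example, for two points $(x_0,y_0)$ and $(x_1,y_1)$ with $x_0 \not= x_1$, the interpolating line can be written down directly:

$$
p(x) = y_0 + \frac{y_1 - y_0}{x_1 - x_0} (x - x_0)
$$

For more points it is easier to set up and solve a linear system for the coefficients, which is what we do below.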
### Formulation
Suppose we have $n + 1$ points in the $xy$-plane
$$
(x_0,y_0),(x_1,y_1),\dots,(x_n,y_n)
$$
such that all the $x$ values are distinct ($x_i \not= x_j$ for $i \not= j$). The general form of a degree $n$ polynomial is
$$
p(x) = a_0 + a_1 x + a_2x^2 + \cdots + a_n x^n
$$
If $p(x)$ is the unique degree $n$ polynomial which interpolates all the points, then the coefficients $a_0$, $a_1$, $\dots$, $a_n$ satisfy the following equations:
\begin{align}
a_0 + a_1x_0 + a_2x_0^2 + \cdots + a_n x_0^n &= y_0 \\\
a_0 + a_1x_1 + a_2x_1^2 + \cdots + a_n x_1^n &= y_1 \\\
& \ \ \vdots \\\
a_0 + a_1x_n + a_2x_n^2 + \cdots + a_n x_n^n &= y_n
\end{align}
Therefore the vector of coefficients
$$
\mathbf{a} =
\begin{bmatrix}
a_0 \\\
a_1 \\\
\vdots \\\
a_n
\end{bmatrix}
$$
is the unique solution of the linear system of equations
$$
X \mathbf{a}=\mathbf{y}
$$
where $X$ is the [Vandermonde matrix](https://en.wikipedia.org/wiki/Vandermonde_matrix) and $\mathbf{y}$ is the vector of $y$ values
$$
X =
\begin{bmatrix}
1 & x_0 & x_0^2 & \dots & x_0^n \\\
1 & x_1 & x_1^2 & \dots & x_1^n \\\
& \vdots & & & \vdots \\\
1 & x_n & x_n^2 & \dots & x_n^n \\\
\end{bmatrix}
\ \ \mathrm{and} \ \
\mathbf{y} =
\begin{bmatrix}
y_0 \\\
y_1 \\\
y_2 \\\
\vdots \\\
y_n
\end{bmatrix}
$$
### Examples
**Simple Parabola**
Let's do a simple example. We know that $y=x^2$ is the unique degree 2 polynomial that interpolates the points $(-1,1)$, $(0,0)$ and $(1,1)$. Let's compute the polynomial interpolation of these points and verify the expected result $a_0=0$, $a_1=0$ and $a_2=1$.
Create the Vandermonde matrix $X$ with the array of $x$ values:
```
x = np.array([-1,0,1])
X = np.column_stack([[1,1,1],x,x**2])
print(X)
```
Create the vector $\mathbf{y}$ of $y$ values:
```
y = np.array([1,0,1]).reshape(3,1)
print(y)
```
We expect the solution $\mathbf{a} = [0,0,1]^T$:
```
a = la.solve(X,y)
print(a)
```
Success!
**Another Parabola**
The polynomial interpolation of 3 points $(x_0,y_0)$, $(x_1,y_1)$ and $(x_2,y_2)$ is the parabola $p(x) = a_0 + a_1x + a_2x^2$ such that the coefficients satisfy
\begin{align}
a_0 + a_1x_0 + a_2x_0^2 = y_0 \\
a_0 + a_1x_1 + a_2x_1^2 = y_1 \\
a_0 + a_1x_2 + a_2x_2^2 = y_2
\end{align}
Let's find the polynomial interpolation of the points $(0,6)$, $(3,1)$ and $(8,2)$.
Create the Vandermonde matrix $X$:
```
x = np.array([0,3,8])
X = np.column_stack([[1,1,1],x,x**2])
print(X)
```
And the vector of $y$ values:
```
y = np.array([6,1,2]).reshape(3,1)
print(y)
```
Compute the vector $\mathbf{a}$ of coefficients:
```
a = la.solve(X,y)
print(a)
```
And plot the result:
```
xs = np.linspace(0,8,20)
ys = a[0] + a[1]*xs + a[2]*xs**2
plt.plot(xs,ys,x,y,'b.',ms=20)
plt.show()
```
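As a quick cross-check (an addition, not part of the original notebook), NumPy's `numpy.polyfit` should recover the same coefficients; note that it returns them highest-degree first:
```
# np.polyfit with degree 2 through 3 points is exact interpolation.
# The coefficients come back highest-degree first, so reverse them to
# compare with a = [a0, a1, a2] computed above.
coeffs = np.polyfit(x, y.flatten(), 2)
print(coeffs[::-1])
```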
**Over Fitting 10 Random Points**
Now let's interpolate points with $x_i=i$, $i=0,\dots,9$, and 10 random integers sampled from $[0,10)$ as $y$ values:
```
N = 10
x = np.arange(0,N)
y = np.random.randint(0,10,N)
plt.plot(x,y,'r.')
plt.show()
```
Create the Vandermonde matrix and verify the first 5 rows and columns:
```
X = np.column_stack([x**k for k in range(0,N)])
print(X[:5,:5])
```
We could also use the NumPy function [`numpy.vander`](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.vander.html). We specify the option `increasing=True` so that powers of $x_i$ increase left-to-right:
```
X = np.vander(x,increasing=True)
print(X[:5,:5])
```
Solve the linear system:
```
a = la.solve(X,y)
```
Plot the interpolation:
```
xs = np.linspace(0,N-1,200)
ys = sum([a[k]*xs**k for k in range(0,N)])
plt.plot(x,y,'r.',xs,ys)
plt.show()
```
Success! But notice how unstable the curve is. That's why it is better to use a [cubic spline](https://en.wikipedia.org/wiki/Spline_%28mathematics%29) to interpolate a large number of points.
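For illustration, here is a minimal sketch of the cubic-spline alternative using `scipy.interpolate.CubicSpline` (this snippet is an addition, not part of the original notebook):
```
from scipy.interpolate import CubicSpline

# x and y are the 10 random points generated above
cs = CubicSpline(x, y)
xs = np.linspace(0, N - 1, 200)
plt.plot(x, y, 'r.', xs, cs(xs))
plt.show()
```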
However real-life data is usually very noisy and interpolation is not the best tool to fit a line to data. Instead we would want to take a polynomial with smaller degree (like a line) and fit it as best we can without interpolating the points.
## Least Squares Linear Regression
Suppose we have $n+1$ points
$$
(x_0,y_0) , (x_1,y_1) , \dots , (x_n,y_n)
$$
in the $xy$-plane and we want to fit a line
$$
y=a_0 + a_1x
$$
that "best fits" the data. There are different ways to quantify what "best fit" means but the most common method is called [least squares linear regression](https://en.wikipedia.org/wiki/Linear_regression). In least squares linear regression, we want to minimize the sum of squared errors
$$
SSE = \sum_i (y_i - (a_0 + a_1 x_i))^2
$$
### Formulation
If we form matrices
$$
X =
\begin{bmatrix}
1 & x_0 \\\
1 & x_1 \\\
\vdots & \vdots \\\
1 & x_n
\end{bmatrix}
\ , \ \
\mathbf{y} =
\begin{bmatrix}
y_0 \\
y_1 \\
\vdots \\
y_n
\end{bmatrix}
\ , \ \
\mathbf{a} =
\begin{bmatrix}
a_0 \\ a_1
\end{bmatrix}
$$
then the sum of squared errors can be expressed as
$$
SSE = \Vert \mathbf{y} - X \mathbf{a} \Vert^2
$$
---
**Theorem.** (Least Squares Linear Regression) Consider $n+1$ points
$$
(x_0,y_0) , (x_1,y_1) , \dots , (x_n,y_n)
$$
in the $xy$-plane. The vector of coefficients $\mathbf{a} = [a_0,a_1]^T$ which minimizes the sum of squared errors
$$
SSE = \sum_i (y_i - (a_0 + a_1 x_i))^2
$$
is the unique solution of the system
$$
\left( X^T X \right) \mathbf{a} = X^T \mathbf{y}
$$
*Sketch of Proof.* The product $X\mathbf{a}$ is in the column space of $X$. The line connecting $\mathbf{y}$ to the nearest point in the column space of $X$ is perpendicular to the column space of $X$. Therefore
$$
X^T \left( \mathbf{y} - X \mathbf{a} \right) = \mathbf{0}
$$
and so
$$
\left( X^T X \right) \mathbf{a} = X^T \mathbf{y}
$$
---
### Examples
**Fake Noisy Linear Data**
Let's do an example with some fake data. Let's build a set of random points based on the model
$$
y = a_0 + a_1x + \epsilon
$$
for some arbitrary choice of $a_0$ and $a_1$. The factor $\epsilon$ represents some random noise which we model using the [normal distribution](https://en.wikipedia.org/wiki/Normal_distribution). We can generate random numbers sampled from the standard normal distribution using the NumPy function [`numpy.random.randn`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.randn.html).
The goal is to demonstrate that linear regression can retrieve the coefficients $a_0$ and $a_1$ from the noisy data.
```
a0 = 2
a1 = 3
N = 100
x = np.random.rand(100)
noise = 0.1*np.random.randn(100)
y = a0 + a1*x + noise
plt.scatter(x,y);
plt.show()
```
Let's use linear regression to retrieve the coefficients $a_0$ and $a_1$. Construct the matrix $X$:
```
X = np.column_stack([np.ones(N),x])
print(X.shape)
```
Let's look at the first 5 rows of $X$ to see that it is in the correct form:
```
X[:5,:]
```
Use `scipy.linalg.solve` to solve $\left(X^T X\right)\mathbf{a} = \left(X^T\right)\mathbf{y}$ for $\mathbf{a}$:
```
a = la.solve(X.T @ X, X.T @ y)
print(a)
```
We have retrieved the coefficients of the model almost exactly! Let's plot the random data points with the linear regression we just computed.
```
xs = np.linspace(0,1,10)
ys = a[0] + a[1]*xs
plt.plot(xs,ys,'r',linewidth=4)
plt.scatter(x,y);
plt.show()
```
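As an aside (not in the original notebook), `scipy.linalg.lstsq` solves the same least squares problem directly from $X$ and $\mathbf{y}$ without forming the normal equations, and should return essentially the same coefficients:
```
# lstsq minimizes ||y - X a|| via an SVD-based LAPACK routine, which is
# numerically more stable than solving (X^T X) a = X^T y explicitly
a_lstsq, residues, rank, sv = la.lstsq(X, y)
print(a_lstsq)
```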
**Real Kobe Bryant Data**
Let's work with some real data. [Kobe Bryant](https://www.basketball-reference.com/players/b/bryanko01.html) retired in 2016 with 33643 total points, which is the [third highest total points in NBA history](https://en.wikipedia.org/wiki/List_of_National_Basketball_Association_career_scoring_leaders). How many more years would Kobe Bryant have had to play to pass [Kareem Abdul-Jabbar's](https://en.wikipedia.org/wiki/Kareem_Abdul-Jabbar) record 38387 points?
Kobe Bryant's peak was the 2005-2006 NBA season. Let's look at Kobe Bryant's total games played and points per game from 2006 to 2016.
```
years = np.array([2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016])
games = [80,77,82,82,73,82,58,78,6,35,66]
points = np.array([35.4,31.6,28.3,26.8,27,25.3,27.9,27.3,13.8,22.3,17.6])
fig = plt.figure(figsize=(12,10))
axs = fig.subplots(2,1,sharex=True)
axs[0].plot(years,points,'b.',ms=15)
axs[0].set_title('Kobe Bryant, Points per Game')
axs[0].set_ylim([0,40])
axs[0].grid(True)
axs[1].bar(years,games)
axs[1].set_title('Kobe Bryant, Games Played')
axs[1].set_ylim([0,100])
axs[1].grid(True)
plt.show()
```
Kobe was injured for most of the 2013-2014 NBA season and played only 6 games. This is an outlier and so we can drop this data point:
```
years = np.array([2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2015, 2016])
games = np.array([80,77,82,82,73,82,58,78,35,66])
points = np.array([35.4,31.6,28.3,26.8,27,25.3,27.9,27.3,22.3,17.6])
```
Let's compute the average games played per season over this period:
```
avg_games_per_year = np.mean(games)
print(avg_games_per_year)
```
Compute the linear model for points per game:
```
X = np.column_stack([np.ones(len(years)),years])
a = la.solve(X.T @ X, X.T @ points)
model = a[0] + a[1]*years
plt.plot(years,model,years,points,'b.',ms=15)
plt.title('Kobe Bryant, Points per Game')
plt.ylim([0,40])
plt.grid(True)
plt.show()
```
Now we can extrapolate to future years, multiply points per game by games per season, and compute the cumulative sum to see Kobe's total points:
```
future_years = np.array([2017,2018,2019,2020,2021])
future_points = (a[0] + a[1]*future_years)*avg_games_per_year
total_points = 33643 + np.cumsum(future_points)
kareem = 38387*np.ones(len(future_years))
plt.plot(future_years,total_points,future_years,kareem)
plt.grid(True)
plt.xticks(future_years)
plt.title('Kobe Bryant Total Points Prediction')
plt.show()
```
Only 4 more years!
## Polynomial Regression
### Formulation
The same idea works for fitting a degree $d$ polynomial model
$$
y = a_0 + a_1x + a_2x^2 + \cdots + a_dx^d
$$
to a set of $n+1$ data points
$$
(x_0,y_0), (x_1,y_1), \dots , (x_n,y_n)
$$
We form the matrices as before but now the Vandermonde matrix $X$ has $d+1$ columns
$$
X =
\begin{bmatrix}
1 & x_0 & x_0^2 & \cdots & x_0^d \\\
1 & x_1 & x_1^2 & \cdots & x_1^d \\\
& \vdots & & & \vdots \\\
1 & x_n & x_n^2 & \cdots & x_n^d
\end{bmatrix}
\ , \ \
\mathbf{y} =
\begin{bmatrix}
y_0 \\
y_1 \\
\vdots \\
y_n
\end{bmatrix}
\ , \ \
\mathbf{a} =
\begin{bmatrix}
a_0 \\
a_1 \\
a_2 \\
\vdots \\
a_d
\end{bmatrix}
$$
The vector of coefficients $\mathbf{a} = [a_0,a_1,a_2,\dots,a_d]^T$ which minimizes the sum of squared errors $SSE$ is the unique solution of the linear system
$$
\left( X^T X \right) \mathbf{a} = \left( X^T \right) \mathbf{y}
$$
### Example
**Fake Noisy Quadratic Data**
Let's build some fake data using a quadratic model $y = a_0 + a_1x + a_2x^2 + \epsilon$ and use linear regression to retrieve the coefficients $a_0$, $a_1$ and $a_2$.
```
a0 = 3
a1 = 5
a2 = 8
N = 1000
x = 2*np.random.rand(N) - 1 # Random numbers in the interval (-1,1)
noise = np.random.randn(N)
y = a0 + a1*x + a2*x**2 + noise
plt.scatter(x,y,alpha=0.5,lw=0);
plt.show()
```
Construct the matrix $X$:
```
X = np.column_stack([np.ones(N),x,x**2])
```
Use `scipy.linalg.solve` to solve $\left( X^T X \right) \mathbf{a} = \left( X^T \right) \mathbf{y}$:
```
a = la.solve((X.T @ X),X.T @ y)
```
Plot the result:
```
xs = np.linspace(-1,1,20)
ys = a[0] + a[1]*xs + a[2]*xs**2
plt.plot(xs,ys,'r',linewidth=4)
plt.scatter(x,y,alpha=0.5,lw=0)
plt.show()
```
## Graph Theory
A [graph](https://en.wikipedia.org/wiki/Graph_%28discrete_mathematics%29) is a set of vertices and a set of edges connecting some of the vertices. We will consider simple, undirected, connected graphs:
* a graph is [simple](https://en.wikipedia.org/wiki/Graph_%28discrete_mathematics%29#Simple_graph) if there are no loops or multiple edges between vertices
* a graph is [undirected](https://en.wikipedia.org/wiki/Graph_%28discrete_mathematics%29#Undirected_graph) if the edges do not have an orientation
* a graph is [connected](https://en.wikipedia.org/wiki/Graph_%28discrete_mathematics%29#Connected_graph) if each vertex is connected to every other vertex in the graph by a path
We can visualize a graph as a set of vertices and edges and answer questions about the graph just by looking at it. However, this becomes much more difficult with large graphs such as a [social network graph](https://en.wikipedia.org/wiki/Social_network_analysis). Instead, we construct matrices from the graph such as the [adjacency matrix](https://en.wikipedia.org/wiki/Adjacency_matrix) and the [Laplacian matrix](https://en.wikipedia.org/wiki/Laplacian_matrix) and study their properties.
[Spectral graph theory](https://en.wikipedia.org/wiki/Spectral_graph_theory) is the study of the eigenvalues of the adjacency matrix (and other associated matrices) and their relationships to the structure of the graph $G$.
### NetworkX
Let's use the Python package [NetworkX](https://networkx.github.io/) to construct and visualize some simple graphs.
```
import networkx as nx
```
### Adjacency Matrix
The [adjacency matrix](https://en.wikipedia.org/wiki/Adjacency_matrix) $A_G$ of a graph $G$ with $n$ vertices is the square matrix of size $n$ such that $A_{i,j} = 1$ if vertices $i$ and $j$ are connected by an edge, and $A_{i,j} = 0$ otherwise.
We can use `networkx` to create the adjacency matrix of a graph $G$. The function `nx.adjacency_matrix` returns a [sparse matrix](https://docs.scipy.org/doc/scipy/reference/sparse.html) and we convert it to a dense matrix using the `todense` method.
For example, plot the [complete graph](https://en.wikipedia.org/wiki/Complete_graph) with 5 vertices and compute the adjacency matrix:
```
G = nx.complete_graph(5)
nx.draw(G,with_labels=True)
A = nx.adjacency_matrix(G).todense()
print(A)
```
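The Laplacian matrix mentioned earlier can be obtained the same way (a small addition to the original notebook; `nx.laplacian_matrix` also returns a sparse matrix):
```
# L = D - A, where D is the diagonal matrix of vertex degrees
L = nx.laplacian_matrix(G).todense()
print(L)
```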
### Length of the Shortest Path
The length of the [shortest path](https://en.wikipedia.org/wiki/Shortest_path_problem) between vertices in a simple, undirected graph $G$ can be easily computed from the adjacency matrix $A_G$. In particular, the length of the shortest path from vertex $i$ to vertex $j$ ($i\not=j$) is the smallest positive integer $k$ such that $A^k_{i,j} \not= 0$.
Plot the [dodecahedral graph](https://en.wikipedia.org/wiki/Regular_dodecahedron#Dodecahedral_graph):
```
G = nx.dodecahedral_graph()
nx.draw(G,with_labels=True)
A = nx.adjacency_matrix(G).todense()
print(A)
```
With this labelling, let's find the length of the shortest path from vertex $0$ to $15$:
```
i = 0
j = 15
k = 1
Ak = A
while Ak[i,j] == 0:
Ak = Ak @ A
k = k + 1
print('Length of the shortest path is',k)
```
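As a cross-check (not part of the original notebook), NetworkX can compute the same quantity directly:
```
# breadth-first search gives the same shortest path length as the matrix-power loop
print(nx.shortest_path_length(G, source=i, target=j))
```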
### Triangles in a Graph
A simple result in spectral graph theory is that the number of [triangles](https://en.wikipedia.org/wiki/Adjacency_matrix#Matrix_powers) in a graph, $T(G)$, is given by:
$$
T(G) = \frac{1}{6} ( \lambda_1^3 + \lambda_2^3 + \cdots + \lambda_n^3)
$$
where $\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n$ are the eigenvalues of the adjacency matrix.
Let's verify this for the simplest case, the complete graph on 3 vertices:
```
C3 = nx.complete_graph(3)
nx.draw(C3,with_labels=True)
A3 = nx.adjacency_matrix(C3).todense()
eigvals, eigvecs = la.eig(A3)
int(np.round(np.sum(eigvals.real**3)/6,0))
```
Let's compute the number of triangles in the complete graph on 7 vertices:
```
C7 = nx.complete_graph(7)
nx.draw(C7,with_labels=True)
A7 = nx.adjacency_matrix(C7).todense()
eigvals, eigvecs = la.eig(A7)
int(np.round(np.sum(eigvals.real**3)/6,0))
```
There are 35 triangles in the complete graph with 7 vertices!
Let's write a function called `triangles` which takes a square matrix `M` and returns the sum
$$
\frac{1}{6} ( \lambda_1^3 + \lambda_2^3 + \cdots + \lambda_n^3)
$$
where $\lambda_i$ are the eigenvalues of the symmetric matrix $A = (M + M^T)/2$. Note that $M = A$ if $M$ is symmetric. The return value is the number of triangles in the graph $G$ if the input $M$ is the adjacency matrix.
```
def triangles(M):
A = (M + M.T)/2
eigvals, eigvecs = la.eig(A)
eigvals = eigvals.real
return int(np.round(np.sum(eigvals**3)/6,0))
```
Next, let's try a [Turan graph](https://en.wikipedia.org/wiki/Tur%C3%A1n_graph).
```
G = nx.turan_graph(10,5)
nx.draw(G,with_labels=True)
A = nx.adjacency_matrix(G).todense()
print(A)
```
Find the number of triangles:
```
triangles(A)
```
Finally, let's compute the number of triangles in the dodecahedral graph:
```
G = nx.dodecahedral_graph()
nx.draw(G,with_labels=True)
A = nx.adjacency_matrix(G).todense()
print(A)
np.round(triangles(A),2)
```
Human Strategy
----------------
The Human strategy asks the user to input a move rather than deriving its own action.
The history of the match is also shown in the terminal, so you can follow the game as it progresses.
We are now going to open an editor and create a script which calls the human player. Our script will look similar to this:
```python
import axelrod as axl
import random
strategies = [s() for s in axl.strategies]
opponent = random.choice(axl.strategies)
me = axl.Human(name='Nikoleta')
players = [opponent(), me]
# play the match and return winner and final score
match = axl.Match(players, turns=3)
match.play()
print('You have competed against {}, the final score of the match is:{} and the winner was {}'.format(opponent.name, match.final_score(),match.winner()))
```
Exercise
--------
Using the script that was described above:
- create a `Human()` player with your name
- play against various opponents and write down your scores
Human script (Optional)
-----------------------
A script that calls the `Human()` strategy and creates a match against a randomly generated opponent has already been written (`human.py`). For this we will have to move to a terminal.
Once you have opened your terminal, activate the `game-python` environment and type the following command:
`$ python human.py -h`
This will output some information about the script.
We can see the script allows the following arguments:
-n NAME The name of the human strategy [default: me]
-t TURNS The number of turns [default: 5]
Now we run the script. Try typing the following command:
`$ python human.py -n Human -t 3`
Once we run the command you will be prompted for the action to play at each turn:
Starting new match
Turn 1 action [C or D] for Human: C
Turn 1: me played C, opponent played C
Turn 2 action [C or D] for Human: D
Turn 2: me played D, opponent played C
Turn 3 action [C or D] for Human: C
You have competed against Prober, the final score of the match is: (8, 8) and the winner was False
The script will automatically output the name of your opponent, the final score and the winner of the match.
**Note**: You might see a different opponent name because the script randomly picks a player; the winner is `False` because the match was a draw.
Summary
--------
This section has discussed:
- The human strategy and how we can compete against the strategies of the library.
# Max-Voting
### Getting Ready
```
import os
import pandas as pd
os.chdir(".../Chapter 2")
os.getcwd()
```
#### Download the dataset Cryotherapy.csv from the GitHub location and copy it to your working directory. Let's read the dataset.
```
cryotherapy_data = pd.read_csv("Cryotherapy.csv")
```
#### Let's take a glance at the data with the below code:
```
cryotherapy_data.head(5)
```
### How to do it...
#### We import the required libraries for building the decision tree, support vector machines and logistic regression models. We also import VotingClassifier for max voting
```
# Import required libraries
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import VotingClassifier
```
#### We move on to building our feature set and creating our train & test dataset
```
# We create train & Test sample from our dataset
from sklearn.model_selection import train_test_split
# create feature & response variables
feature_columns = ['sex', 'age', 'Time', 'Number_of_Warts', 'Type', 'Area']
X = cryotherapy_data[feature_columns]
Y = cryotherapy_data['Result_of_Treatment']
# Create train & test sets
X_train, X_test, Y_train, Y_test = \
train_test_split(X, Y, test_size=0.20, random_state=1)
```
### Hard Voting
#### We build our models with decision tree, support vector machines and logistic regression algorithms
```
# create the sub models
estimators = []
dt_model = DecisionTreeClassifier(random_state=1)
estimators.append(('DecisionTree', dt_model))
svm_model = SVC(random_state=1)
estimators.append(('SupportVector', svm_model))
logit_model = LogisticRegression(random_state=1)
estimators.append(('Logistic Regression', logit_model))
#dt_model.fit(X_train,Y_train)
#svm_model.fit(X_train,Y_train)
#knn_model.fit(X_train,Y_train)
```
#### We build individual models with each of the classifiers we have chosen
```
from sklearn.metrics import accuracy_score
for each_estimator in (dt_model, svm_model, logit_model):
each_estimator.fit(X_train, Y_train)
Y_pred = each_estimator.predict(X_test)
print(each_estimator.__class__.__name__, accuracy_score(Y_test, Y_pred))
```
#### We proceed to ensemble our models and use VotingClassifier to score accuracy
```
# Using VotingClassifier() to build ensemble model with Hard Voting
ensemble_model = VotingClassifier(estimators=estimators, voting='hard')
ensemble_model.fit(X_train,Y_train)
predicted_labels = ensemble_model.predict(X_test)
print("Classifier Accuracy using Hard Voting: ", accuracy_score(Y_test, predicted_labels))
```
### Soft Voting
#### The below code creates an ensemble using soft voting:
```
# create the sub models
estimators = []
dt_model = DecisionTreeClassifier(random_state=1)
estimators.append(('DecisionTree', dt_model))
svm_model = SVC(random_state=1, probability=True)
estimators.append(('SupportVector', svm_model))
logit_model = LogisticRegression(random_state=1)
estimators.append(('Logistic Regression', logit_model))
for each_estimator in (dt_model, svm_model, logit_model):
each_estimator.fit(X_train, Y_train)
Y_pred = each_estimator.predict(X_test)
print(each_estimator.__class__.__name__, accuracy_score(Y_test, Y_pred))
# Using VotingClassifier() to build ensemble model with Soft Voting
ensemble_model = VotingClassifier(estimators=estimators, voting='soft')
ensemble_model.fit(X_train,Y_train)
predicted_labels = ensemble_model.predict(X_test)
print("Classifier Accuracy using Soft Voting: ", accuracy_score(Y_test, predicted_labels))
```
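#### As an optional extension (not part of the original recipe), VotingClassifier also accepts per-model weights for soft voting, which links this recipe to the weighted-averaging technique covered later
```
from sklearn.ensemble import VotingClassifier
from sklearn.metrics import accuracy_score

# Soft voting with unequal (arbitrary) per-model weights
weighted_ensemble = VotingClassifier(estimators=estimators, voting='soft', weights=[0.3, 0.4, 0.3])
weighted_ensemble.fit(X_train, Y_train)
print("Classifier Accuracy using Weighted Soft Voting: ",
      accuracy_score(Y_test, weighted_ensemble.predict(X_test)))
```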
# Averaging
#### We download the dataset whitewines.csv from the GitHub location and copy it to your working directory. Let's read the dataset.
```
wine_data = pd.read_csv("whitewines.csv")
```
#### Let's take a glance at the data with the below code
```
wine_data.head(5)
```
#### We import the required libraries
```
# Import required libraries
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR
```
#### We create the response and the feature set
```
# Create feature and response variable set
from sklearn.model_selection import train_test_split
# create feature & response variables
feature_columns = ['fixed acidity', 'volatile acidity', 'citric acid', 'residual sugar',\
'chlorides', 'free sulfur dioxide', 'total sulfur dioxide',\
'density', 'pH', 'sulphates', 'alcohol']
X = wine_data[feature_columns]
Y = wine_data['quality']
```
#### We split our data into train & test set
```
# Create train & test sets
X_train, X_test, Y_train, Y_test = \
train_test_split(X, Y, test_size=0.30, random_state=1)
```
#### We build our base regression learners with linear regression, SVR & decision tree
```
# Build base learners
linreg_model = LinearRegression()
svr_model = SVR()
regressiontree_model = DecisionTreeRegressor()
linreg_model.fit(X_train, Y_train)
svr_model.fit(X_train, Y_train)
regressiontree_model.fit(X_train, Y_train)
```
#### Use the base learners to predict on the test data
```
linreg_predictions = linreg_model.predict(X_test)
svr_predictions = svr_model.predict(X_test)
regtree_predictions = regressiontree_model.predict(X_test)
```
#### We add the predictions and divide by the number of base learners
```
average_predictions=(linreg_predictions + svr_predictions + regtree_predictions)/3
```
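#### To see how the averaged ensemble performs, here is a short evaluation sketch (an addition, not part of the original recipe) using the RMSE on the held-out test set
```
import numpy as np
from sklearn.metrics import mean_squared_error

rmse = np.sqrt(mean_squared_error(Y_test, average_predictions))
print("RMSE of the averaged ensemble:", rmse)
```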
# Weighted Averaging
```
os.chdir(".../Chapter 2")
os.getcwd()
```
#### We download the Diagnostic Wisconsin Breast Cancer database wisc_bc_data.csv from the GitHub location and copy it to your working directory. Let's read the dataset.
```
cancer_data = pd.read_csv("wisc_bc_data.csv")
```
#### Let's take a look at the data with the below code
```
cancer_data.head(5)
```
#### We import the required libraries
```
# Import required libraries
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
```
#### We create the response and the feature set
```
# Create feature and response variable set
# We create train & Test sample from our dataset
from sklearn.model_selection import train_test_split
# create feature & response variables
X = cancer_data.iloc[:,2:32]
Y = cancer_data['diagnosis']
# Create train & test sets
X_train, X_test, Y_train, Y_test = \
train_test_split(X, Y, test_size=0.30, random_state=1)
```
#### We build our base classifier models
```
# create the sub models
estimators = []
dt_model = DecisionTreeClassifier()
estimators.append(('DecisionTree', dt_model))
svm_model = SVC(probability=True)
estimators.append(('SupportVector', svm_model))
logit_model = LogisticRegression()
estimators.append(('Logistic Regression', logit_model))
```
#### We fit our models on the training data
```
dt_model.fit(X_train, Y_train)
svm_model.fit(X_train, Y_train)
logit_model.fit(X_train, Y_train)
```
#### We use the predict_proba() function to predict the class probabilities on the test data
```
dt_predictions = dt_model.predict_proba(X_test)
svm_predictions = svm_model.predict_proba(X_test)
logit_predictions = logit_model.predict_proba(X_test)
```
#### We assign different weights to each of the models to get our final predictions
```
weighted_average_predictions=(dt_predictions * 0.3 + svm_predictions * 0.4 + logit_predictions * 0.3)
```
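#### To turn these weighted class probabilities into predictions and score them (an addition, not part of the original recipe), take the class with the highest averaged probability
```
import numpy as np
from sklearn.metrics import accuracy_score

# classes_ gives the label ordering used by predict_proba (identical across the fitted models)
weighted_labels = logit_model.classes_[np.argmax(weighted_average_predictions, axis=1)]
print("Classifier Accuracy using Weighted Averaging: ", accuracy_score(Y_test, weighted_labels))
```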
## Random Forest importance
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.feature_selection import SelectFromModel
```
## Read Data
```
data = pd.read_csv('../DoHBrwTest.csv')
data.shape
data.head()
```
### Train - Test Split
```
X_train, X_test, y_train, y_test = train_test_split(
data.drop(labels=['is_intrusion'], axis=1),
data['is_intrusion'],
test_size=0.3,
random_state=0)
X_train.shape, X_test.shape
```
### Select features with tree importance
```
sel_ = SelectFromModel(RandomForestClassifier(n_estimators=10, random_state=10))
sel_.fit(X_train, y_train)
sel_.get_support()
selected_feat = X_train.columns[(sel_.get_support())]
len(selected_feat)
selected_feat
```
### Plot importances
```
pd.Series(sel_.estimator_.feature_importances_.ravel()).hist(bins=20)
plt.xlabel('Feature importance')
plt.ylabel('Number of Features')
plt.show()
print('total features: {}'.format((X_train.shape[1])))
print('selected features: {}'.format(len(selected_feat)))
print(
'features with importance greater than the mean importance of all features: {}'.format(
np.sum(sel_.estimator_.feature_importances_ >
sel_.estimator_.feature_importances_.mean())))
selected_feat
X_train = X_train[selected_feat]
X_test = X_test[selected_feat]
X_train.shape, X_test.shape
```
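To see which features were kept and how they rank (a small addition, not part of the original notebook), the importances can be matched back to the original column names:
```
# the selector was fitted on the full feature set, so index the importances
# by the original column names and keep only the selected ones
all_features = data.drop(labels=['is_intrusion'], axis=1).columns
importances = pd.Series(sel_.estimator_.feature_importances_, index=all_features)
print(importances.loc[selected_feat].sort_values(ascending=False))
```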
## Standardize Data
```
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_train)
X_train = scaler.transform(X_train)
# apply the same scaling to the test set so the fitted models see
# consistently scaled features at evaluation time
X_test = scaler.transform(X_test)
```
## Classifiers
```
from sklearn import linear_model
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from catboost import CatBoostClassifier
```
## Metrics Evaluation
```
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_curve, f1_score
from sklearn import metrics
from sklearn.model_selection import cross_val_score
```
### Logistic Regression
```
%%time
clf_LR = linear_model.LogisticRegression(n_jobs=-1, random_state=42, C=0.1).fit(X_train, y_train)
pred_y_test = clf_LR.predict(X_test)
print('Accuracy:', accuracy_score(y_test, pred_y_test))
f1 = f1_score(y_test, pred_y_test)
print('F1 Score:', f1)
fpr, tpr, thresholds = roc_curve(y_test, pred_y_test)
print('FPR:', fpr[1])
print('TPR:', tpr[1])
```
### Naive Bayes
```
%%time
clf_NB = GaussianNB(var_smoothing=1e-09).fit(X_train, y_train)
pred_y_testNB = clf_NB.predict(X_test)
print('Accuracy:', accuracy_score(y_test, pred_y_testNB))
f1 = f1_score(y_test, pred_y_testNB)
print('F1 Score:', f1)
fpr, tpr, thresholds = roc_curve(y_test, pred_y_testNB)
print('FPR:', fpr[1])
print('TPR:', tpr[1])
```
### Random Forest
```
%%time
clf_RF = RandomForestClassifier(random_state=0,max_depth=70,n_estimators=100).fit(X_train, y_train)
pred_y_testRF = clf_RF.predict(X_test)
print('Accuracy:', accuracy_score(y_test, pred_y_testRF))
f1 = f1_score(y_test, pred_y_testRF, average='weighted', zero_division=0)
print('F1 Score:', f1)
fpr, tpr, thresholds = roc_curve(y_test, pred_y_testRF)
print('FPR:', fpr[1])
print('TPR:', tpr[1])
```
### KNN
```
%%time
clf_KNN = KNeighborsClassifier(algorithm='brute',leaf_size=1,n_neighbors=2,weights='distance').fit(X_train, y_train)
pred_y_testKNN = clf_KNN.predict(X_test)
print('Accuracy:', accuracy_score(y_test, pred_y_testKNN))
f1 = f1_score(y_test, pred_y_testKNN)
print('F1 Score:', f1)
fpr, tpr, thresholds = roc_curve(y_test, pred_y_testKNN)
print('FPR:', fpr[1])
print('TPR:', tpr[1])
```
### CatBoost
```
%%time
clf_CB = CatBoostClassifier(random_state=0,depth=7,iterations=50,learning_rate=0.04).fit(X_train, y_train)
pred_y_testCB = clf_CB.predict(X_test)
print('Accuracy:', accuracy_score(y_test, pred_y_testCB))
f1 = f1_score(y_test, pred_y_testCB, average='weighted', zero_division=0)
print('F1 Score:', f1)
fpr, tpr, thresholds = roc_curve(y_test, pred_y_testCB)
print('FPR:', fpr[1])
print('TPR:', tpr[1])
```
## Model Evaluation
```
import pandas as pd, numpy as np
test_df = pd.read_csv("../KDDTest.csv")
test_df.shape
# Create feature matrix X and target vector y
y_eval = test_df['is_intrusion']
X_eval = test_df.drop(columns=['is_intrusion'])
X_eval = X_eval[selected_feat]
X_eval.shape
```
### Model Evaluation - Logistic Regression
```
modelLR = linear_model.LogisticRegression(n_jobs=-1, random_state=42, C=0.1)
modelLR.fit(X_train, y_train)
# Predict on the new unseen test data
y_evalpredLR = modelLR.predict(X_eval)
y_predLR = modelLR.predict(X_test)
train_scoreLR = modelLR.score(X_train, y_train)
test_scoreLR = modelLR.score(X_test, y_test)
print("Training accuracy is ", train_scoreLR)
print("Testing accuracy is ", test_scoreLR)
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score
print('Performance measures for test:')
print('--------')
print('Accuracy:', test_scoreLR)
print('F1 Score:',f1_score(y_test, y_predLR))
print('Precision Score:',precision_score(y_test, y_predLR))
print('Recall Score:', recall_score(y_test, y_predLR))
print('Confusion Matrix:\n', confusion_matrix(y_test, y_predLR))
```
### Cross validation - Logistic Regression
```
from sklearn.model_selection import cross_val_score
from sklearn import metrics
accuracy = cross_val_score(modelLR, X_eval, y_eval, cv=10, scoring='accuracy')
print("Accuracy: %0.5f (+/- %0.5f)" % (accuracy.mean(), accuracy.std() * 2))
f = cross_val_score(modelLR, X_eval, y_eval, cv=10, scoring='f1')
print("F1 Score: %0.5f (+/- %0.5f)" % (f.mean(), f.std() * 2))
precision = cross_val_score(modelLR, X_eval, y_eval, cv=10, scoring='precision')
print("Precision: %0.5f (+/- %0.5f)" % (precision.mean(), precision.std() * 2))
recall = cross_val_score(modelLR, X_eval, y_eval, cv=10, scoring='recall')
print("Recall: %0.5f (+/- %0.5f)" % (recall.mean(), recall.std() * 2))
```
### Model Evaluation - Naive Bayes
```
modelNB = GaussianNB(var_smoothing=1e-09)
modelNB.fit(X_train, y_train)
# Predict on the new unseen test data
y_evalpredNB = modelNB.predict(X_eval)
y_predNB = modelNB.predict(X_test)
train_scoreNB = modelNB.score(X_train, y_train)
test_scoreNB = modelNB.score(X_test, y_test)
print("Training accuracy is ", train_scoreNB)
print("Testing accuracy is ", test_scoreNB)
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score
print('Performance measures for test:')
print('--------')
print('Accuracy:', test_scoreNB)
print('F1 Score:',f1_score(y_test, y_predNB))
print('Precision Score:',precision_score(y_test, y_predNB))
print('Recall Score:', recall_score(y_test, y_predNB))
print('Confusion Matrix:\n', confusion_matrix(y_test, y_predNB))
```
### Cross validation - Naive Bayes
```
from sklearn.model_selection import cross_val_score
from sklearn import metrics
accuracy = cross_val_score(modelNB, X_eval, y_eval, cv=10, scoring='accuracy')
print("Accuracy: %0.5f (+/- %0.5f)" % (accuracy.mean(), accuracy.std() * 2))
f = cross_val_score(modelNB, X_eval, y_eval, cv=10, scoring='f1')
print("F1 Score: %0.5f (+/- %0.5f)" % (f.mean(), f.std() * 2))
precision = cross_val_score(modelNB, X_eval, y_eval, cv=10, scoring='precision')
print("Precision: %0.5f (+/- %0.5f)" % (precision.mean(), precision.std() * 2))
recall = cross_val_score(modelNB, X_eval, y_eval, cv=10, scoring='recall')
print("Recall: %0.5f (+/- %0.5f)" % (recall.mean(), recall.std() * 2))
```
### Model Evaluation - Random Forest
```
modelRF = RandomForestClassifier(random_state=0,max_depth=70,n_estimators=100)
modelRF.fit(X_train, y_train)
# Predict on the new unseen test data
y_evalpredRF = modelRF.predict(X_eval)
y_predRF = modelRF.predict(X_test)
train_scoreRF = modelRF.score(X_train, y_train)
test_scoreRF = modelRF.score(X_test, y_test)
print("Training accuracy is ", train_scoreRF)
print("Testing accuracy is ", test_scoreRF)
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score
print('Performance measures for test:')
print('--------')
print('Accuracy:', test_scoreRF)
print('F1 Score:', f1_score(y_test, y_predRF, average='weighted', zero_division=0))
print('Precision Score:', precision_score(y_test, y_predRF, average='weighted', zero_division=0))
print('Recall Score:', recall_score(y_test, y_predRF, average='weighted', zero_division=0))
print('Confusion Matrix:\n', confusion_matrix(y_test, y_predRF))
```
### Cross validation - Random Forest
```
from sklearn.model_selection import cross_val_score
from sklearn import metrics
accuracy = cross_val_score(modelRF, X_eval, y_eval, cv=10, scoring='accuracy')
print("Accuracy: %0.5f (+/- %0.5f)" % (accuracy.mean(), accuracy.std() * 2))
f = cross_val_score(modelRF, X_eval, y_eval, cv=10, scoring='f1')
print("F1 Score: %0.5f (+/- %0.5f)" % (f.mean(), f.std() * 2))
precision = cross_val_score(modelRF, X_eval, y_eval, cv=10, scoring='precision')
print("Precision: %0.5f (+/- %0.5f)" % (precision.mean(), precision.std() * 2))
recall = cross_val_score(modelRF, X_eval, y_eval, cv=10, scoring='recall')
print("Recall: %0.5f (+/- %0.5f)" % (recall.mean(), recall.std() * 2))
```
### Model Evaluation - KNN
```
modelKNN = KNeighborsClassifier(algorithm='brute',leaf_size=1,n_neighbors=2,weights='distance')
modelKNN.fit(X_train, y_train)
# Predict on the new unseen test data
y_evalpredKNN = modelKNN.predict(X_eval)
y_predKNN = modelKNN.predict(X_test)
train_scoreKNN = modelKNN.score(X_train, y_train)
test_scoreKNN = modelKNN.score(X_test, y_test)
print("Training accuracy is ", train_scoreKNN)
print("Testing accuracy is ", test_scoreKNN)
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score
print('Performance measures for test:')
print('--------')
print('Accuracy:', test_scoreKNN)
print('F1 Score:', f1_score(y_test, y_predKNN))
print('Precision Score:', precision_score(y_test, y_predKNN))
print('Recall Score:', recall_score(y_test, y_predKNN))
print('Confusion Matrix:\n', confusion_matrix(y_test, y_predKNN))
```
### Cross validation - KNN
```
from sklearn.model_selection import cross_val_score
from sklearn import metrics
accuracy = cross_val_score(modelKNN, X_eval, y_eval, cv=10, scoring='accuracy')
print("Accuracy: %0.5f (+/- %0.5f)" % (accuracy.mean(), accuracy.std() * 2))
f = cross_val_score(modelKNN, X_eval, y_eval, cv=10, scoring='f1')
print("F1 Score: %0.5f (+/- %0.5f)" % (f.mean(), f.std() * 2))
precision = cross_val_score(modelKNN, X_eval, y_eval, cv=10, scoring='precision')
print("Precision: %0.5f (+/- %0.5f)" % (precision.mean(), precision.std() * 2))
recall = cross_val_score(modelKNN, X_eval, y_eval, cv=10, scoring='recall')
print("Recall: %0.5f (+/- %0.5f)" % (recall.mean(), recall.std() * 2))
```
### Model Evaluation - CatBoost
```
modelCB = CatBoostClassifier(random_state=0,depth=7,iterations=50,learning_rate=0.04)
modelCB.fit(X_train, y_train)
# Predict on the new unseen test data
y_evalpredCB = modelCB.predict(X_eval)
y_predCB = modelCB.predict(X_test)
train_scoreCB = modelCB.score(X_train, y_train)
test_scoreCB = modelCB.score(X_test, y_test)
print("Training accuracy is ", train_scoreCB)
print("Testing accuracy is ", test_scoreCB)
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score
print('Performance measures for test:')
print('--------')
print('Accuracy:', test_scoreCB)
print('F1 Score:',f1_score(y_test, y_predCB, average='weighted', zero_division=0))
print('Precision Score:',precision_score(y_test, y_predCB, average='weighted', zero_division=0))
print('Recall Score:', recall_score(y_test, y_predCB, average='weighted', zero_division=0))
print('Confusion Matrix:\n', confusion_matrix(y_test, y_predCB))
```
### Cross validation - CatBoost
```
from sklearn.model_selection import cross_val_score
from sklearn import metrics
accuracy = cross_val_score(modelCB, X_eval, y_eval, cv=10, scoring='accuracy')
f = cross_val_score(modelCB, X_eval, y_eval, cv=10, scoring='f1')
precision = cross_val_score(modelCB, X_eval, y_eval, cv=10, scoring='precision')
recall = cross_val_score(modelCB, X_eval, y_eval, cv=10, scoring='recall')
print("Accuracy: %0.5f (+/- %0.5f)" % (accuracy.mean(), accuracy.std() * 2))
print("F1 Score: %0.5f (+/- %0.5f)" % (f.mean(), f.std() * 2))
print("Precision: %0.5f (+/- %0.5f)" % (precision.mean(), precision.std() * 2))
print("Recall: %0.5f (+/- %0.5f)" % (recall.mean(), recall.std() * 2))
```
|
github_jupyter
|
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.feature_selection import SelectFromModel
data = pd.read_csv('../DoHBrwTest.csv')
data.shape
data.head()
X_train, X_test, y_train, y_test = train_test_split(
data.drop(labels=['is_intrusion'], axis=1),
data['is_intrusion'],
test_size=0.3,
random_state=0)
X_train.shape, X_test.shape
sel_ = SelectFromModel(RandomForestClassifier(n_estimators=10, random_state=10))
sel_.fit(X_train, y_train)
sel_.get_support()
selected_feat = X_train.columns[(sel_.get_support())]
len(selected_feat)
selected_feat
pd.Series(sel_.estimator_.feature_importances_.ravel()).hist(bins=20)
plt.xlabel('Feature importance')
plt.ylabel('Number of Features')
plt.show()
print('total features: {}'.format((X_train.shape[1])))
print('selected features: {}'.format(len(selected_feat)))
print(
'features with importance greater than the mean importance of all features: {}'.format(
np.sum(sel_.estimator_.feature_importances_ >
sel_.estimator_.feature_importances_.mean())))
selected_feat
X_train = X_train[selected_feat]
X_test = X_test[selected_feat]
X_train.shape, X_test.shape
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_train)
X_train = scaler.transform(X_train)
from sklearn import linear_model
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from catboost import CatBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_curve, f1_score
from sklearn import metrics
from sklearn.model_selection import cross_val_score
%%time
clf_LR = linear_model.LogisticRegression(n_jobs=-1, random_state=42, C=0.1).fit(X_train, y_train)
pred_y_test = clf_LR.predict(X_test)
print('Accuracy:', accuracy_score(y_test, pred_y_test))
f1 = f1_score(y_test, pred_y_test)
print('F1 Score:', f1)
fpr, tpr, thresholds = roc_curve(y_test, pred_y_test)
print('FPR:', fpr[1])
print('TPR:', tpr[1])
%%time
clf_NB = GaussianNB(var_smoothing=1e-09).fit(X_train, y_train)
pred_y_testNB = clf_NB.predict(X_test)
print('Accuracy:', accuracy_score(y_test, pred_y_testNB))
f1 = f1_score(y_test, pred_y_testNB)
print('F1 Score:', f1)
fpr, tpr, thresholds = roc_curve(y_test, pred_y_testNB)
print('FPR:', fpr[1])
print('TPR:', tpr[1])
%%time
clf_RF = RandomForestClassifier(random_state=0,max_depth=70,n_estimators=100).fit(X_train, y_train)
pred_y_testRF = clf_RF.predict(X_test)
print('Accuracy:', accuracy_score(y_test, pred_y_testRF))
f1 = f1_score(y_test, pred_y_testRF, average='weighted', zero_division=0)
print('F1 Score:', f1)
fpr, tpr, thresholds = roc_curve(y_test, pred_y_testRF)
print('FPR:', fpr[1])
print('TPR:', tpr[1])
%%time
clf_KNN = KNeighborsClassifier(algorithm='brute',leaf_size=1,n_neighbors=2,weights='distance').fit(X_train, y_train)
pred_y_testKNN = clf_KNN.predict(X_test)
print('accuracy_score:', accuracy_score(y_test, pred_y_testKNN))
f1 = f1_score(y_test, pred_y_testKNN)
print('f1:', f1)
fpr, tpr, thresholds = roc_curve(y_test, pred_y_testKNN)
print('fpr:', fpr[1])
print('tpr:', tpr[1])
%%time
clf_CB = CatBoostClassifier(random_state=0,depth=7,iterations=50,learning_rate=0.04).fit(X_train, y_train)
pred_y_testCB = clf_CB.predict(X_test)
print('Accuracy:', accuracy_score(y_test, pred_y_testCB))
f1 = f1_score(y_test, pred_y_testCB, average='weighted', zero_division=0)
print('F1 Score:', f1)
fpr, tpr, thresholds = roc_curve(y_test, pred_y_testCB)
print('FPR:', fpr[1])
print('TPR:', tpr[1])
import pandas as pd, numpy as np
test_df = pd.read_csv("../KDDTest.csv")
test_df.shape
# Create feature matrix X and target vextor y
y_eval = test_df['is_intrusion']
X_eval = test_df.drop(columns=['is_intrusion'])
X_eval = X_eval[selected_feat]
X_eval.shape
modelLR = linear_model.LogisticRegression(n_jobs=-1, random_state=42, C=0.1)
modelLR.fit(X_train, y_train)
# Predict on the new unseen test data
y_evalpredLR = modelLR.predict(X_eval)
y_predLR = modelLR.predict(X_test)
train_scoreLR = modelLR.score(X_train, y_train)
test_scoreLR = modelLR.score(X_test, y_test)
print("Training accuracy is ", train_scoreLR)
print("Testing accuracy is ", test_scoreLR)
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score
print('Performance measures for test:')
print('--------')
print('Accuracy:', test_scoreLR)
print('F1 Score:',f1_score(y_test, y_predLR))
print('Precision Score:',precision_score(y_test, y_predLR))
print('Recall Score:', recall_score(y_test, y_predLR))
print('Confusion Matrix:\n', confusion_matrix(y_test, y_predLR))
from sklearn.model_selection import cross_val_score
from sklearn import metrics
accuracy = cross_val_score(modelLR, X_eval, y_eval, cv=10, scoring='accuracy')
print("Accuracy: %0.5f (+/- %0.5f)" % (accuracy.mean(), accuracy.std() * 2))
f = cross_val_score(modelLR, X_eval, y_eval, cv=10, scoring='f1')
print("F1 Score: %0.5f (+/- %0.5f)" % (f.mean(), f.std() * 2))
precision = cross_val_score(modelLR, X_eval, y_eval, cv=10, scoring='precision')
print("Precision: %0.5f (+/- %0.5f)" % (precision.mean(), precision.std() * 2))
recall = cross_val_score(modelLR, X_eval, y_eval, cv=10, scoring='recall')
print("Recall: %0.5f (+/- %0.5f)" % (recall.mean(), recall.std() * 2))
modelNB = GaussianNB(var_smoothing=1e-09)
modelNB.fit(X_train, y_train)
# Predict on the new unseen test data
y_evalpredNB = modelNB.predict(X_eval)
y_predNB = modelNB.predict(X_test)
train_scoreNB = modelNB.score(X_train, y_train)
test_scoreNB = modelNB.score(X_test, y_test)
print("Training accuracy is ", train_scoreNB)
print("Testing accuracy is ", test_scoreNB)
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score
print('Performance measures for test:')
print('--------')
print('Accuracy:', test_scoreNB)
print('F1 Score:',f1_score(y_test, y_predNB))
print('Precision Score:',precision_score(y_test, y_predNB))
print('Recall Score:', recall_score(y_test, y_predNB))
print('Confusion Matrix:\n', confusion_matrix(y_test, y_predNB))
from sklearn.model_selection import cross_val_score
from sklearn import metrics
accuracy = cross_val_score(modelNB, X_eval, y_eval, cv=10, scoring='accuracy')
print("Accuracy: %0.5f (+/- %0.5f)" % (accuracy.mean(), accuracy.std() * 2))
f = cross_val_score(modelNB, X_eval, y_eval, cv=10, scoring='f1')
print("F1 Score: %0.5f (+/- %0.5f)" % (f.mean(), f.std() * 2))
precision = cross_val_score(modelNB, X_eval, y_eval, cv=10, scoring='precision')
print("Precision: %0.5f (+/- %0.5f)" % (precision.mean(), precision.std() * 2))
recall = cross_val_score(modelNB, X_eval, y_eval, cv=10, scoring='recall')
print("Recall: %0.5f (+/- %0.5f)" % (recall.mean(), recall.std() * 2))
modelRF = RandomForestClassifier(random_state=0,max_depth=70,n_estimators=100)
modelRF.fit(X_train, y_train)
# Predict on the new unseen test data
y_evalpredRF = modelRF.predict(X_eval)
y_predRF = modelRF.predict(X_test)
train_scoreRF = modelRF.score(X_train, y_train)
test_scoreRF = modelRF.score(X_test, y_test)
print("Training accuracy is ", train_scoreRF)
print("Testing accuracy is ", test_scoreRF)
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score
print('Performance measures for test:')
print('--------')
print('Accuracy:', test_scoreRF)
print('F1 Score:', f1_score(y_test, y_predRF, average='weighted', zero_division=0))
print('Precision Score:', precision_score(y_test, y_predRF, average='weighted', zero_division=0))
print('Recall Score:', recall_score(y_test, y_predRF, average='weighted', zero_division=0))
print('Confusion Matrix:\n', confusion_matrix(y_test, y_predRF))
from sklearn.model_selection import cross_val_score
from sklearn import metrics
accuracy = cross_val_score(modelRF, X_eval, y_eval, cv=10, scoring='accuracy')
print("Accuracy: %0.5f (+/- %0.5f)" % (accuracy.mean(), accuracy.std() * 2))
f = cross_val_score(modelRF, X_eval, y_eval, cv=10, scoring='f1')
print("F1 Score: %0.5f (+/- %0.5f)" % (f.mean(), f.std() * 2))
precision = cross_val_score(modelRF, X_eval, y_eval, cv=10, scoring='precision')
print("Precision: %0.5f (+/- %0.5f)" % (precision.mean(), precision.std() * 2))
recall = cross_val_score(modelRF, X_eval, y_eval, cv=10, scoring='recall')
print("Recall: %0.5f (+/- %0.5f)" % (recall.mean(), recall.std() * 2))
modelKNN = KNeighborsClassifier(algorithm='brute',leaf_size=1,n_neighbors=2,weights='distance')
modelKNN.fit(X_train, y_train)
# Predict on the new unseen test data
y_evalpredKNN = modelKNN.predict(X_eval)
y_predKNN = modelKNN.predict(X_test)
train_scoreKNN = modelKNN.score(X_train, y_train)
test_scoreKNN = modelKNN.score(X_test, y_test)
print("Training accuracy is ", train_scoreKNN)
print("Testing accuracy is ", test_scoreKNN)
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score
print('Performance measures for test:')
print('--------')
print('Accuracy:', test_scoreKNN)
print('F1 Score:', f1_score(y_test, y_predKNN))
print('Precision Score:', precision_score(y_test, y_predKNN))
print('Recall Score:', recall_score(y_test, y_predKNN))
print('Confusion Matrix:\n', confusion_matrix(y_test, y_predKNN))
from sklearn.model_selection import cross_val_score
from sklearn import metrics
accuracy = cross_val_score(modelKNN, X_eval, y_eval, cv=10, scoring='accuracy')
print("Accuracy: %0.5f (+/- %0.5f)" % (accuracy.mean(), accuracy.std() * 2))
f = cross_val_score(modelKNN, X_eval, y_eval, cv=10, scoring='f1')
print("F1 Score: %0.5f (+/- %0.5f)" % (f.mean(), f.std() * 2))
precision = cross_val_score(modelKNN, X_eval, y_eval, cv=10, scoring='precision')
print("Precision: %0.5f (+/- %0.5f)" % (precision.mean(), precision.std() * 2))
recall = cross_val_score(modelKNN, X_eval, y_eval, cv=10, scoring='recall')
print("Recall: %0.5f (+/- %0.5f)" % (recall.mean(), recall.std() * 2))
modelCB = CatBoostClassifier(random_state=0,depth=7,iterations=50,learning_rate=0.04)
modelCB.fit(X_train, y_train)
# Predict on the new unseen test data
y_evalpredCB = modelCB.predict(X_eval)
y_predCB = modelCB.predict(X_test)
train_scoreCB = modelCB.score(X_train, y_train)
test_scoreCB = modelCB.score(X_test, y_test)
print("Training accuracy is ", train_scoreCB)
print("Testing accuracy is ", test_scoreCB)
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score
print('Performance measures for test:')
print('--------')
print('Accuracy:', test_scoreCB)
print('F1 Score:',f1_score(y_test, y_predCB, average='weighted', zero_division=0))
print('Precision Score:',precision_score(y_test, y_predCB, average='weighted', zero_division=0))
print('Recall Score:', recall_score(y_test, y_predCB, average='weighted', zero_division=0))
print('Confusion Matrix:\n', confusion_matrix(y_test, y_predCB))
from sklearn.model_selection import cross_val_score
from sklearn import metrics
accuracy = cross_val_score(modelCB, X_eval, y_eval, cv=10, scoring='accuracy')
f = cross_val_score(modelCB, X_eval, y_eval, cv=10, scoring='f1')
precision = cross_val_score(modelCB, X_eval, y_eval, cv=10, scoring='precision')
recall = cross_val_score(modelCB, X_eval, y_eval, cv=10, scoring='recall')
print("Accuracy: %0.5f (+/- %0.5f)" % (accuracy.mean(), accuracy.std() * 2))
print("F1 Score: %0.5f (+/- %0.5f)" % (f.mean(), f.std() * 2))
print("Precision: %0.5f (+/- %0.5f)" % (precision.mean(), precision.std() * 2))
print("Recall: %0.5f (+/- %0.5f)" % (recall.mean(), recall.std() * 2))
# Data preparation
```
import sqlite3
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import Cdf
import Pmf
# suppress unnecessary warnings
import warnings
warnings.filterwarnings("ignore", module="numpy")
# define global plot parameters
params = {'axes.labelsize' : 12, 'axes.titlesize' : 12,
'font.size' : 12, 'legend.fontsize' : 12,
'xtick.labelsize' : 12, 'ytick.labelsize' : 12}
plt.rcParams.update(params)
plt.rcParams.update({'figure.max_open_warning': 0})
# connect to database and get data
conn = sqlite3.connect('data/youtube-traceroute.db')
# read necessary tables from database
date_cols = ['dtime']
deltas = pd.read_sql_query('select * from deltas', conn, parse_dates = date_cols)
conn.close()
# pad/fill missing data to display time series correctly
padded = pd.DataFrame(pd.date_range(start = '2016-05-15', end = '2018-05-31', freq = 'h'), columns = ['padded_time'])
deltas = pd.merge(deltas, padded, how = 'outer', left_on = 'dtime', right_on = 'padded_time')
deltas.drop('dtime', axis = 1, inplace = True)
deltas.rename(columns = { 'padded_time' : 'dtime' }, inplace = True)
# add new columns for year, month and day for more refined grouping
deltas['year'] = deltas['dtime'].dt.year
deltas['month'] = deltas['dtime'].dt.month
deltas['day'] = deltas['dtime'].dt.day
```
## Aggregated by month
### Deltas of metrics between IPv4 and IPv6
```
# pack relevant metrics and column names together
info = [('TTL', 'ttl_delta'),
('RTT', 'rtt_delta')]
# define data range for TTL deltas
ttl_yticks = range(-6, 7, 2)
# create plots for both metrics
for (metric, col) in info:
# create plots
ts_fig, ts_ax = plt.subplots(figsize = (5, 2))
bp = deltas.boxplot(column = [col], by = ['year', 'month'], ax = ts_ax, sym = "",
medianprops = { 'linewidth' : 1.5 }, return_type = 'dict')
# workaround for changing color of median line
for key in bp.keys():
for item in bp[key]['medians']:
item.set_color('red')
# PLOT FORMATTING
ts_fig.suptitle('')
ts_ax.set_title('') # ('%s time series' % metric)
ts_ax.set_xlabel('')
# labeling of y-axis for either TTL or RTT
if metric == 'TTL':
ts_ax.set_ylabel('TTL delta')
ts_ax.set_ylim([-6.5, 6.5])
ts_ax.yaxis.set_ticks(ttl_yticks)
ts_ax.yaxis.labelpad = 11
elif metric == 'RTT':
ts_ax.set_yscale('symlog')
ts_ax.set_ylabel('RTT delta [ms]')
from matplotlib.ticker import ScalarFormatter
ts_ax.yaxis.set_major_formatter(ScalarFormatter())
# adjust x-axis labeling and ticks
major_ticklabels = ts_ax.xaxis.get_majorticklabels()
for ticklabel in major_ticklabels:
label = ticklabel.get_text()[1:-1] # ignore ( ) at beginning and end of string
y, m = label.split(', ')
y = y[:-2]
m = m[:-2]
if int(m) in [5, 8, 11, 2]: # skip a few months
label = m.zfill(2) + '/\n' + y
else:
label = ''
ticklabel.set_text(label)
# customize grid appearance
ts_ax.grid(False)
ts_ax.spines['right'].set_color('none')
ts_ax.spines['top'].set_color('none')
ts_ax.yaxis.set_ticks_position('left')
ts_ax.xaxis.set_ticks_position('bottom')
ts_ax.spines['bottom'].set_position(('axes', -0.03))
ts_ax.spines['left'].set_position(('axes', -0.03))
ts_ax.set_xticklabels(major_ticklabels, rotation = 0)
# hide superfluous tick lines
i = 0
for t in ts_ax.xaxis.get_ticklines()[::1]:
if i not in range(0, 50, 6):
t.set_visible(False)
i = i + 1
# annotate plot to show directions where IPv6 is better/worse
if metric == 'TTL':
ts_ax.annotate('', xy = (1.05, 0.45),
xycoords = 'axes fraction',
xytext = (1.05, 0),
arrowprops = dict(arrowstyle = "<-"))
ts_ax.annotate('', xy = (1.05, 1),
xycoords = 'axes fraction',
xytext = (1.05, 0.55),
arrowprops = dict(arrowstyle="->"))
ts_ax.text(27.25, -4, " IPv6\n slower", rotation = 90)
ts_ax.text(27.25, 2.5, " IPv6\n faster", rotation = 90)
elif metric == 'RTT':
ts_ax.annotate('', xy = (1.05, 0.45),
xycoords = 'axes fraction',
xytext = (1.05, 0),
arrowprops = dict(arrowstyle = "<-"))
ts_ax.annotate('', xy = (1.05, 1),
xycoords = 'axes fraction',
xytext = (1.05, 0.6),
arrowprops = dict(arrowstyle="->"))
ts_ax.text(27.25, -3.5, " IPv6\n slower", rotation = 90)
ts_ax.text(27.25, 1.5, " IPv6\nfaster", rotation = 90)
# dotted horizontal line to separate positive and negative delta region
ts_ax.axhline(0, linestyle = 'dotted', color = 'black', linewidth = 0.5)
ts_ax.axvspan(15.75, 21.25, alpha = 0.25, color = 'lightgrey')
ts_ax.axvline(15.75, color = 'black', linestyle = 'dotted', linewidth = 0.5)
ts_ax.axvline(21.25, color = 'black', linestyle = 'dotted', linewidth = 0.5)
# saving and showing plot
#ts_fig.savefig('plots/ts_%s_delta_by_month.pdf' % metric.lower(), bbox_inches = 'tight')
plt.show()
plt.close('all')
```
### Absolute values of metrics
```
# pack relevant metrics, descriptors and column names together
info = [('TTL', 'IPv4', 'max(ttl)_v4'),
('TTL', 'IPv6', 'max(ttl)_v6'),
('RTT', 'IPv4', 'rtt_v4'),
('RTT', 'IPv6', 'rtt_v6')]
# create plots for all tuples
for (metric, version, col) in info:
# create plots
ts_fig, ts_ax = plt.subplots(figsize = (5, 2))
bp = deltas.boxplot(column = [col], by = ['year', 'month'], ax = ts_ax, sym = "",
medianprops = { 'linewidth' : 1.5 }, return_type = 'dict')
# workaround for changing color of median line
for key in bp.keys():
for item in bp[key]['medians']:
item.set_color('red')
# PLOT FORMATTING
ts_fig.suptitle('')
ts_ax.set_title('') # ('%s time series' % metric)
ts_ax.set_xlabel('')
# labeling of y-axis for either TTL or RTT
if metric == 'TTL':
ts_ax.set_ylabel('TTL')
ts_ax.set_ylim([-1, 21])
elif metric == 'RTT':
ts_ax.set_ylabel('RTT [ms]')
ts_ax.set_ylim([-3, 63])
# adjust x-axis labeling and ticks
major_ticklabels = ts_ax.xaxis.get_majorticklabels()
for ticklabel in major_ticklabels:
label = ticklabel.get_text()[1:-1] # ignore ( ) at beginning and end of string
y, m = label.split(', ')
y = y[:-2]
m = m[:-2]
if int(m) in [5, 8, 11, 2]: # skip a few months
label = m.zfill(2) + '/\n' + y
else:
label = ''
ticklabel.set_text(label)
# customize grid appearance
ts_ax.grid(False)
ts_ax.spines['right'].set_color('none')
ts_ax.spines['top'].set_color('none')
ts_ax.yaxis.set_ticks_position('left')
ts_ax.xaxis.set_ticks_position('bottom')
ts_ax.spines['bottom'].set_position(('axes', -0.03))
ts_ax.spines['left'].set_position(('axes', -0.03))
ts_ax.set_xticklabels(major_ticklabels, rotation = 0)
# add labeling for IP address family
ax1_ = ts_ax.twinx()
ax1_.spines['right'].set_color('none')
ax1_.spines['top'].set_color('none')
ax1_.spines['left'].set_color('none')
ax1_.spines['bottom'].set_color('none')
ax1_.yaxis.set_ticks_position('none')
ax1_.set_ylabel('%s' % version)
plt.setp(ax1_.get_yticklabels(), visible = False)
# hide superfluous tick lines
i = 0
for t in ts_ax.xaxis.get_ticklines()[::1]:
if i not in range(0, 50, 6):
t.set_visible(False)
i = i + 1
ts_ax.axvspan(15.75, 21.25, alpha = 0.25, color = 'lightgrey')
ts_ax.axvline(15.75, color = 'black', linestyle = 'dotted', linewidth = 0.5)
ts_ax.axvline(21.25, color = 'black', linestyle = 'dotted', linewidth = 0.5)
# saving and showing plot
#ts_fig.savefig('plots/ts_%s_%s_by_month.pdf' % (metric.lower(), version[-2:]), bbox_inches = 'tight')
plt.show()
plt.close('all')
```
### Absolute values in a single plot
```
# update plot parameters so the combined figure's text isn't oversized
fsize = 10
params = {'axes.labelsize' : fsize, 'axes.titlesize' : fsize,
'font.size' : fsize, 'legend.fontsize' : fsize,
'xtick.labelsize' : fsize, 'ytick.labelsize' : fsize}
plt.rcParams.update(params)
# pack relevant plot coordinates, metrics, descriptors and column names together
info = [(0, 0, 'TTL', 'IPv4', 'max(ttl)_v4'),
(0, 1, 'TTL', 'IPv6', 'max(ttl)_v6'),
(1, 0, 'RTT [ms]', 'IPv4', 'rtt_v4'),
(1, 1, 'RTT [ms]', 'IPv6', 'rtt_v6')]
# create plot grid
ts_fig, ts_axes = plt.subplots(figsize = (8, 2.5), ncols = 2, nrows = 2)
# create plots for all tuples
for (i, j, metric, version, col) in info:
# create plots
bp = deltas.boxplot(column = [col], by = ['year', 'month'], ax = ts_axes[i, j], sym = "",
medianprops = { 'linewidth' : 1.5 }, return_type = 'dict')
# workaround for changing color of median line
for key in bp.keys():
for item in bp[key]['medians']:
item.set_color('red')
# PLOT FORMATTING
ts_axes[i, j].set_title('')
ts_axes[i, j].set_xlabel('')
# labeling of y-axis for either TTL or RTT
if metric == 'TTL':
ts_axes[i, j].set_ylim([-1, 21])
ts_axes[i, j].set_yticks(np.arange(0, 21, 5))
elif metric == 'RTT [ms]':
ts_axes[i, j].set_ylim([-3, 63])
ts_axes[i, j].set_yticks(np.arange(0, 61, 20))
if j==0:
ts_axes[i, j].set_ylabel(metric)
else:
ts_axes[i, j].set_yticklabels("")
# adjust x-axis labeling and ticks
major_ticklabels = ts_axes[i, j].xaxis.get_majorticklabels()
for ticklabel in major_ticklabels:
if i==0:
ticklabel.set_text('')
else:
label = ticklabel.get_text()[1:-1] # ignore ( ) at beginning and end of string
y, m = label.split(', ')
y = y[:-2]
m = m[:-2]
# format months to look nicely
if int(m) in [5]:
label = m.zfill(2) + '/\n' + y
elif int(m) in [2, 8, 11]:
label = m.zfill(2)
else:
label = ''
ticklabel.set_text(label)
# customize grid appearance
ts_axes[i, j].grid(False)
ts_axes[i, j].spines['right'].set_color('none')
ts_axes[i, j].spines['top'].set_color('none')
ts_axes[i, j].yaxis.set_ticks_position('left')
ts_axes[i, j].xaxis.set_ticks_position('bottom')
ts_axes[i, j].spines['bottom'].set_position(('axes', -0.03))
ts_axes[i, j].spines['left'].set_position(('axes', -0.03))
ts_axes[i, j].set_xticklabels(major_ticklabels, rotation = 0)
# hide superfluous tick lines
n = 0
for t in ts_axes[i, j].xaxis.get_ticklines()[::1]:
if n not in range(0, 50, 6):
t.set_visible(False)
n = n + 1
ts_axes[i, j].axvspan(15.75, 21.25, alpha = 0.25, color = 'lightgrey')
ts_axes[i, j].axvline(15.75, color = 'black', linestyle = 'dotted', linewidth = 0.5)
ts_axes[i, j].axvline(21.25, color = 'black', linestyle = 'dotted', linewidth = 0.5)
# set plot titles to finish
ts_axes[0, 0].set_title('IPv4')
ts_axes[0, 1].set_title('IPv6')
# add some space to not look too crammed
plt.subplots_adjust(hspace=0.35)
# remove figure title
ts_fig.suptitle('')
# saving and showing plot
ts_fig.savefig('plots/ts_by_month.pdf', bbox_inches = 'tight')
plt.show()
plt.close('all')
```
<center>
<h1>DatatableTon</h1>
💯 datatable exercises
<br>
<br>
<a href='https://github.com/vopani/datatableton/blob/master/LICENSE'>
<img src='https://img.shields.io/badge/license-Apache%202.0-blue.svg?logo=apache'>
</a>
<a href='https://github.com/vopani/datatableton'>
<img src='https://img.shields.io/github/stars/vopani/datatableton?color=yellowgreen&logo=github'>
</a>
<a href='https://twitter.com/vopani'>
<img src='https://img.shields.io/twitter/follow/vopani'>
</a>
</center>
<center>
This is Set 1: Datatable Introduction (Exercises 1-10) of <b>DatatableTon</b>: <i>💯 datatable exercises</i>
<br>
You can find all the exercises and solutions on <a href="https://github.com/vopani/datatableton#exercises-">GitHub</a>
</center>
**Exercise 1: Install the latest version of datatable**
```
!python3 -m pip install -U pip
!python3 -m pip install -U datatable
```
**Exercise 2: Import the datatable package as `dt`**
```
import datatable as dt
```
**Exercise 3: Display the version of the datatable package**
```
dt.__version__
```
**Exercise 4: Create an empty datatable frame and assign it to `data`**
```
data = dt.Frame()
```
**Exercise 5: Create a frame with a column `v1` with integer values from 0 to 9, a column `v2` with values ['Y', 'O', 'U', 'C', 'A', 'N', 'D', 'O', 'I', 'T'] and assign it to `data`**
```
data = dt.Frame(v1=range(10), v2=['Y', 'O', 'U', 'C', 'A', 'N', 'D', 'O', 'I', 'T'])
```
**Exercise 6: Display `data`**
```
data
```
**Exercise 7: Display the first 5 rows and the last 3 rows of `data`**
```
data.head(5)
data.tail(3)
```
**Exercise 8: Display the number of rows and number of columns in `data`**
```
data.nrows
data.ncols
```
**Exercise 9: Display the shape of `data`**
```
data.shape
```
**Exercise 10: Display the column names of `data`**
```
data.names
```
✅ This completes Set 1: Datatable Introduction (Exercises 1-10) of **DatatableTon**: *💯 datatable exercises*
#### Set 02 • Files and Formats • Beginner • Exercises 11-20
| Style | Colab | Kaggle | Binder | GitHub |
| ----- | ----- | ------ | ------ | ------ |
| Exercises | [](https://colab.research.google.com/github/vopani/datatableton/blob/main/notebooks/02_files_and_formats_exercises.ipynb) | [](https://www.kaggle.com/rohanrao/datatableton-files-and-formats-exercises) | [](https://mybinder.org/v2/gh/vopani/datatableton/main?filepath=notebooks%2F02_files_and_formats_exercises.ipynb) | [](https://github.com/vopani/datatableton/blob/main/notebooks/02_files_and_formats_exercises.ipynb) |
| Solutions | [](https://colab.research.google.com/github/vopani/datatableton/blob/main/notebooks/02_files_and_formats_solutions.ipynb) | [](https://www.kaggle.com/rohanrao/datatableton-files-and-formats-solutions) | [](https://mybinder.org/v2/gh/vopani/datatableton/main?filepath=notebooks%2F02_files_and_formats_solutions.ipynb) | [](https://github.com/vopani/datatableton/blob/main/notebooks/02_files_and_formats_solutions.ipynb) |
You can find all the exercises and solutions on [GitHub](https://github.com/vopani/datatableton#exercises-)
Classical probability distributions can be written as a stochastic vector, which can be transformed to another stochastic vector by applying a stochastic matrix. In other words, the evolution of stochastic vectors can be described by a stochastic matrix.
Quantum states also evolve and their evolution is described by unitary matrices. This leads to some interesting properties in quantum computing. Unitary evolution is true for a closed system, that is, a quantum system perfectly isolated from the environment. This is not the case in the quantum computers we have today: these are open quantum systems that evolve differently due to uncontrolled interactions with the environment. In this notebook, we take a glimpse at both types of evolution.
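As a small illustration, here is a sketch in NumPy of a left-stochastic matrix (columns summing to one) mapping one stochastic vector to another:

```
import numpy as np

p = np.array([0.6, 0.4])        # a stochastic vector: non-negative entries, sums to 1
S = np.array([[0.9, 0.2],
              [0.1, 0.8]])      # a left-stochastic matrix: each column sums to 1
p_next = S @ p
print(p_next, p_next.sum())     # the result is again a stochastic vector
```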
# Unitary evolution
A unitary matrix has the property that its conjugate transpose is its inverse. Formally, it means that a matrix $U$ is unitary if $UU^\dagger=U^\dagger U=\mathbb{1}$, where $^\dagger$ stands for conjugate transpose, and $\mathbb{1}$ is the identity matrix. A quantum computer is a machine that implements unitary operations.
As an example, we have seen the NOT operation before, which is performed by the X gate in a quantum computer. While the generic discussion on gates will only occur in a subsequent notebook, we can study the properties of the X gate. Its matrix representation is $X = \begin{bmatrix} 0 & 1\\ 1 & 0\end{bmatrix}$. Let's check if it is indeed unitary:
```
import numpy as np
X = np.array([[0, 1], [1, 0]])
print("XX^dagger")
print(X @ X.T.conj())
print("X^daggerX")
print(X.T.conj() @ X)
```
It looks like a legitimate unitary operation. The unitary nature ensures that the $l_2$ norm is preserved, that is, quantum states are mapped to quantum states.
```
print("The norm of the state |0> before applying X")
zero_ket = np.array([[1], [0]])
print(np.linalg.norm(zero_ket))
print("The norm of the state after applying X")
print(np.linalg.norm(X @ zero_ket))
```
Furthermore, since the unitary operation is a matrix, it is linear. Measurements are also represented by matrices. These two observations imply that everything a quantum computer implements is actually linear. If we want to see some form of nonlinearity, that must involve some classical intervention.
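As a quick numerical sanity check of linearity, here is a small sketch using the `X` matrix defined above: applying it to a superposition gives the same result as applying it to the basis states and then superposing.

```
a, b = 0.6, 0.8
ket0 = np.array([[1], [0]])
ket1 = np.array([[0], [1]])
superposition = a * ket0 + b * ket1
print(np.allclose(X @ superposition, a * (X @ ket0) + b * (X @ ket1)))
```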
Another consequence of the unitary operations is reversibility. Any unitary operation can be reversed. Quantum computing libraries often provide a function to reverse entire circuits. Reversing the X gate is simple: we just apply it again (its conjugate transpose is itself, therefore $X^2=\mathbb{1}$).
```
import numpy as np
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
from qiskit import execute
from qiskit import Aer
from qiskit.tools.visualization import circuit_drawer
np.set_printoptions(precision=3, suppress=True)
backend_statevector = Aer.get_backend('statevector_simulator')
q = QuantumRegister(1)
c = ClassicalRegister(1)
circuit = QuantumCircuit(q, c)
circuit.x(q[0])
circuit.x(q[0])
job = execute(circuit, backend_statevector)
print(job.result().get_statevector(circuit))
```
which is exactly $|0\rangle$ as we would expect.
In the next notebook, you will learn about classical and quantum many-body systems and the Hamiltonian. In the notebook on adiabatic quantum computing, you will learn that a unitary operation is in fact the Schrödinger equation solved for a Hamiltonian for some duration of time. This connects the computer science way of thinking about gates and unitary operations to actual physics, but there is some learning to be done before we can make that connection. Before that, let us take another look at the interaction with the environment.
# Interaction with the environment: open systems
Actual quantum systems are seldom closed: they constantly interact with their environment in a largely uncontrolled fashion, which causes them to lose coherence. This is true for current and near-term quantum computers too.
<img src="../figures/open_system.svg" alt="A quantum processor as an open quantum system" style="width: 400px;"/>
This also means that their actual time evolution is not described by a unitary matrix as we would want it, but by some other operator (the technical name for it is a completely positive trace-preserving map).
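To give a flavour of what such a map looks like, here is a minimal sketch of a single-qubit bit-flip channel written with Kraus operators in plain NumPy (an illustration, not a library API). The map $\rho \mapsto \sum_k K_k \rho K_k^\dagger$ preserves the trace because $\sum_k K_k^\dagger K_k = \mathbb{1}$.

```
p = 0.2                                       # probability of a bit flip
K0 = np.sqrt(1 - p) * np.eye(2)
K1 = np.sqrt(p) * np.array([[0, 1], [1, 0]])
rho = np.array([[1, 0], [0, 0]])              # density matrix of |0><0|
rho_noisy = K0 @ rho @ K0.T.conj() + K1 @ rho @ K1.T.conj()
print(rho_noisy)
print(np.trace(rho_noisy))                    # the trace is still 1
```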
Quantum computing libraries often offer a variety of noise models that mimic different types of interaction, and increasing the strength of the interaction with the environment leads to faster decoherence. The timescale for decoherence is often called $T_2$ time. Among a couple of other parameters, $T_2$ time is critically important for the number of gates or the duration of the quantum computation we can perform.
A very cheap way of studying the effects of decoherence is mixing a pure state with the maximally mixed state $\mathbb{1}/2^d$, where $d$ is the number of qubits, with some visibility parameter in $[0,1]$. This way we do not have to specify noise models or any other map modelling decoherence. For instance, we can mix the $|\phi^+\rangle$ state with the maximally mixed state:
```
def mixed_state(pure_state, visibility):
density_matrix = pure_state @ pure_state.T.conj()
maximally_mixed_state = np.eye(4)/2**2
return visibility*density_matrix + (1-visibility)*maximally_mixed_state
ϕ = np.array([[1],[0],[0],[1]])/np.sqrt(2)
print("Maximum visibility is a pure state:")
print(mixed_state(ϕ, 1.0))
print("The state is still entangled with visibility 0.8:")
print(mixed_state(ϕ, 0.8))
print("Entanglement is lost by 0.6:")
print(mixed_state(ϕ, 0.6))
print("Barely any coherence remains by 0.2:")
print(mixed_state(ϕ, 0.2))
```
Another way to look at what happens to a quantum state in an open system is through equilibrium processes. Think of a cup of coffee: left alone, it will equilibrate with the environment, eventually reaching the temperature of the environment. This includes energy exchange. A quantum state does the same thing and the environment has a defined temperature, just like the environment of a cup of coffee.
The equilibrium state is called the thermal state. It has a very specific structure and we will revisit it, but for now, suffice to say that the energy of the samples pulled out of a thermal state follows a Boltzmann distribution. The Boltzmann -- also called Gibbs -- distribution is described as $P(E_i) = \frac {e^{-E_{i}/T}}{\sum _{j=1}^{M}{e^{-E_{j}/T}}}$, where $E_i$ is an energy, and $M$ is the total number of possible energy levels. Temperature enters the definition: the higher the temperature, the closer we are to the uniform distribution. In the infinite temperature limit, it recovers the uniform distribution. At high temperatures, all energy levels have an equal probability. In contrast, at zero temperature, the entire probability mass is concentrated on the lowest energy level, the ground state energy. To get a sense of this, let's plot the Boltzmann distribution with vastly different temperatures:
```
import matplotlib.pyplot as plt
temperatures = [.5, 5, 2000]
energies = np.linspace(0, 20, 100)
fig, ax = plt.subplots()
for i, T in enumerate(temperatures):
probabilities = np.exp(-energies/T)
Z = probabilities.sum()
probabilities /= Z
ax.plot(energies, probabilities, linewidth=3, label = "$T_" + str(i+1)+"$")
ax.set_xlim(0, 20)
ax.set_ylim(0, 1.2*probabilities.max())
ax.set_xticks([])
ax.set_yticks([])
ax.set_xlabel('Energy')
ax.set_ylabel('Probability')
ax.legend()
```
Here $T_1<T_2<T_3$. Notice that $T_1$ is a low temperature, and therefore it is highly peaked at low energy levels. In contrast, $T_3$ is a very high temperature and the probability distribution is almost completely flat.
# Think Bayes
Second Edition
Copyright 2020 Allen B. Downey
License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
```
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
import os
if not os.path.exists('utils.py'):
!wget https://github.com/AllenDowney/ThinkBayes2/raw/master/code/soln/utils.py
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from empiricaldist import Pmf
from utils import write_pmf
```
## Distributions
In statistics a **distribution** is a set of values and their
corresponding probabilities.
For example, if you toss a coin, there are two possible outcomes with
approximately equal probabilities.
If you roll a six-sided die, the set of possible values is the numbers 1
to 6, and the probability associated with each value is 1/6.
To represent distributions, we'll use a library called `empiricaldist`.
An "empirical" distribution is based on data, as opposed to a
theoretical distribution.
This library provides a class called `Pmf`, which represents a
**probability mass function**.
`empiricaldist` is available from the Python Package Index (PyPI). You
can [download it here](https://pypi.org/project/empiricaldist/) or
install it with `pip`. For more details, see the Preface.
To use `Pmf` you can import it like this:
```
from empiricaldist import Pmf
```
The following example makes a `Pmf` that represents the outcome of a
coin toss.
```
coin = Pmf()
coin['heads'] = 1/2
coin['tails'] = 1/2
coin
```
The two outcomes have the same probability, $0.5$.
This example makes a `Pmf` that represents the distribution of outcomes
of a six-sided die:
```
die = Pmf()
for x in [1,2,3,4,5,6]:
die[x] = 1
die
```
`Pmf` creates an empty `Pmf` with no values. The `for` loop adds the
values $1$ through $6$, each with "probability" $1$.
In this `Pmf`, the probabilities don't add up to 1, so they are not
really probabilities. We can use `normalize` to make them add up to 1.
```
die.normalize()
```
The return value from `normalize` is the sum of the probabilities before normalizing.
Now we can see that the total is 1 (at least within floating-point error).
```
die
die.sum()
```
Another way to make a `Pmf` is to provide a sequence of values.
In this example, every value appears once, so they all have the same probability.
```
die = Pmf.from_seq([1,2,3,4,5,6])
die
```
More generally, values can appear more than once, as in this example:
```
letters = Pmf.from_seq(list('Mississippi'))
write_pmf(letters, 'table02-01')
letters
```
The `Pmf` class inherits from a Pandas `Series`, so anything you can do
with a `Series`, you can also do with a `Pmf`.
For example, you can use the bracket operator to look up a value and get
the corresponding probability.
```
letters['s']
```
In the word "Mississippi", about 36% of the letters are "s".
However, if you ask for the probability of a value that's not in the
distribution, you get a `KeyError`.
```
try:
letters['t']
except KeyError as e:
print('KeyError')
```
You can also call a `Pmf` as if it were a function, with a value in parentheses.
```
letters('s')
```
If the value is in the distribution the results are the same.
But if the value is not in the distribution, the result is $0$, not an error.
```
letters('t')
```
With parentheses, you can also provide a sequence of values and get a sequence of probabilities.
```
die([1,4,7])
```
As these examples show, the values in a `Pmf` can be integers or strings. In general, they can be any type that can be stored in the index of a Pandas `Series`.
If you are familiar with Pandas, that will help you work with `Pmf` objects. But I will explain what you need to know as we go along.
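For example, the quantities can also be floating-point numbers. Here's a quick sketch with made-up values:

```
measurements = Pmf.from_seq([1.5, 1.5, 2.5, 3.5])
measurements(1.5)
```

Half of the values are 1.5, so its probability is 0.5.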
## The cookie problem
In this section I'll use a `Pmf` to solve the cookie problem from Section XX.
Here's the statement of the problem again:
> Suppose there are two bowls of cookies.
> Bowl 1 contains 30 vanilla cookies and 10 chocolate cookies.
> Bowl 2 contains 20 of each.
>
> Now suppose you choose one of the bowls at random and, without
> looking, select a cookie at random. The cookie is vanilla. What is
> the probability that it came from Bowl 1?
Here's a `Pmf` that represents the two hypotheses and their prior probabilities:
```
prior = Pmf.from_seq(['Bowl 1', 'Bowl 2'])
prior
```
This distribution, which contains the prior probability for each hypothesis, is called (wait for it) the **prior distribution**.
To update the distribution based on new data (the vanilla cookie),
we multiply the priors by the likelihoods. The likelihood
of drawing a vanilla cookie from Bowl 1 is `3/4`. The likelihood
for Bowl 2 is `1/2`.
```
likelihood_vanilla = [0.75, 0.5]
posterior = prior * likelihood_vanilla
posterior
```
The result is the unnormalized posteriors.
We can use `normalize` to compute the posterior probabilities:
```
posterior.normalize()
```
The return value from `normalize` is the total probability of the data, which is $5/8$.
`posterior`, which contains the posterior probability for each hypothesis, is called (wait now) the **posterior distribution**.
```
posterior
```
From the posterior distribution we can select the posterior probability for Bowl 1:
```
posterior('Bowl 1')
```
And the answer is 0.6.
One benefit of using `Pmf` objects is that it is easy to do successive updates with more data.
For example, suppose you put the first cookie back (so the contents of the bowls don't change) and draw again from the same bowl.
If the second cookie is also vanilla, we can do a second update like this:
```
posterior *= likelihood_vanilla
posterior.normalize()
posterior
```
Now the posterior probability for Bowl 1 is almost 70%.
But suppose we do the same thing again and get a chocolate cookie.
Here's the update.
```
likelihood_chocolate = [0.25, 0.5]
posterior *= likelihood_chocolate
posterior.normalize()
posterior
```
Now the posterior probability for Bowl 1 is about 53%.
After two vanilla cookies and one chocolate, the posterior probabilities are close to 50/50.
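These successive updates can also be written as a loop over the observed cookies. Here's a small sketch of that pattern (the `likelihoods` dictionary is just for illustration); it reproduces the posterior we just computed:

```
likelihoods = {'vanilla': [0.75, 0.5], 'chocolate': [0.25, 0.5]}
posterior = Pmf.from_seq(['Bowl 1', 'Bowl 2'])
for cookie in ['vanilla', 'vanilla', 'chocolate']:
    posterior *= likelihoods[cookie]
    posterior.normalize()
posterior
```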
## 101 Bowls
Next let's solve a cookie problem with 101 bowls:
* Bowl 0 contains no vanilla cookies,
* Bowl 1 contains 1% vanilla cookies,
* Bowl 2 contains 2% vanilla cookies,
and so on, up to
* Bowl 99 contains 99% vanilla cookies, and
* Bowl 100 contains all vanilla cookies.
As in the previous version, there are only two kinds of cookies, vanilla and chocolate. So Bowl 0 is all chocolate cookies, Bowl 1 is 99% chocolate, and so on.
Suppose we choose a bowl at random, choose a cookie at random, and it turns out to be vanilla. What is the probability that the cookie came from Bowl $x$, for each value of $x$?
To solve this problem, I'll use `np.arange` to represent 101 hypotheses, numbered from 0 to 100.
```
hypos = np.arange(101)
```
The result is a NumPy array, which we can use to make the prior distribution:
```
prior = Pmf(1, hypos)
prior.normalize()
```
As this example shows, we can initialize a `Pmf` with two parameters.
The first parameter is the prior probability; the second parameter is a sequence of values.
In this example, the probabilities are all the same, so we only have to provide one of them; it gets "broadcast" across the hypotheses.
Since all hypotheses have the same prior probability, this distribution is **uniform**.
The likelihood of the data is the fraction of vanilla cookies in each bowl, which we can calculate using `hypos`:
```
likelihood_vanilla = hypos/100
```
Now we can compute the posterior distribution in the usual way:
```
posterior1 = prior * likelihood_vanilla
posterior1.normalize()
```
The following figure shows the prior distribution and the posterior distribution after one vanilla cookie.
```
from utils import decorate, savefig
def decorate_bowls(title):
decorate(xlabel='Bowl #',
ylabel='PMF',
title=title)
prior.plot(label='prior', color='gray')
posterior1.plot(label='posterior')
decorate_bowls('Posterior after one vanilla cookie')
```
The posterior probability of Bowl 0 is 0 because it contains no vanilla cookies.
The posterior probability of Bowl 100 is the highest because it contains the most vanilla cookies.
In between, the shape of the posterior distribution is a line because the likelihoods are proportional to the bowl numbers.
Now suppose we put the cookie back, draw again from the same bowl, and get another vanilla cookie.
Here's the update after the second cookie:
```
posterior2 = posterior1 * likelihood_vanilla
posterior2.normalize()
```
And here's what the posterior distribution looks like.
```
posterior2.plot(label='posterior')
decorate_bowls('Posterior after two vanilla cookies')
```
The likelihood function is a line, so after two vanilla cookies the posterior is proportional to the square of the bowl number, which is why its shape is a parabola.
At this point the high-numbered bowls are the most likely because they contain the most vanilla cookies, and the low-numbered bowls have been all but eliminated.
Now suppose we draw again and get a chocolate cookie.
Here's the update:
```
likelihood_chocolate = 1 - hypos/100
posterior3 = posterior2 * likelihood_chocolate
posterior3.normalize()
```
And here's the posterior distribution.
```
posterior3.plot(label='posterior')
decorate_bowls('Posterior after 2 vanilla, 1 chocolate')
```
Now Bowl 100 has been eliminated because it contains no chocolate cookies.
But the high-numbered bowls are still more likely than the low-numbered bowls, because we have seen more vanilla cookies than chocolate.
In fact, the peak of the posterior distribution is at Bowl 67, which corresponds to the fraction of vanilla cookies in the data we've observed, $2/3$.
The quantity with the highest posterior probability is called the **MAP**, which stands for "maximum a posteriori probability", where "a posteriori" is unnecessary Latin for "posterior".
To compute the MAP, we can use the `Series` method `idxmax`:
```
posterior3.idxmax()
```
Or `Pmf` provides a more memorable name for the same thing:
```
posterior3.max_prob()
```
As you might suspect, this example isn't really about bowls; it's about estimating proportions.
Imagine that you have one bowl of cookies.
You don't know what fraction of cookies are vanilla, but you think it is equally likely to be any fraction from 0 to 1.
If you draw three cookies and two are vanilla, what proportion of cookies in the bowl do you think are vanilla?
The posterior distribution we just computed is the answer to that question.
We'll come back to estimating proportions in the next chapter.
But first let's use a `Pmf` to solve the dice problem.
Here's a figure for the book.
```
plt.figure(figsize=(4, 6))
plt.subplot(311)
prior.plot(label='prior', color='gray')
posterior1.plot(label='1 vanilla', color='C0')
plt.ylabel('PMF')
plt.title('101 Bowls')
plt.legend()
plt.subplot(312)
posterior2.plot(label='2 vanilla', color='C1')
plt.ylabel('PMF')
plt.legend()
plt.subplot(313)
posterior3.plot(label='2 vanilla, 1 chocolate', color='C2')
decorate_bowls('')
savefig('fig02-01')
```
## The dice problem
In Section [\[dice\]](#dice) we solved the dice problem using a Bayes table.
Here's the statement of the problem again:
> Suppose I have a box with a 6-sided die, an 8-sided die, and a 12-sided die.
> I choose one of the dice at random, roll it, and report that the outcome is a 1.
> What is the probability that I chose the 6-sided die?
Let's solve it again using a `Pmf`.
I'll use integers to represent the hypotheses:
```
hypos = [6, 8, 12]
```
And I can make the prior distribution like this:
```
prior = Pmf(1/3, hypos)
prior
```
As in the previous example, the prior probability gets broadcast across the hypotheses.
Now we can compute the likelihood of the data:
```
likelihood1 = 1/6, 1/8, 1/12
```
And use it to compute the posterior distribution.
```
posterior = prior * likelihood1
posterior.normalize()
write_pmf(posterior, 'table02-02')
posterior
```
The posterior probability for the 6-sided die is $4/9$.
Now suppose I roll the same die again and get a $7$.
Here are the likelihoods:
```
likelihood2 = 0, 1/8, 1/12
```
The likelihood for the 6-sided die is $0$ because it is not possible to get a 7 on a 6-sided die.
The other two likelihoods are the same as in the previous update.
Now we can do the update in the usual way:
```
posterior *= likelihood2
posterior.normalize()
write_pmf(posterior, 'table02-03')
posterior
```
After rolling a 1 and a 7, the posterior probability of the 8-sided die is about 69%.
One note about the `Pmf` class: if you multiply a `Pmf` by a sequence, the result is a `Pmf`:
```
type(prior * likelihood1)
```
If you do it the other way around, the result is a `Series`:
```
type(likelihood1 * prior)
```
We usually want the posterior distribution to be a `Pmf`, so I usually put the prior first. But even if we do it the other way, we can use `Pmf` to convert the result to a `Pmf`:
```
Pmf(likelihood1 * prior)
```
## Updating dice
The following function is a more general version of the update in the previous section:
```
def update_dice(pmf, data):
"""Update a pmf based on new data.
pmf: Pmf of possible dice and their probabilities
data: integer outcome
"""
hypos = pmf.qs
likelihood = 1 / hypos
impossible = (data > hypos)
likelihood[impossible] = 0
pmf *= likelihood
pmf.normalize()
```
The first parameter is a `Pmf` that represents the possible dice and their probabilities.
The second parameter is the outcome of rolling a die.
The first line selects `qs` from the `Pmf`, which is the index of the `Series`; in this example, it represents the hypotheses.
Since the hypotheses are integers, we can use them to compute the likelihoods.
In general, if there are `n` sides on the die, the probability of any possible outcome is `1/n`.
However, we have to check for impossible outcomes!
If the outcome exceeds the hypothetical number of sides on the die, the probability of that outcome is $0$.
`impossible` is a Boolean Series that is `True` for each impossible die.
I use it as an index into `likelihood` to set the corresponding probabilities to $0$.
Finally, I multiply `pmf` by the likelihoods and normalize.
Here's how we can use this function to compute the updates in the previous section.
I start with a fresh copy of the prior distribution:
```
pmf = prior.copy()
pmf
```
And use `update_dice` to do the updates.
```
update_dice(pmf, 1)
update_dice(pmf, 7)
pmf
```
The result is the same.
## Summary
This chapter introduces the `empiricaldist` module and its `Pmf` class, which we use to represent a set of hypotheses and their probabilities.
We use a `Pmf` to solve the cookie problem and the dice problem, which we saw in the previous chapter.
With a `Pmf` it is easy to perform sequential updates as we see multiple pieces of data.
We also solved a more general version of the cookie problem, with 101 bowls rather than two.
Then we computed the MAP, which is the quantity with the highest posterior probability.
In the next chapter ...
But first you might want to work on the exercises.
## Exercises
**Exercise:** Suppose I have a box with a 6-sided die, an 8-sided die, and a 12-sided die.
I choose one of the dice at random, roll it four times, and get 1, 3, 5, and 7.
What is the probability that I chose the 8-sided die?
```
# Solution goes here
```
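One possible sketch of a solution, reusing the `update_dice` function defined above (not necessarily the intended solution):

```
pmf = Pmf(1/3, [6, 8, 12])
for outcome in [1, 3, 5, 7]:
    update_dice(pmf, outcome)
pmf
```

With these updates the 6-sided die is eliminated by the 7, and the 8-sided die ends up with a posterior probability of roughly 0.83.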
**Exercise:** In the previous version of the dice problem, the prior probabilities are the same because the box contains one of each die.
But suppose the box contains 1 die that is 4-sided, 2 dice that are 6-sided, 3 dice that are 8-sided, 4 dice that are 12-sided, and 5 dice that are 20-sided.
I choose a die, roll it, and get a 7.
What is the probability that I chose an 8-sided die?
```
# Solution goes here
```
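Again, just a sketch of one possible approach: make the prior proportional to how many dice of each kind are in the box, then update with `update_dice`.

```
hypos = [4, 6, 8, 12, 20]
counts = [1, 2, 3, 4, 5]
# like a pandas Series, Pmf accepts a sequence of values for the probabilities
prior = Pmf(counts, hypos)
prior.normalize()
pmf = prior.copy()
update_dice(pmf, 7)
pmf
```

The 4-sided and 6-sided dice are ruled out by the 7, and the posterior probability of the 8-sided die comes out to about 0.39.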
**Exercise:** Suppose I have two sock drawers.
One contains equal numbers of black and white socks.
The other contains equal numbers of red, green, and blue socks.
Suppose I choose a drawer at random, choose two socks at random, and I tell you that I got a matching pair.
What is the probability that the socks are white?
For simplicity, let's assume that there are so many socks in both drawers that removing one sock makes a negligible change to the proportions.
```
# Solution goes here
# Solution goes here
```
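A sketch of one possible two-step approach (the drawer labels are just for illustration): first update the drawer probabilities given a matching pair, then ask how often a match from the black/white drawer is white.

```
# Step 1: posterior probability of each drawer, given a matching pair.
# With many socks, the chance of a match is the sum of the squared color fractions.
hypos = ['black/white drawer', 'red/green/blue drawer']
prior = Pmf(1/2, hypos)
likelihood = [2 * (1/2)**2, 3 * (1/3)**2]
posterior = prior * likelihood
posterior.normalize()

# Step 2: given the black/white drawer and a match, the pair is white half the time
posterior['black/white drawer'] * 1/2
```

Under these assumptions the probability that the matching socks are white is 0.3.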
**Exercise:** Here's a problem from [Bayesian Data Analysis](http://www.stat.columbia.edu/~gelman/book/):
> Elvis Presley had a twin brother (who died at birth). What is the probability that Elvis was an identical twin?
Hint: In 1935, about 2/3 of twins were fraternal and 1/3 were identical.
```
# Solution goes here
# Solution goes here
```
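One way to sketch a solution (not necessarily the intended one): take "identical" and "fraternal" as the hypotheses, and note that an identical twin of a boy is certainly a boy, while a fraternal twin is a boy only half the time.

```
hypos = ['identical', 'fraternal']
# like a pandas Series, Pmf accepts a sequence of values for the probabilities
prior = Pmf([1/3, 2/3], hypos)
likelihood = [1, 1/2]
posterior = prior * likelihood
posterior.normalize()
posterior
```

With the prior from the hint, the posterior probability that Elvis was an identical twin works out to 1/2.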
# Chainer MNIST Model Deployment
* Wrap a Chainer MNIST python model for use as a prediction microservice in seldon-core
* Run locally on Docker to test
* Deploy on seldon-core running on minikube
## Dependencies
* [Helm](https://github.com/kubernetes/helm)
* [Minikube](https://github.com/kubernetes/minikube)
* [S2I](https://github.com/openshift/source-to-image)
```bash
pip install seldon-core
pip install chainer==6.2.0
```
## Train locally
```
#!/usr/bin/env python
import argparse
import chainer
import chainer.functions as F
import chainer.links as L
from chainer import training
from chainer.training import extensions
import chainerx
# Network definition
class MLP(chainer.Chain):
def __init__(self, n_units, n_out):
super(MLP, self).__init__()
with self.init_scope():
# the size of the inputs to each layer will be inferred
self.l1 = L.Linear(None, n_units) # n_in -> n_units
self.l2 = L.Linear(None, n_units) # n_units -> n_units
self.l3 = L.Linear(None, n_out) # n_units -> n_out
def forward(self, x):
h1 = F.relu(self.l1(x))
h2 = F.relu(self.l2(h1))
return self.l3(h2)
def main():
parser = argparse.ArgumentParser(description='Chainer example: MNIST')
parser.add_argument('--batchsize', '-b', type=int, default=100,
help='Number of images in each mini-batch')
parser.add_argument('--epoch', '-e', type=int, default=20,
help='Number of sweeps over the dataset to train')
parser.add_argument('--frequency', '-f', type=int, default=-1,
help='Frequency of taking a snapshot')
parser.add_argument('--device', '-d', type=str, default='-1',
help='Device specifier. Either ChainerX device '
'specifier or an integer. If non-negative integer, '
'CuPy arrays with specified device id are used. If '
'negative integer, NumPy arrays are used')
parser.add_argument('--out', '-o', default='result',
help='Directory to output the result')
parser.add_argument('--resume', '-r', type=str,
help='Resume the training from snapshot')
parser.add_argument('--unit', '-u', type=int, default=1000,
help='Number of units')
parser.add_argument('--noplot', dest='plot', action='store_false',
help='Disable PlotReport extension')
group = parser.add_argument_group('deprecated arguments')
group.add_argument('--gpu', '-g', dest='device',
type=int, nargs='?', const=0,
help='GPU ID (negative value indicates CPU)')
args = parser.parse_args(args=[])
device = chainer.get_device(args.device)
print('Device: {}'.format(device))
print('# unit: {}'.format(args.unit))
print('# Minibatch-size: {}'.format(args.batchsize))
print('# epoch: {}'.format(args.epoch))
print('')
# Set up a neural network to train
# Classifier reports softmax cross entropy loss and accuracy at every
# iteration, which will be used by the PrintReport extension below.
model = L.Classifier(MLP(args.unit, 10))
model.to_device(device)
device.use()
# Setup an optimizer
optimizer = chainer.optimizers.Adam()
optimizer.setup(model)
# Load the MNIST dataset
train, test = chainer.datasets.get_mnist()
train_iter = chainer.iterators.SerialIterator(train, args.batchsize)
test_iter = chainer.iterators.SerialIterator(test, args.batchsize,
repeat=False, shuffle=False)
# Set up a trainer
updater = training.updaters.StandardUpdater(
train_iter, optimizer, device=device)
trainer = training.Trainer(updater, (args.epoch, 'epoch'), out=args.out)
# Evaluate the model with the test dataset for each epoch
trainer.extend(extensions.Evaluator(test_iter, model, device=device))
# Dump a computational graph from 'loss' variable at the first iteration
# The "main" refers to the target link of the "main" optimizer.
# TODO(niboshi): Temporarily disabled for chainerx. Fix it.
if device.xp is not chainerx:
trainer.extend(extensions.DumpGraph('main/loss'))
# Take a snapshot for each specified epoch
frequency = args.epoch if args.frequency == -1 else max(1, args.frequency)
trainer.extend(extensions.snapshot(), trigger=(frequency, 'epoch'))
# Write a log of evaluation statistics for each epoch
trainer.extend(extensions.LogReport())
# Save two plot images to the result dir
if args.plot and extensions.PlotReport.available():
trainer.extend(
extensions.PlotReport(['main/loss', 'validation/main/loss'],
'epoch', file_name='loss.png'))
trainer.extend(
extensions.PlotReport(
['main/accuracy', 'validation/main/accuracy'],
'epoch', file_name='accuracy.png'))
# Print selected entries of the log to stdout
# Here "main" refers to the target link of the "main" optimizer again, and
# "validation" refers to the default name of the Evaluator extension.
# Entries other than 'epoch' are reported by the Classifier link, called by
# either the updater or the evaluator.
trainer.extend(extensions.PrintReport(
['epoch', 'main/loss', 'validation/main/loss',
'main/accuracy', 'validation/main/accuracy', 'elapsed_time']))
# Print a progress bar to stdout
trainer.extend(extensions.ProgressBar())
if args.resume is not None:
# Resume from a snapshot
chainer.serializers.load_npz(args.resume, trainer)
# Run the training
trainer.run()
if __name__ == '__main__':
main()
```
Wrap model using s2i
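The s2i build below packages a Python class that seldon-core calls for predictions. That wrapper file is not shown in this notebook; the following is only a minimal sketch of what it could look like. The file/class name `MnistClassifier`, the `train_mnist` module, and the weights path are assumptions, and it presumes the trained model was additionally exported with `chainer.serializers.save_npz`:
```python
# MnistClassifier.py (hypothetical name; it must match MODEL_NAME in .s2i/environment)
import chainer
import chainer.functions as F
import chainer.links as L
import numpy as np

from train_mnist import MLP  # assumption: the MLP class defined above lives in train_mnist.py


class MnistClassifier(object):
    def __init__(self):
        # rebuild the network with the same hyperparameters used for training
        self.model = L.Classifier(MLP(1000, 10))
        # assumption: weights were exported after training with chainer.serializers.save_npz
        chainer.serializers.load_npz("result/mlp.model", self.model)

    def predict(self, X, features_names=None):
        # seldon-core passes a numpy array; reshape to (n_samples, 784) float32
        x = np.asarray(X, dtype=np.float32).reshape(-1, 784)
        with chainer.using_config("train", False), chainer.no_backprop_mode():
            logits = self.model.predictor(x)
        return F.softmax(logits).array
```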
```
!s2i build . seldonio/seldon-core-s2i-python3:1.7.0-dev chainer-mnist:0.1
!docker run --name "mnist_predictor" -d --rm -p 5000:5000 chainer-mnist:0.1
```
Send some random features that conform to the contract
```
!seldon-core-tester contract.json 0.0.0.0 5000 -p
!docker rm mnist_predictor --force
```
# Test using Minikube
**Due to a [minikube/s2i issue](https://github.com/SeldonIO/seldon-core/issues/253) you will need [s2i >= 1.1.13](https://github.com/openshift/source-to-image/releases/tag/v1.1.13)**
```
!minikube start --memory 4096
```
## Setup Seldon Core
Use the setup notebook to [Setup Cluster](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Setup-Cluster) with [Ambassador Ingress](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Ambassador) and [Install Seldon Core](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Install-Seldon-Core). Instructions [also online](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html).
```
!eval $(minikube docker-env) && s2i build . seldonio/seldon-core-s2i-python3:1.7.0-dev chainer-mnist:0.1
!kubectl create -f chainer_mnist_deployment.json
!kubectl rollout status deploy/chainer-mnist-deployment-chainer-mnist-predictor-76478b2
!seldon-core-api-tester contract.json `minikube ip` `kubectl get svc ambassador -o jsonpath='{.spec.ports[0].nodePort}'` \
seldon-deployment-example --namespace default -p
!minikube delete
```
# Data Visualization - Plotting Data
- it's important to explore and understand your data in data science
- typically, various statistical graphics are plotted to quickly visualize and understand data
- various libraries work together with the pandas DataFrame and Series data structures to quickly plot a data table
- `matplotlib` and `seaborn`, which builds on matplotlib, are two common ones we'll explore in this notebook
- https://matplotlib.org/
- https://seaborn.pydata.org/introduction.html
## create plots with pandas
```python
DataFrame.plot(*args, **kwargs)
```
- make plots of Series or DataFrame using matplotlib by default
- see API details here: [https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.plot.html](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.plot.html)

- you can save the plots generated on notebooks by: Right-click -> Save Image As
- you can provide explicit x and y values or let each plot pick default values from the DataFrame's x and y labels based on the chart type/kind
- let's use the air_quality_no2.csv dataset from the pandas docs to demonstrate some plotting
```
import pandas as pd
import matplotlib.pyplot as plt
# online raw data URLs
no2_url = 'https://raw.githubusercontent.com/pandas-dev/pandas/master/doc/data/air_quality_no2.csv'
pm2_url = 'https://raw.githubusercontent.com/pandas-dev/pandas/master/doc/data/air_quality_pm25_long.csv'
air_quality_stations_url = 'https://raw.githubusercontent.com/pandas-dev/pandas/master/doc/data/air_quality_stations.csv'
air_qual_parameters_url = 'https://raw.githubusercontent.com/pandas-dev/pandas/master/doc/data/air_quality_parameters.csv'
air_quality = pd.read_csv(no2_url, index_col=0, parse_dates=True)
air_quality.head()
# quick visual check of the data
air_quality.plot()
# plot only the station_paris column
air_quality['station_paris'].plot()
# visually compare the NO2 values measured in London vs Paris using a scatter plot
air_quality.plot.scatter(x='station_london', y='station_paris', alpha=0.5)
# let's see all the plotting methods provided in plot module
for method_name in dir(air_quality.plot):
if not method_name.startswith('_'):
print(method_name)
# one liner
[name for name in dir(air_quality.plot) if not name.startswith('_')]
# you can also use tab completion to display all the methods
air_quality.plot.box()
air_quality.plot.barh()
# let's provide figsize for readability
air_quality.plot.barh(figsize=(18, 10))
# create area plot with separate subplots for each feature
axs = air_quality.plot.area(figsize=(12, 4), subplots=True)
# customize, extend and save the resulting plot
fig, axes = plt.subplots(figsize=(12, 4)) # create an empty matplotlib Fig and Axes
air_quality.plot.area(ax=axes) # use pandas to put the area plot on the prepared Figure/Axes
axes.set_ylabel("$NO_2$ concentration") # do the customization; use LaTeX $ $ syntax
fig.savefig("no2_concerntrations.png") # save the figure
```
## handling time series data
- pandas makes it easy to work with datetime and time series data
- let's work with the air_quality_no2_long.csv dataset to demonstrate time series data
```
no2_long_url = 'https://raw.githubusercontent.com/pandas-dev/pandas/master/doc/data/air_quality_no2_long.csv'
air_quality_long = pd.read_csv(no2_long_url)
air_quality_long.head()
air_quality_long = air_quality_long.rename(columns={"date.utc": "datetime"})
air_quality_long.head()
air_quality_long.city.unique()
# let's change the datetime column's datatype to Python datetime instead of plain text
air_quality_long["datetime"] = pd.to_datetime(air_quality_long["datetime"])
air_quality_long["datetime"]
```
### Note: you can also use pd.read_csv(file, parse_dates=["list of column names"]) to parse data as datetime
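A minimal sketch of that alternative, reusing `no2_long_url` from above (the parsed column keeps its original name `date.utc`, so the rename shown earlier would still be applied afterwards):
```python
# parse the "date.utc" column into datetime objects while reading the CSV
air_quality_alt = pd.read_csv(no2_long_url, parse_dates=["date.utc"])
air_quality_alt["date.utc"].dtype
```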
```
# find the latest and oldest dates
air_quality_long["datetime"].max(), air_quality_long['datetime'].min()
# find the delta
air_quality_long["datetime"].max() - air_quality_long['datetime'].min()
# let's add a new column containing only the month to the DataFrame
air_quality_long["month"] = air_quality_long["datetime"].dt.month
air_quality_long.head()
```
## groupby
- grouping data by some column value and finding aggregate information
- find average $NO_2$ concentration for each day of the week for each of the measurement locations
```
air_quality_long.groupby([air_quality_long["datetime"].dt.weekday, "location"])["value"].mean()
```
### plot timeseries
- plot the typical $NO_2$ pattern during the day of time series of all stations together
- what is the average value for each hour of the day?
```
fig, axs = plt.subplots(figsize=(12, 4))
air_quality_long.groupby(air_quality_long["datetime"].dt.hour)["value"].mean().plot(kind='bar', rot=0, ax=axs)
plt.xlabel("Hour of the day")
plt.ylabel("$NO_2 (\mu g/m^3)$")
```
## Reshaping pandas DataFrame
- `pivot()` lets us reshape the data table
- let's use datetime as the index and the measurement locations as separate columns
- DatetimeIndex class contains many time series related optimizations: https://pandas.pydata.org/docs/user_guide/timeseries.html#timeseries-datetimeindex
```
no_2 = air_quality_long.pivot(index="datetime", columns="location", values="value")
no_2.head()
# notice values from some indices and columns may not exist and are filled with NaN
no_2.index.year
pd.unique(no_2.index.weekday), pd.unique(no_2.index.day)
```
### Plot between some date range
- create a plot of the $NO_2$ values in the different stations from 20 May until the end of 21 May
```
no_2["2019-05-20":"2019-05-21"].plot()
```
## Resampling a time series to another frequency
- aggregate the current hourly time series values to the monthly maximum value in each of the stations
```
monthly_max = no_2.resample("M").max()
monthly_max.head()
# line plot of the daily mean NO2 value in each station
no_2.resample("D").mean().plot(style="-o", figsize=(10, 5));
```
# Seaborn
- [https://seaborn.pydata.org/introduction.html](https://seaborn.pydata.org/introduction.html)
- library for making statistical graphics in Python
- builds on matplotlib and integrates closely with pandas data
- seaborn performs the necessary semantic mapping and statistical aggregation to produce informative plots
- dataset-oriented and declarative, which lets you focus on understanding the plots rather than on how to draw them
- provides easy API access to various datasets to experiment with
- Kaggle's Data Visualization mini course dives mostly into Seaborn library

## Installation
- use pip or conda to install the seaborn library
```bash
pip install seaborn
conda install seaborn
```
## Seaborn plots
### Trends
- a trend is defined as a pattern of change
- `sns.lineplot` - Line charts are best to show trends over a period of time, and multiple lines can be used to show trends in more than one group
### Relationship
- There are many different chart types that you can use to understand relationships between variables in your data.
- `sns.relplot` - Scatter plot to visualize relationship between the features
- `sns.barplot` - Bar charts are useful for comparing quantities corresponding to different groups.
- `sns.heatmap` - Heatmaps can be used to find color-coded patterns in tables of numbers.
- `sns.scatterplot` - Scatter plots show the relationship between two continuous variables; if color-coded, we can also show the relationship with a third categorical variable.
- `sns.regplot` - Including a regression line in the scatter plot makes it easier to see any linear relationship between two variables.
- `sns.lmplot` - This command is useful for drawing multiple regression lines, if the scatter plot contains multiple, color-coded groups.
- `sns.swarmplot` - Categorical scatter plots show the relationship between a continuous variable and a categorical variable.
### Distribution
- We visualize distributions to show the possible values that we can expect to see in a variable, along with how likely they are.
- `sns.distplot` - Histograms show the distribution of a single numerical variable.
- `sns.kdeplot` - KDE plots (or 2D KDE plots) show an estimated, smooth distribution of a single numerical variable (or two numerical variables).
- `sns.jointplot` - This command is useful for simultaneously displaying a 2D KDE plot with the corresponding KDE plots for each individual variable.
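To make the distribution commands above concrete, here is a minimal sketch using seaborn's built-in `tips` dataset (the same dataset is loaded again further below); note that `distplot` is deprecated in newer seaborn releases in favour of `histplot`:
```python
import seaborn as sns
import matplotlib.pyplot as plt

tips = sns.load_dataset("tips")

# histogram of a single numerical variable (use sns.histplot on seaborn >= 0.11)
sns.distplot(tips["total_bill"])
plt.show()

# 2D KDE of two numerical variables with marginal KDE plots for each one
sns.jointplot(x="total_bill", y="tip", data=tips, kind="kde")
plt.show()
```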
### API reference
- https://seaborn.pydata.org/api.html
- seaborn.relplot
- https://seaborn.pydata.org/generated/seaborn.relplot.html#seaborn.relplot
- Seaborn also provides load_dataset API to load some CSV data it provides
- see the datasets here: [https://github.com/mwaskom/seaborn-data](https://github.com/mwaskom/seaborn-data)
```
import seaborn as sns
# apply the default theme
sns.set_theme()
# load an example dataset, tips
tips = sns.load_dataset('tips')
tips.info()
tips
# create a relational plot to visualize the relationship between total_bill and
# tip amount for smoker and non-smoker customers
sns.relplot(data=tips, x="total_bill", y="tip", col="time", hue="smoker", style="smoker", size="size")
```
```
import numpy as np
import pandas as pd
from scipy.interpolate import interp1d
import matplotlib.pyplot as plt
%matplotlib inline
from glob import glob
all_q = {}
x_dirs = glob('yz/*/')
x_dirs[0].split('/')
'1qtable'.split('1')
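# parse every qtable in each chain-length directory: skip the 14 header lines, then
# read the wavelength and the Q_ext/Q_abs/Q_sca efficiencies for 451 wavelengths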
for x_dir in x_dirs:
chain_length = x_dir.split('/')[1]
qtables = glob(f'{x_dir}{chain_length}*')
print(qtables)
all_q[chain_length] = {}
for qtable in qtables:
spacing = qtable.split(f'{x_dir}{chain_length}')[1].split('qtable')[0]
with open(qtable) as fp:
#The first 14 lines of the qTable do not contain spectrum data
print(qtable)
for blank in range(0,14):
fp.readline()
wave = []
Q_ext = []
Q_abs = []
Q_sca = []
for k in range(350,801):
line = fp.readline()
ary = line.split(" ")
ary = [a for a in ary if a]
# print(ary[1:5])
ary = np.array(ary[1:5]).astype(np.float)
wave.append(float(ary[0]))
Q_ext.append(float(ary[1]))
Q_abs.append(float(ary[2]))
Q_sca.append(float(ary[3]))
df = pd.DataFrame({'wave': wave, 'Q_ext': Q_ext, 'Q_abs': Q_abs, 'Q_sca': Q_sca})
all_q[chain_length][spacing] = df
all_q.keys()
q = 'Q_ext'
from scipy.interpolate import UnivariateSpline
unreg = all_q['1']['0'].dropna()
spl = UnivariateSpline(unreg['wave'], unreg[q])
wl = np.arange(0.350, 0.800, 0.001)
# inp = ((wl - w_mean)/w_std).reshape(-1, 1)
spl.set_smoothing_factor(0.00001)
preds = spl(wl)
plt.plot(all_q['1']['0']['wave'], all_q['1']['0'][q], 'g')
plt.plot(wl, preds, 'b')
all_q['24']['1'].loc[all_q['24']['1'][q].isnull(), q]
preds[all_q['24']['1'][q].isnull()]
for n in all_q:
for spacing in all_q[n]:
df = all_q[n][spacing]
df_copy = df.dropna()
spl = UnivariateSpline(np.array(df_copy['wave']), np.array(df_copy[q]))
wl = np.arange(0.350, 0.800, 0.0005)
spl.set_smoothing_factor(0.000001)
preds = spl(wl)
all_q[n][spacing] = pd.DataFrame({'wave': wl, q: preds})
all_q['1']['0'][350:370]
df_list = {}
for n in all_q:
n_list = []
for spacing in all_q[n]:
cp = all_q[n][spacing].copy()
cp['spacing'] = float(spacing)
n_list.append(cp)
df = pd.concat(n_list, axis=0)
df_list[n] = df
formatted_df = {}
for n in df_list:
df = df_list[n]
new_df = pd.DataFrame()
for space in [1.0, 2.0, 3.0, 4.0]:
ser = df.loc[df['spacing'] == space, q]
if not ser.empty:
new_df[str(space)] = ser
formatted_df[n] = new_df
df_list['1'].head()
df = df_list['1']
for a in np.arange(0.8, 4.05, 0.05):
df['%.2f' % a] = df[q]
df.drop(['spacing', q], axis=1,).to_csv(f'yz_1_new_interp_{q}.csv')
df = df_list['5']
new_df = pd.DataFrame()
for space in [1.0, 2.0, 3.0, 4.0]:
ser = df.loc[df['spacing'] == space, q]
if not ser.empty:
new_df[str(space)] = ser
from scipy import interpolate
x = {}
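# for each chain length n, interpolate the Q values over the available spacings
# (quadratic, extrapolated) onto a finer spacing grid from 0.8 to 4.0 in steps of 0.05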
for n in range(2,36):
df = formatted_df[str(n)]
y = []
print(n)
for i in range(0, 901):
columns = np.array(df.columns).astype(np.float)
vals = np.array(df.loc[i])
f = interpolate.interp1d(columns, vals, kind='quadratic', fill_value='extrapolate')
df_out = f(np.arange(0.8, 4.05, 0.05))
y.append(df_out)
y = np.array(y)
x[n] = y
def mapper(inp):
return '%.2f' % (0.8 + 0.05 * float(inp))
final = {}
for n in x:
d = pd.DataFrame(x[n])
d = d.rename(columns=mapper)
print(d.shape)
wl_df = pd.DataFrame({'wl' : np.arange(.350, .800, .0005)})
print(wl_df.shape)
out = wl_df.join(d)
print(out)
out.to_csv(f'yz_{n}_new_interp_{q}.csv')
out
from scipy.interpolate import BivariateSpline
from scipy import interpolate
ones = df_list[0][df_list[0]['spacing'] == 1.0].dropna()
twos = df_list[0][df_list[0]['spacing'] == 2.0]
threes = df_list[0][df_list[0]['spacing'] == 3.0]
fours = df_list[0][df_list[0]['spacing'] == 4.0]
# spl = BivariateSpline(ones['wave'], ones['spacing'], ones['Q_abs'], s=0.000001)
# tck = interpolate.bisplrep(ones['wave'], ones['spacing'], ones['Q_abs'], s=0.1)
# znew = interpolate.bisplev(ones['wave'], ones['spacing'], tck)
# wl = np.arange(0.350, 0.800, 0.001)
# preds = spl(ones['wave'], ones['spacing'])
plt.plot(ones['wave'], ones['Q_abs'])
plt.plot(twos['wave'], twos['Q_abs'])
plt.plot(threes['wave'], threes['Q_abs'])
plt.plot(fours['wave'], fours['Q_abs'])
# plt.plot(ones['wave'], znew)
```
```
from theano.sandbox import cuda
cuda.use('gpu2')
%matplotlib inline
import utils; reload(utils)
from utils import *
from __future__ import division, print_function
?? BatchNormalization
```
## Setup
```
batch_size=64
from keras.datasets import mnist
(X_train, y_train), (X_test, y_test) = mnist.load_data()
(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
X_test = np.expand_dims(X_test,1)
X_train = np.expand_dims(X_train,1)
X_train.shape
y_train[:5]
y_train = onehot(y_train)
y_test = onehot(y_test)
y_train[:5]
mean_px = X_train.mean().astype(np.float32)
std_px = X_train.std().astype(np.float32)
def norm_input(x): return (x-mean_px)/std_px
```
## Linear model
```
def get_lin_model():
model = Sequential([
Lambda(norm_input, input_shape=(1,28,28)),
Flatten(),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
return model
lm = get_lin_model()
?? image
gen = image.ImageDataGenerator()
batches = gen.flow(X_train, y_train, batch_size=64)
test_batches = gen.flow(X_test, y_test, batch_size=64)
lm.fit_generator(batches, batches.N, nb_epoch=1,
validation_data=test_batches, nb_val_samples=test_batches.N)
lm.optimizer.lr=0.1
lm.fit_generator(batches, batches.N, nb_epoch=1,
validation_data=test_batches, nb_val_samples=test_batches.N)
lm.optimizer.lr=0.01
lm.fit_generator(batches, batches.N, nb_epoch=4,
validation_data=test_batches, nb_val_samples=test_batches.N)
```
## Single dense layer
```
def get_fc_model():
model = Sequential([
Lambda(norm_input, input_shape=(1,28,28)),
Flatten(),
Dense(512, activation='softmax'),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
return model
fc = get_fc_model()
fc.fit_generator(batches, batches.N, nb_epoch=1,
validation_data=test_batches, nb_val_samples=test_batches.N)
fc.optimizer.lr=0.1
fc.fit_generator(batches, batches.N, nb_epoch=4,
validation_data=test_batches, nb_val_samples=test_batches.N)
fc.optimizer.lr=0.01
fc.fit_generator(batches, batches.N, nb_epoch=4,
validation_data=test_batches, nb_val_samples=test_batches.N)
```
## Basic 'VGG-style' CNN
```
def get_model():
model = Sequential([
Lambda(norm_input, input_shape=(1,28,28)),
Convolution2D(32,3,3, activation='relu'),
Convolution2D(32,3,3, activation='relu'),
MaxPooling2D(),
Convolution2D(64,3,3, activation='relu'),
Convolution2D(64,3,3, activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(512, activation='relu'),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
return model
model = get_model()
model.fit_generator(batches, batches.N, nb_epoch=1,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.1
model.fit_generator(batches, batches.N, nb_epoch=1,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.01
model.fit_generator(batches, batches.N, nb_epoch=8,
validation_data=test_batches, nb_val_samples=test_batches.N)
```
## Data augmentation
```
model = get_model()
gen = image.ImageDataGenerator(rotation_range=8, width_shift_range=0.08, shear_range=0.3,
height_shift_range=0.08, zoom_range=0.08)
batches = gen.flow(X_train, y_train, batch_size=64)
test_batches = gen.flow(X_test, y_test, batch_size=64)
model.fit_generator(batches, batches.N, nb_epoch=1,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.1
model.fit_generator(batches, batches.N, nb_epoch=4,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.01
model.fit_generator(batches, batches.N, nb_epoch=8,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.001
model.fit_generator(batches, batches.N, nb_epoch=14,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.0001
model.fit_generator(batches, batches.N, nb_epoch=10,
validation_data=test_batches, nb_val_samples=test_batches.N)
```
## Batchnorm + data augmentation
```
def get_model_bn():
model = Sequential([
Lambda(norm_input, input_shape=(1,28,28)),
Convolution2D(32,3,3, activation='relu'),
BatchNormalization(axis=1),
Convolution2D(32,3,3, activation='relu'),
MaxPooling2D(),
BatchNormalization(axis=1),
Convolution2D(64,3,3, activation='relu'),
BatchNormalization(axis=1),
Convolution2D(64,3,3, activation='relu'),
MaxPooling2D(),
Flatten(),
BatchNormalization(),
Dense(512, activation='relu'),
BatchNormalization(),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
return model
model = get_model_bn()
model.fit_generator(batches, batches.N, nb_epoch=1,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.1
model.fit_generator(batches, batches.N, nb_epoch=4,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.01
model.fit_generator(batches, batches.N, nb_epoch=12,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.001
model.fit_generator(batches, batches.N, nb_epoch=12,
validation_data=test_batches, nb_val_samples=test_batches.N)
```
## Batchnorm + dropout + data augmentation
```
def get_model_bn_do():
model = Sequential([
Lambda(norm_input, input_shape=(1,28,28)),
Convolution2D(32,3,3, activation='relu'),
BatchNormalization(axis=1),
Convolution2D(32,3,3, activation='relu'),
MaxPooling2D(),
BatchNormalization(axis=1),
Convolution2D(64,3,3, activation='relu'),
BatchNormalization(axis=1),
Convolution2D(64,3,3, activation='relu'),
MaxPooling2D(),
Flatten(),
BatchNormalization(),
Dense(512, activation='relu'),
BatchNormalization(),
Dropout(0.5),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
return model
model = get_model_bn_do()
model.fit_generator(batches, batches.N, nb_epoch=1,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.1
model.fit_generator(batches, batches.N, nb_epoch=4,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.01
model.fit_generator(batches, batches.N, nb_epoch=12,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.001
model.fit_generator(batches, batches.N, nb_epoch=1,
validation_data=test_batches, nb_val_samples=test_batches.N)
```
## Ensembling
```
def fit_model():
model = get_model_bn_do()
model.fit_generator(batches, batches.N, nb_epoch=1, verbose=0,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.1
model.fit_generator(batches, batches.N, nb_epoch=4, verbose=0,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.01
model.fit_generator(batches, batches.N, nb_epoch=12, verbose=0,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.001
model.fit_generator(batches, batches.N, nb_epoch=18, verbose=0,
validation_data=test_batches, nb_val_samples=test_batches.N)
return model
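# train an ensemble of six independently initialized models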
models = [fit_model() for i in range(6)]
path = "data/mnist/"
model_path = path + 'models/'
for i,m in enumerate(models):
m.save_weights(model_path+'cnn-mnist23-'+str(i)+'.pkl')
evals = np.array([m.evaluate(X_test, y_test, batch_size=256) for m in models])
evals.mean(axis=0)
all_preds = np.stack([m.predict(X_test, batch_size=256) for m in models])
all_preds.shape
avg_preds = all_preds.mean(axis=0)
keras.metrics.categorical_accuracy(y_test, avg_preds).eval()
```
<img src="Polygons.png" width="320"/>
# Polygons and polylines
You can draw polygons or polylines on canvases by providing a sequence of points.
The polygons can have transparent colors and they may be filled or not (stroked).
A point can be an [x,y] pair; alternatively, any point except the first can be a triple of [x,y] pairs representing
the control points of a
<a href="https://www.w3schools.com/tags/canvas_beziercurveto.asp">Bezier curve.</a>
Below for convenience we use reference frames to draw polygons in different places
at different scales using the same point sequence. All drawing methods available
for canvases are also available for reference frames.
```
from jp_doodle import dual_canvas
from IPython.display import display
# In this demonstration we do most of the work in Javascript.
demo = dual_canvas.DualCanvasWidget(width=320, height=220)
display(demo)
demo.js_init("""
// Last entry in points list gives Bezier control points [[2,2], [1,0], [0,0]]
var points = [[5,0], [4,2], [4,4], [3,6], [3,5], [1,4], [2,3], [2,2], [3,1], [[2,2], [1,0], [0,0]]];
// Use reference frames for positioning:
var ul = element.frame_region(-90,0,0,90, 0,0,6,6);
var ll = element.frame_region(-90,-90,0,0, 0,0,6,6);
var ur = element.frame_region(0,0,90,90, 0,0,6,6);
var big = element.frame_region(-130,-100,130,90, 0,0,6,6);
ul.polygon({name: "full filled", points: points, color:"green"});
ll.polygon({name: "stroked closed", points: points, color:"red",
fill: false, close: true, lineWidth: 2});
ur.polygon({name: "stroked open", points: points, color:"cyan",
fill: false, close: false, lineWidth: 7});
big.polygon({name: "transparent", points: points, color:"rgba(100,100,100,0.7)"});
// Fit the figure into the available space
element.fit(null, 10);
element.lower_left_axes();
element.fit()
""")
# python equivalent
import math
demo2 = dual_canvas.DualCanvasWidget(width=320, height=220)
display(demo2)
#element = demo2.element
points = [[5,0], [4,2], [4,4], [3,6], [3,5], [1,4], [2,3], [2,2], [3,1], [0,0]];
ul = demo2.frame_region(-90,0,0,90, 0,0,6,6);
ll = demo2.frame_region(-90,-90,0,0, 0,0,6,6);
ur = demo2.frame_region(0,0,90,90, 0,0,6,6);
big = demo2.frame_region(-130,-100,130,90, 0,0,6,6, name="big");
ul.polygon(name="full filled", points= points, color="green")
ll.polygon(name= "stroked closed", points= points, color="red",
fill= False, close= True, lineWidth=2)
ur.polygon(name= "stroked open", points= points, color="cyan",
fill= False, close= False, lineWidth=7)
# The named frames can be used from Javascript like so
demo2.js_init("""
element.big.polygon({name: "transparent", points: points, color:"rgba(100,100,100,0.7)"});
""", points=points)
# Fit the figure into the available space
demo2.fit(None, 10);
#demo2.save_pixels_to_png_async("Polygons.png")
```
# Path and shape regimes of rising bubbles
## Outline
1. [Starting point](#starting_point)
2. [Data visualization](#data_visualization)
3. [Manual binary classification - creating a functional relationship](#manuel_classification)
4. [Using gradient descent to find the parameters/weights](#gradient_descent)
5. [Using conditional probabilities instead of binary classes](#conditional_probabilities)
6. [Maximum likelihood and cross-entropy](#maximum_likelihood)
7. [Non-linear decision boundaries](#non_linear_boundaries)
8. [The multi-layer perceptron](#multi_layer_perceptron)
9. [Multi-class classification](#multi_class)
1. [One-hot encoding](#one_hot)
2. [Softmax function](#softmax)
3. [Categorial cross-entropy](#categorial_cross_entropy)
4. [A note on the implementation in PyTorch](#note_on_pytorch)
10. [Final notes](#final_notes)
## Starting point<a id="starting_point"></a>
Our goal is to predict the path and shape regime of rising bubbles depending on the Eötvös and Galilei number defined as
$$
Ga = \frac{\sqrt{gR}R}{\nu},\quad \text{and}\quad Eo = \frac{\rho gR^2}{\sigma},
$$
with the variables being $g$ - gravitational constant, $\nu$ - kinematic liquid viscosity, $\rho$ - liquid density, $R$ - equivalent sphere radius, and $\sigma$ - surface tension. The Galilei number relates inertia, buoyancy, and viscous forces. The Eötvös number relates buoyancy and surface tension forces. The path and shape regimes encountered in the range $Ga\in \left[0, 800\right]$ and $Eo\in \left[0,500\right]$ are:
1. axis-symmetric shape, straight rise
2. asymmetric shape, non-oscillatory rise
3. asymmetric shape, oscillatory rise
4. peripheral breakup
5. central breakup
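To make the definitions above concrete, here is a minimal sketch that evaluates both numbers from the fluid properties; the property values in the example are rough, water-like placeholders and not taken from the reference:
```python
import math

def galilei_number(g, R, nu):
    """Ga = sqrt(g*R)*R/nu - inertia and buoyancy vs. viscous forces."""
    return math.sqrt(g * R) * R / nu

def eotvos_number(rho, g, R, sigma):
    """Eo = rho*g*R^2/sigma - buoyancy vs. surface tension forces."""
    return rho * g * R**2 / sigma

# placeholder properties (illustration only)
g, R = 9.81, 1.0e-3                    # gravity [m/s^2], equivalent radius [m]
nu, rho, sigma = 1.0e-6, 1000.0, 0.07  # viscosity [m^2/s], density [kg/m^3], surface tension [N/m]
print(galilei_number(g, R, nu), eotvos_number(rho, g, R, sigma))
```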
What we want to do is to find a function that takes $Ga$ and $Eo$ as arguments and maps them to one of the regimes listed above. All we have to build such a function from are some examples of given $Ga$ and $Eo$ values and the corresponding regime. In machine-learning terminology,
- the arguments of the function are called **features**, and
- the resulting function output (value) is called **label**.
**Why would we want such a classifier?**
In an abstract sense, one can use such a classifier function to automate decision processes or to combine several smaller algorithms to leverage their collective performance (accuracy, applicability, execution time, etc.). If that explanation was too abstract, here are two more illustrative scenarios:
* The goal is to design a multiphase reactor. The reactor ought to operate in a homogeneous regime. For a given liquid, you want to estimate how large the bubbles sparged into the reactor can be to rise on a non-oscillatory path. Of course, the size could be determined graphically from a plot, but if the process is to be automated (e.g., in a software), a functional relationship between size and regime is required.
* In a real bubble column reactor, bubbles in all kinds of regimes will occur. In a scale-reduced simulation of such a reactor, closure models have to be defined, e.g., the drag coefficient in an Euler-Lagrange solver. The standard approach would be to use simple correlations by *Schiller and Naumann* (particles) or by *Tomiyama* (bubbles). Presumably, the correlations will be used far outside their actual range of validity, but actually, there are many more correlations available for smaller sub-regimes. The classifier can be used to build a unified correlation covering a much broader parameter range with higher accuracy by automatically switching between different suitable correlations.
The following data was extracted from figure 1 in [Tripathi et al.](https://www.nature.com/articles/ncomms7268)
> Tripathi, M. K. et al. Dynamics of an initially spherical bubble rising in quiescent liquid. Nat. Commun. 6:6268 doi: 10.1038/ncomms7268 (2015)
In general, it would be sensible to gather data from as many sources as possible. The data could also be contradictory, e.g., close to decision boundaries. Most classification algorithms are robust enough to handle such data by drawing decision boundaries according to the *majority*.
```
# load and process .csv files
import pandas as pd
# python arrays
import numpy as np
# plotting
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap, LinearSegmentedColormap
# machine learning
import sklearn
from sklearn import preprocessing
import torch
from torch import nn, optim
import torch.nn.functional as F
# google colab support
import sys
IN_COLAB = 'google.colab' in sys.modules
# display images in notebook
from IPython.display import Image, display
%matplotlib inline
print("Pandas version: {}".format(pd.__version__))
print("Numpy version: {}".format(np.__version__))
print("ScikitLearn version: {}".format(sklearn.__version__))
print("PyTorch version: {}".format(torch.__version__))
print("Running notebook {}".format("in colab." if IN_COLAB else "locally."))
if not IN_COLAB:
data_path = "../data/path_shape_regimes/"
else:
data_path = "https://raw.githubusercontent.com/AndreWeiner/machine-learning-applied-to-cfd/master/data/path_shape_regimes/"
regimes = ["I", "II", "III", "IV", "V"]
raw_data_files = ["regime_{}.csv".format(regime) for regime in regimes]
files = [pd.read_csv(data_path + file, header=0, names=["Ga", "Eo"]) for file in raw_data_files]
for file, regime in zip(files, regimes):
file["regime"] = regime
data = pd.concat(files, ignore_index=True)
print("Read {} data points".format(data.shape[0]))
data.sample(5)
data.describe()
```
## Data visualization<a id="data_visualization"></a>
To obtain clear stability regions as in figure 1 of the article referenced above, we will work with the **logarithm** of the features $Ga$ and $Eo$ instead of the features themselves. Such a pre-processing step is typical for most machine learning applications since many algorithms are sensitive to the scale of the features (the algorithms will not work well if one feature ranges from $0$ to $1$ while another one ranges from $0$ to $100$).
```
logData = data[["Ga", "Eo"]].apply(np.log10)
logData["regime"] = data["regime"].copy()
fontsize = 14
markers = ["o", "x", "<", ">", "*"]
plt.figure(figsize=(12, 8))
for regime, marker in zip(regimes, markers):
plt.scatter(logData[logData["regime"] == regime].Ga, logData[logData["regime"] == regime].Eo,
marker=marker, s=80, label="regime {}".format(regime))
plt.legend(loc='upper center', bbox_to_anchor=(0.5, 1.1), ncol=5, fontsize=fontsize)
plt.xlabel(r"log(Ga)", fontsize=fontsize)
plt.ylabel(r"log(Eo)", fontsize=fontsize)
plt.show()
```
## Manual binary classification - creating a functional relationship<a id="manuel_classification"></a>
First, we try to write a classifier by hand. To simplify things, we focus only on regimes I and II. From the plot above, the data points of regions I and II look almost linearly separable. Therefore, we will define a linear function $z(Ga^\prime, Eo^\prime) = w_1Ga^\prime + w_2Eo^\prime + b$ with the transformed features $Ga^\prime = log(Ga)$ and $Eo^\prime = log(Eo)$ and build a classifier that distinguishes the cases
$$
H(z (Ga^\prime, Eo^\prime)) = \left\{\begin{array}{lr}
0, & \text{if } z \leq 0\\
1, & \text{if } z \gt 0
\end{array}\right.
$$
```
resolution = 0.005 # resolution for plotting region contours
def z_func(logGa, logEo):
'''Compute a weighted linear combination of logGa and logEo.
'''
w1 = 8.0; w2 = 1; b = -12
return w1 * logGa + w2 * logEo + b
def H_func(logGa, logEo):
'''Distinguish between z<=0 and z>0.
'''
return np.heaviside(z_func(logGa, logEo), 0.0)
plt.figure(figsize=(12, 8))
# color predicted region I and II
xx, yy = np.meshgrid(np.arange(0.5, 3.0, resolution), np.arange(-1.2, 1.5, resolution))
prediction = H_func(xx.ravel(), yy.ravel())
plt.contourf(xx, yy, prediction.reshape(xx.shape), cmap=ListedColormap(['C0', 'C1']), alpha=0.3)
# plot data point for region I and II
for regime, marker in zip(regimes[:2], markers[:2]):
plt.scatter(logData[logData["regime"] == regime].Ga, logData[logData["regime"] == regime].Eo,
marker=marker, s=80, label="regime {}".format(regime))
plt.legend(loc='upper center', bbox_to_anchor=(0.5, 1.1), ncol=5, fontsize=fontsize)
plt.xlabel(r"log(Ga)", fontsize=fontsize)
plt.ylabel(r"log(Eo)", fontsize=fontsize)
plt.show()
```
The classifier predicts points of regions I and II in the data set correctly. But there is some room for improvement:
* we had to figure out the parameters $w_1,w_2,b$ for $z$ manually by trial and error
* the decision boundary does not always look ideal; since we do not know the true decision boundary but only some data points, it would be reasonable to search for a boundary which maximizes the distance to the points closest to the boundary
* if we include more regimes, it will become more and more challenging if not impossible to separate the different regions
* the approach only works well for linearly separable data
## Using gradient descent to find the parameters/weights<a id="gradient_descent"></a>
In the previous section, we manually searched for the slope and offset of a linear function to separate the two classes. This process can be automated by defining and solving an optimization problem. One option would be to define a loss function which expresses the prediction quality. Since we are dealing with functions that the computer has to evaluate, it makes sense to convert the different regimes into numeric values. Let's say the value of our numeric label $y$ is $0$ for region I and $1$ for region II. The true label $y_i$ is known for all points $i$ in the data set. The predicted label $\hat{y}_i$ depends on $z_i$ and therefore on the weights $w = \left[w_1, w_2, b\right]$ and the feature vector $X_i=\left[Ga^\prime_i,Eo^\prime_i \right]$. A common loss is the squared difference of true and predicted label $(y-\hat{y})^2$ for all $N$ data points:
$$
L(w) = \frac{1}{2}\sum\limits_{i=1}^N \left(y_i - \hat{y}_i(X_i,w) \right)^2
$$
The prefactor $1/2$ is only for convenience, as will become clear later on. Without the prefactor, the loss function is nothing but the number of mispredictions. The knobs we can turn to minimize the loss are the weights $w$. The most common algorithm to find suitable weights in machine learning is called gradient descent. The idea is to compute the gradient of the loss function w.r.t. the weights and then to change the weights in small steps in the negative gradient direction. The gradient of $L$ is
$$
\frac{\partial L}{\partial w} =
\begin{pmatrix}\frac{\partial L}{\partial w_1}\\
\frac{\partial L}{\partial w_2}\\
\frac{\partial L}{\partial b}
\end{pmatrix}=
-\frac{1}{2}\sum\limits_{i=1}^N 2\left(y_i - \hat{y}_i(X_i,w) \right) \frac{\partial \hat{y}_i(X_i,w)}{\partial w} =
-\sum\limits_{i=1}^N \left(y_i - \hat{y}_i(X_i,w) \right) \delta
\begin{pmatrix}Ga^\prime_i\\
Eo^\prime_i\\
1
\end{pmatrix},
$$
with the partial derivate of $\hat{y}_i(X_i,w)$ being
$$
\frac{\partial \hat{y}_i(X_i,w)}{\partial w} = \delta
\begin{pmatrix}\frac{\partial}{\partial w_1} \left(w_1Ga^\prime_i + w_2Eo^\prime_i + b\right)\\
\frac{\partial}{\partial w_2} \left(w_1Ga^\prime_i + w_2Eo^\prime_i + b\right)\\
\frac{\partial}{\partial b} \left(w_1Ga^\prime_i + w_2Eo^\prime_i + b\right)
\end{pmatrix} = \delta
\begin{pmatrix}Ga^\prime_i\\
Eo^\prime_i\\
1
\end{pmatrix}.
$$
The derivative of the [Heaviside step function](https://en.wikipedia.org/wiki/Heaviside_step_function), used in the classifier, is the [Dirac distribution](https://en.wikipedia.org/wiki/Dirac_delta_function); in the equations above it is simply treated as one, $\delta = 1$, at each point $i$, which yields the perceptron learning rule. To update the weights, we change the weights in the negative gradient direction by a small fraction of the gradient. The **learning rate** $\eta$ determines how small or large the weight updates will be. The formula to update the weights is
$$
w^{n+1} = w^n - \eta \frac{\partial L(w)}{\partial w} =
\begin{pmatrix}w_1^n\\
w_2^n\\
b^n
\end{pmatrix} + \eta
\sum\limits_{i=1}^N \left(y_i - \hat{y}_i(X_i,w^n) \right)
\begin{pmatrix}Ga^\prime_i\\
Eo^\prime_i\\
1
\end{pmatrix}
$$
```
class SimpleClassifier():
'''Implementation of a simple *perceptron* and the perceptron learning rule.
'''
def __init__(self, eta=0.01, epochs=1000):
self.eta_ = eta
self.epochs_ = epochs
self.weights_ = np.random.rand(3)
self.loss_ = []
def train(self, X, y):
for e in range(self.epochs_):
self.weights_ += self.eta_ * self.lossGradient(X, y)
self.loss_.append(self.loss(X, y))
if self.loss_[-1] < 1.0E-6:
print("Training converged after {} epochs.".format(e))
break
def loss(self, X, y):
return 0.5 * np.sum(np.square(y - self.predict(X)))
def lossGradient(self, X, y):
return np.concatenate((X, np.ones((X.shape[0], 1))), axis=1).T.dot(y - self.predict(X))
def predict(self, X):
return np.heaviside(np.dot(np.concatenate((X, np.ones((X.shape[0], 1))), axis=1), self.weights_), 0.0)
# create reduced data set with regimes I and II and train classifier
reducedData = logData[(logData.regime == "I") | (logData.regime == "II")]
lb = preprocessing.LabelBinarizer() # the LabelBinarizer converts the labels "I" and "II" to 0 and 1
lb.fit(reducedData.regime)
y = lb.transform(reducedData.regime).ravel() # label tensor
X = reducedData[["Ga", "Eo"]].values # feature tensor
classifier = SimpleClassifier()
classifier.train(X, y)
print("Computed weights: w1={:.4f}, w2={:.4f}, b={:.4f}".format(classifier.weights_[0], classifier.weights_[1], classifier.weights_[2]))
# plot loss over epochs
plt.figure(figsize=(12, 4))
plt.plot(range(len(classifier.loss_)), classifier.loss_)
plt.xlabel(r"epoch", fontsize=fontsize)
plt.ylabel(r"loss L(w)", fontsize=fontsize)
plt.show()
plt.figure(figsize=(12, 8))
prediction = classifier.predict(np.vstack((xx.ravel(), yy.ravel())).T)
plt.contourf(xx, yy, prediction.reshape(xx.shape), cmap=ListedColormap(['C0', 'C1']), alpha=0.3)
for regime, marker in zip(regimes[:2], markers[:2]):
plt.scatter(reducedData[reducedData["regime"] == regime].Ga, reducedData[reducedData["regime"] == regime].Eo,
marker=marker, s=80, label="regime {}".format(regime))
plt.legend(loc='upper center', bbox_to_anchor=(0.5, 1.1), ncol=5, fontsize=fontsize)
plt.xlabel(r"log(Ga)", fontsize=fontsize)
plt.ylabel(r"log(Eo)", fontsize=fontsize)
plt.show()
```
Gradient descent is the standard algorithm in machine learning to determine parameters, e.g., of neural networks. The trained classifier learned to predict all points in the training set correctly. However, the result is still not satisfying in that
* some points are very close to the decision boundary,
* the loss does not decrease monotonically because the loss function is not continuous (this could lead to convergence problems), and
* the algorithm will undoubtedly fail to converge if the data is not linearly separable.
## Using conditional probabilities instead of binary classes<a id="conditional_probabilities"></a>
To overcome the convergence issues, we need a continuous loss function. The key idea to create continuous loss functions is to consider the odds for a point in the feature space to belong to a certain class instead of making a unique prediction. Before, we used the Heaviside function with $0$ corresponding to region I and $1$ corresponding to region II. Instead, we could consider the probability $p$ for a point to be in region II. Probabilities can have values between zero and one.
In the case of binary classification, the probabilities of both classes together add up to one. For example, a point very far in the orange region in the figure above should have a probability close to one to be orange and a probability close to zero to be blue. In contrast, a point very far in the blue region should have a probability close to zero to be orange and close to one to be blue. A point very close to the decision boundary should have a probability around $0.5$ for both classes. Note that the probability for a point to be in region I is the same as for not being in region II.
Before, we used the weighted sum of our features, $z$, to describe whether a point is in region I or II. A negative $z$ led to a point being classified as region I, while a positive $z$ corresponded to region II. Now we have to find a way to convert $z$ into probabilities. There are some requirements which such a transformation function should fulfill:
1. it should map any argument to a positive real number because probabilities are always positive
2. the positive real number should be in the range $0...1$
3. it should be differentiable and monotonous because we want to apply gradient descent
These requirements are met, for example, by the sigmoid function $\sigma (z) = \frac{1}{1+e^{-z}}$.
```
def sigmoid(z):
'''Compute the sigmoid function.
'''
return 1.0 / (1.0 + np.exp(-z))
plt.figure(figsize=(12, 4))
plt.plot(np.linspace(-5, 5, 100), sigmoid(np.linspace(-5, 5, 100)))
plt.xlim([-5, 5])
plt.xlabel(r"$z$", fontsize=fontsize)
plt.ylabel(r"$\sigma (z)$", fontsize=fontsize)
plt.show()
def probability(X, w):
    '''Compute the probability for the features X to be in region II.
'''
z = np.dot(np.concatenate((X, np.ones((X.shape[0], 1))), axis=1), w)
return 1.0 / (1.0 + np.exp(-z))
plt.figure(figsize=(12, 8))
cm = LinearSegmentedColormap.from_list("blue_to_orange", ['C0', 'C1'], 20)
prob = probability(np.vstack((xx.ravel(), yy.ravel())).T, classifier.weights_)
plt.contour(xx, yy, prob.reshape(xx.shape),levels=np.arange(0.3, 0.8, 0.05), cmap=cm, alpha=0.3, antialiased=True)
plt.contourf(xx, yy, prob.reshape(xx.shape),levels=np.arange(0.3, 0.8, 0.01), cmap=cm, alpha=0.3, antialiased=True)
plt.colorbar().set_label(r"$p(y=1)$", fontsize=16)
for regime, marker in zip(regimes[:2], markers[:2]):
Xr = reducedData[reducedData["regime"] == regime][["Ga", "Eo"]].values
point_prob = probability(Xr, classifier.weights_)
for i, p in enumerate(Xr):
plt.annotate("{:.2f}".format(point_prob[i]), (p[0], p[1]))
plt.scatter(reducedData[reducedData["regime"] == regime].Ga, reducedData[reducedData["regime"] == regime].Eo,
marker=marker, s=80, label="regime {}".format(regime))
plt.legend(loc='upper center', bbox_to_anchor=(0.5, 1.1), ncol=5, fontsize=fontsize)
plt.xlabel(r"log(Ga)", fontsize=fontsize)
plt.ylabel(r"log(Eo)", fontsize=fontsize)
plt.show()
```
## Maximum likelihood and cross-entropy<a id="maximum_likelihood"></a>
So far, we have got one probability for each point $X_i$ in the training set. The question is now how to combine these probabilities such that we obtain an algorithm to compute the weights. One intuitive approach is trying to maximize the likelihood for all points to be classified correctly. This means for a point $X_i$ in region I we want to maximize $p(y_i=0|X_i)$ (the probability of the event $y_i=0$ given the feature vector $X_i$), and for points in region II we want to maximize $p(y_i=1 | X_i)$. Assuming that each data point is an independent event, the combined probability of all $N$ points is their product. However, multiplying thousands or millions of values between zero and one would certainly lead to numerical difficulties. Therefore, it is more useful to take the logarithm of the combined probabilities because
1. the product becomes a summation since $\mathrm{ln}(ab) = \mathrm{ln}(a) + \mathrm{ln}(b)$, and
2. it helps to turn the maximization into a minimization problem.
So how is the maximization turned into a minimization? Probabilities take values between zero and one. If the argument of the logarithm is close to one, the result will be close to zero. If the argument is close to zero, the logarithm will be a large negative number. Combining these aspects results in a special loss function called *binary cross-entropy*:
$$
L(w) = -\frac{1}{N}\sum\limits_{i=1}^N y_i \mathrm{ln}(\hat{y}_i(X_i,w)) + (1-y_i) \mathrm{ln}(1-\hat{y}_i(X_i,w))
\quad \text{with} \quad \hat{y}_i = \sigma (z(X_i,w)).
$$
Since the logarithm of a value between zero and one is negative, the loss is defined as the negative logarithmic probabilities. Taking the mean instead of the sum makes the loss somewhat more independent of the amount of data. High probabilities $\hat{y}_i$ will lead to a small cross entropy. Note that for each point $i$ only one of the terms in the sum contributes to the loss since $y_i\in \{0,1\}$.
Therefore, minimizing the cross-entropy is the same as maximizing the likelihood of all points to be classified correctly, and we can use this minimization problem as a criterion to adjust the model's weights $w$ using gradient descent.
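To make the definition concrete, here is a minimal sketch with made-up labels and predicted probabilities (not taken from the data set) that evaluates the binary cross-entropy directly:
```
import numpy as np

# hypothetical true labels and predicted probabilities for four points
y_true = np.array([0, 0, 1, 1])
y_hat = np.array([0.1, 0.4, 0.8, 0.95])

# binary cross-entropy: mean negative log-probability assigned to the true class
loss = -np.mean(y_true * np.log(y_hat) + (1.0 - y_true) * np.log(1.0 - y_hat))
print("binary cross-entropy: {:.4f}".format(loss))

# a confident, correct prediction contributes a small loss ...
print(-np.log(0.99))  # approx. 0.01
# ... while a confident but wrong prediction is penalized heavily
print(-np.log(0.01))  # approx. 4.61
```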
Since our classifier function $\hat{y}$ is still rather simple, we can compute the loss gradient by hand:
$$
\frac{\partial L(w)}{\partial w_j} = -\frac{1}{N}\sum\limits_{i=1}^N
\frac{y_i}{\hat{y}_i} \frac{\partial \hat{y}_i}{\partial w_j}
+ \frac{1-y_i}{1-\hat{y}_i} \left( -\frac{\partial \hat{y}_i}{\partial w_j} \right)
$$
with the partial derivative of the classifier w.r.t. the weights being
$$
\frac{\partial\hat{y}_i}{\partial w_j} = \frac{\partial \sigma}{\partial z_i} \frac{\partial z_i}{\partial w_j} =
\sigma (z_i) (1-\sigma (z_i))
\begin{pmatrix}Ga^\prime_i\\
Eo^\prime_i\\
1
\end{pmatrix} =
\hat{y}_i (1-\hat{y}_i)
\begin{pmatrix}Ga^\prime_i\\
Eo^\prime_i\\
1
\end{pmatrix}.
$$
Combining the above equations results in
$$
\frac{\partial L}{\partial w} = -\frac{1}{N}\sum\limits_{i=1}^N (y_i - \hat{y}_i)
\begin{pmatrix}Ga^\prime_i\\
Eo^\prime_i\\
1
\end{pmatrix}.
$$
Some hints if you want to understand the computation in all details:
$$
\frac{\mathrm{d}}{\mathrm{d}x}\mathrm{ln}(x) = \frac{1}{x},\quad
\frac{\mathrm{d}}{\mathrm{d}x}\sigma(x) = \sigma (x)(1-\sigma (x)),\quad
\frac{\mathrm{d} f(g(x))}{\mathrm{d} x} = \frac{\mathrm{d}f}{\mathrm{d}g}\frac{\mathrm{d}g}{\mathrm{d}x}
$$
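As a quick sanity check of the gradient expression, the following minimal sketch (using a tiny synthetic data set, not the regime data) compares the analytic gradient with a central finite-difference approximation:
```
import numpy as np

def bce_loss(w, X, y):
    '''Binary cross-entropy of a linear model with sigmoid output.'''
    Xb = np.concatenate((X, np.ones((X.shape[0], 1))), axis=1)
    y_hat = 1.0 / (1.0 + np.exp(-Xb.dot(w)))
    return -np.mean(y * np.log(y_hat) + (1.0 - y) * np.log(1.0 - y_hat))

def bce_gradient(w, X, y):
    '''Analytic gradient derived above: -1/N * sum_i (y_i - y_hat_i) * [x_i1, x_i2, 1].'''
    Xb = np.concatenate((X, np.ones((X.shape[0], 1))), axis=1)
    y_hat = 1.0 / (1.0 + np.exp(-Xb.dot(w)))
    return -Xb.T.dot(y - y_hat) / X.shape[0]

# tiny synthetic data set with two features and random weights
rng = np.random.default_rng(42)
X_check = rng.normal(size=(10, 2))
y_check = (X_check.sum(axis=1) > 0).astype(float)
w_check = rng.normal(size=3)

# central finite differences along each weight direction
eps = 1.0E-6
numeric = np.array([
    (bce_loss(w_check + eps * e, X_check, y_check)
     - bce_loss(w_check - eps * e, X_check, y_check)) / (2.0 * eps)
    for e in np.eye(3)
])
print("analytic gradient:", bce_gradient(w_check, X_check, y_check))
print("numeric gradient: ", numeric)
```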
Finally, we need to convert the predicted probability into an actual class (label). As mentioned before, points close to the decision boundary have a probability close to $0.5$. So we could say points with $p(y_i=1|X_i) \le 0.5$ belong to region I and points with $p(y_i=1|X_i) > 0.5$ belong to region II. Now we are ready to implement our improved classifier.
```
class LogClassifier():
    '''Implementation of a logistic-regression classifier.
'''
def __init__(self, eta=1.0, epochs=10000):
self.eta_ = eta
self.epochs_ = epochs
self.weights_ = np.random.rand(3)
self.loss_ = []
def train(self, X, y, tol):
for e in range(self.epochs_):
self.weights_ += self.eta_ * self.lossGradient(X, y)
self.loss_.append(self.loss(X, y))
if self.loss_[-1] < tol:
print("Training converged after {} epochs.".format(e))
break
def loss(self, X, y):
logProb = y * np.log(self.probability(X)) + (1.0 - y) * np.log(1.0 - self.probability(X))
return - np.mean(logProb)
def lossGradient(self, X, y):
return np.concatenate((X, np.ones((X.shape[0], 1))), axis=1).T.dot(y - self.probability(X)) / X.shape[0]
def probability(self, X):
z = np.dot(np.concatenate((X, np.ones((X.shape[0], 1))), axis=1), self.weights_)
return 1.0 / (1.0 + np.exp(-z))
def predict(self, X):
return np.heaviside(self.probability(X) - 0.5, 0.0)
logClassifier = LogClassifier()
logClassifier.train(X, y, tol=0.1)
print("Computed weights: w1={:.4f}, w2={:.4f}, b={:.4f}".format(classifier.weights_[0], classifier.weights_[1], classifier.weights_[2]))
# plot loss over epochs
plt.figure(figsize=(12, 4))
plt.plot(range(len(logClassifier.loss_)), logClassifier.loss_)
plt.xlabel(r"epoch", fontsize=fontsize)
plt.ylabel(r"loss L(w)", fontsize=fontsize)
plt.show()
plt.figure(figsize=(12, 8))
prediction = logClassifier.predict(np.vstack((xx.ravel(), yy.ravel())).T)
plt.contourf(xx, yy, prediction.reshape(xx.shape), cmap=ListedColormap(['C0', 'C1']), alpha=0.3)
for regime, marker in zip(regimes[:2], markers[:2]):
plt.scatter(reducedData[reducedData["regime"] == regime].Ga, reducedData[reducedData["regime"] == regime].Eo,
marker=marker, s=80, label="regime {}".format(regime))
plt.legend(loc='upper center', bbox_to_anchor=(0.5, 1.1), ncol=5, fontsize=fontsize)
plt.xlabel(r"log(Ga)", fontsize=fontsize)
plt.ylabel(r"log(Eo)", fontsize=fontsize)
plt.show()
```
Great, it works! What you may have observed:
- the loss decreases now monotonically
- the decision boundary looks slightly better in the sense that some of the points very close to it are a bit farther away now
- the algorithm could also converge to the minimum of the loss function if the data were not linearly separable
- we needed more iterations to get to the final weights because the loss gradient decreases with $y-\hat{y}$ as the probabilistic prediction improves
## Non-linear decision boundaries<a id="non_linear_boundaries"></a>
Now we will look at a more complicated case. Region I has two neighboring regions, namely regions II and III. The goal is to separate region I from the other two regions, so it is still a binary classification problem. However, a straight line will not work this time to isolate region I. If you look again at the first plot with all data points of all regions, you may notice that we could draw two lines to solve the problem. The first one will separate region I from region II, and the second one will separate region I from region III. So let's do that first.
```
# reduced data sets
data_I_II = logData[(logData.regime == "I") | (logData.regime == "II")]
data_I_III = logData[(logData.regime == "I") | (logData.regime == "III")]
# The LabelBinarizer converts the labels "I" to 0 and "II/III" to 1
lb_I_II = preprocessing.LabelBinarizer()
lb_I_III = preprocessing.LabelBinarizer()
lb_I_II.fit(data_I_II.regime)
lb_I_III.fit(data_I_III.regime)
# labels and features
y_I_II = lb_I_II.transform(data_I_II.regime).ravel()
y_I_III = lb_I_III.transform(data_I_III.regime).ravel()
X_I_II = data_I_II[["Ga", "Eo"]].values
X_I_III = data_I_III[["Ga", "Eo"]].values
# classifier to separate region I and II
classifier_I_II = LogClassifier()
classifier_I_II.train(X_I_II, y_I_II, tol=0.1)
print("Computed weights: w1={:.4f}, w2={:.4f}, b={:.4f}".format(
classifier_I_II.weights_[0], classifier_I_II.weights_[1], classifier_I_II.weights_[2]))
# classifier to separate region I and III
classifier_I_III = LogClassifier()
classifier_I_III.train(X_I_III, y_I_III, tol=0.05)
print("Computed weights: w1={:.4f}, w2={:.4f}, b={:.4f}".format(
classifier_I_III.weights_[0], classifier_I_III.weights_[1], classifier_I_III.weights_[2]))
fig, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(18, 8))
xxl, yyl = np.meshgrid(np.arange(0.5, 3.0, resolution), np.arange(-1.2, 2.5, resolution))
# region I and II
prediction_I_II = classifier_I_II.predict(np.vstack((xxl.ravel(), yyl.ravel())).T)
ax1.contourf(xxl, yyl, prediction_I_II.reshape(xxl.shape), cmap=ListedColormap(["C0", "C1"]), alpha=0.3)
for regime, marker in zip(regimes[:2], markers[:2]):
ax1.scatter(data_I_II[data_I_II["regime"] == regime].Ga, data_I_II[data_I_II["regime"] == regime].Eo,
marker=marker, s=80, label="regime {}".format(regime))
# region I and III
prediction_I_III = classifier_I_III.predict(np.vstack((xxl.ravel(), yyl.ravel())).T)
ax2.contourf(xxl, yyl, prediction_I_III.reshape(xxl.shape), cmap=ListedColormap(["C0", "C2"]), alpha=0.3)
for regime, marker, color in zip(["I", "III"], ["o", "<"], ["C0", "C2"]):
ax2.scatter(data_I_III[data_I_III["regime"] == regime].Ga, data_I_III[data_I_III["regime"] == regime].Eo,
marker=marker, color=color, s=80, label="regime {}".format(regime))
ax1.legend(loc='upper center', bbox_to_anchor=(0.5, 1.1), ncol=2, fontsize=fontsize)
ax2.legend(loc='upper center', bbox_to_anchor=(0.5, 1.1), ncol=2, fontsize=fontsize)
ax1.set_xlabel(r"log(Ga)", fontsize=fontsize)
ax2.set_xlabel(r"log(Ga)", fontsize=fontsize)
ax1.set_ylabel(r"log(Eo)", fontsize=fontsize)
plt.show()
```
We have two linear models which allow us to isolate region I in two steps. But what if we wanted a single model that solves the problem in one step? How can we combine these linear models?
Both models compute probabilities for a point to be __not__ in region I (because that is how they were set up). The simplest way to combine both models is to add their probabilities. We could also weight the linear models or subtract a constant offset. Since the sum of the probabilities can become larger than one, we also need to map them back to the range $0...1$, for example, using the sigmoid function. The new probability for a point to be in region II or III is
$$
\hat{y}_{i,II,III} = \sigma (w_{21}\hat{y}_{i,II} + w_{22}\hat{y}_{i,III} + b_2) = \sigma (w_{21}\sigma(z_{i,II}) + w_{22}\sigma(z_{i,III}) + b_2)
$$
```
probability_I_II_III = sigmoid(0.9 * classifier_I_II.probability(np.vstack((xxl.ravel(), yyl.ravel())).T)
+ 0.9 * classifier_I_III.probability(np.vstack((xxl.ravel(), yyl.ravel())).T)
- 0.5)
prediction_I_II_III = np.heaviside(probability_I_II_III - 0.5, 0.0)
fig, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(18, 8), gridspec_kw={'width_ratios': [1.2, 1]})
# plot probabilities
cf1 = ax1.contourf(xxl, yyl, probability_I_II_III.reshape(xxl.shape), levels=np.arange(0.3, 0.8, 0.01), cmap=cm, alpha=0.3, antialiased=True)
c1 = ax1.contour(xxl, yyl, probability_I_II_III.reshape(xxl.shape), levels=np.arange(0.3, 0.8, 0.05), cmap=cm, alpha=0.6, antialiased=True)
ax1.clabel(c1, inline=1, fontsize=10)
fig.colorbar(cf1, ax=[ax1]).set_label(r"$p(y_{II,III}=1)$", fontsize=16)
# plot resulting decicion boundary
ax2.contourf(xxl, yyl, prediction_I_II_III.reshape(xxl.shape), cmap=ListedColormap(['C0', 'C1']), alpha=0.3)
# plot data point for region I and II in subplot 2
for regime, marker in zip(regimes[:3], markers[:3]):
ax2.scatter(logData[logData["regime"] == regime].Ga, logData[logData["regime"] == regime].Eo,
marker=marker, s=80, label="regime {}".format(regime))
ax1.set_xlabel(r"log(Ga)", fontsize=fontsize)
ax1.set_ylabel(r"log(Eo)", fontsize=fontsize)
ax2.set_xlabel(r"log(Ga)", fontsize=fontsize)
ax2.legend(loc='upper center', bbox_to_anchor=(0.5, 1.1), ncol=3, fontsize=fontsize)
plt.show()
```
The weighted combination of two linear models piped into a non-linear function (sigmoid) creates a new non-linear model with a more complex decision boundary. If we were to draw a graphical representation of the probability function, it might look like the following sketch:
```
if IN_COLAB:
display(Image(url="https://raw.githubusercontent.com/AndreWeiner/machine-learning-applied-to-cfd/master/notebooks/combined_linear_models.png"))
else:
display(Image("combined_linear_models.png"))
```
## The multi-layer perceptron<a id="multi_layer_perceptron"></a>
The function we ended up with is called **Multi-Layer-Perceptron** (MLP), also known as a vanilla **Neural Network**, and the motivation for this naming can be inferred from the sketch above:
- the function arguments form the nodes in the *input-layer*
- the nodes in the *hidden layer* represent linear models
- the arrows connecting two nodes represent a weight (a function parameter)
- each node sums up its weighted inputs and transforms the sum using a so-called *activation function*, which is the sigmoid function in our case
- the nodes with *1* inside represent bias units (offsets of the linear functions)
- for binary classification, there is exactly one node forming the output layer, which is ultimately the probability $\hat{y}$
Now you could imagine that it is possible to combine more and more linear models in the hidden layer to form very complex decision boundaries. Also, we could add more hidden layers and combine the output of the previous layer to get an even stronger non-linear transformation. In fact, the non-linear transformation obtained by using multiple hidden layers is one of the key concepts for the success and popularity of neural networks. MLPs or neural networks in general with multiple hidden layers are called **Deep Neural Networks** and the weight optimization based on data sets is called **Deep Learning** (sometimes deep learning is also used as a synonym for all the technology and theory around building and training deep neural networks).
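To connect the sketch with the math, a minimal numpy forward pass of such an MLP is shown below. The weight values are hypothetical and only illustrate the structure (two inputs, two hidden sigmoid units, one sigmoid output); they are not the trained values from above:
```
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(X, W1, b1, W2, b2):
    '''Forward pass: input -> hidden layer (sigmoid) -> output probability.'''
    hidden = sigmoid(X.dot(W1) + b1)      # each hidden node is a linear model + activation
    return sigmoid(hidden.dot(W2) + b2)   # the output node combines the hidden nodes

# hypothetical weights: 2 features, 2 hidden neurons, 1 output
W1 = np.array([[8.0, 2.0], [1.0, -6.0]])   # shape (n_features, n_hidden)
b1 = np.array([-12.0, 3.0])                # one bias per hidden node
W2 = np.array([0.9, 0.9])                  # shape (n_hidden,)
b2 = -0.5                                  # output bias

X_demo = np.array([[1.5, 0.0], [2.5, 1.0]])  # two points in (log(Ga), log(Eo)) space
print(mlp_forward(X_demo, W1, b1, W2, b2))   # one probability per point
```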
## Multi-class classification<a id="multi_class"></a>
### One-hot encoding<a id="one_hot"></a>
In the binary classification problem, we converted our *String* labels into numeric values using a binary encoding:
| original label | numeric label |
|:--------------:|:-------------:|
| I | 0 |
| II | 1 |
We can follow the same pattern to convert all the other classes into numeric values:
| original label | numeric label |
|:--------------:|:-------------:|
| I | 0 |
| II | 1 |
| III | 2 |
| IV | 3 |
| V | 4 |
Considering what we learned before, there are several problems with such an encoding for multi-class classification. The model should output a probability for each class, thus a value between zero and one. Also, the increasing numeric label values suggest a continuous relationship between the different classes which does not exist. At the latest when there are more than two neighboring regions, it becomes clear that such an encoding is not practical.
The solution to the encoding problem is to introduce new labels, one for each class (region). Instead of having one label with five classes, one for each region, we expand the data set to five labels with one class per label. This is somewhat similar to looking at the problem as five binary classifications.
| original label | numeric label | $\rightarrow$ | is I? | is II? | is III? | is IV? | is V?|
|:--------------:|:-------------:|:-------------:|:-----:|:------:|:-------:|:------:|:----:|
| I | 0 | $\rightarrow$ | 1 | 0 | 0 | 0 | 0 |
| II | 1 | $\rightarrow$ | 0 | 1 | 0 | 0 | 0 |
| III | 2 | $\rightarrow$ | 0 | 0 | 1 | 0 | 0 |
| IV | 3 | $\rightarrow$ | 0 | 0 | 0 | 1 | 0 |
| V | 4 | $\rightarrow$ | 0 | 0 | 0 | 0 | 1 |
This strategy is called **one-hot encoding**. In *PyTorch*, we don't have to create a one-hot encoded label explicitly. The loss function implementation used later on takes numeric labels as an input and creates the encoding for us. What we have to do before, however, is to convert the region labels (*String*) into numeric labels (*int*). The *sklearn* [LabelEncoder](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html) provides exactly that functionality.
```
le = preprocessing.LabelEncoder()
y_numeric = le.fit_transform(logData.regime)
sample = logData.regime.sample(5)
for l, ln in zip(sample, le.transform(sample.values)): print("label: {:3s}, numeric label: {:1d}".format(l, ln))
```
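For illustration only, the numeric labels from the previous cell could be expanded into an explicit one-hot matrix with a single `np.eye` lookup; the loss function used below makes this step unnecessary:
```
# explicit one-hot encoding of the numeric labels (illustration only)
y_one_hot = np.eye(len(le.classes_))[y_numeric]
print("one-hot label shape:", y_one_hot.shape)  # (n_points, n_classes)
print(y_one_hot[:5])
```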
### Softmax function<a id="softmax"></a>
The generalization of the sigmoid function to multiple classes, returning a probability vector with one probability for each class, is called **softmax function**. For a class $j$ and $K$ classes, softmax is defined as
$$
p(y_{ij}=1 | X_i) = \frac{e^{z_{ij}}}{\sum_{j=0}^{K-1} e^{z_{ij}}}.
$$
Note that we need the softmax function only for the output layer. For the hidden layers, we can use different activation functions like *sigmoid*, *tanh*, *ReLu*, etc. In a binary classification problem, the softmax function turns back into sigmoid.
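A minimal numpy sketch of the softmax function, applied to one hypothetical row of scores $z_{ij}$:
```
def softmax(z):
    '''Convert a vector of scores into class probabilities.'''
    e = np.exp(z - np.max(z))   # subtract the maximum for numerical stability
    return e / e.sum()

z_demo = np.array([2.0, 1.0, 0.1, -1.0, 0.5])   # hypothetical scores for five classes
p_demo = softmax(z_demo)
print(p_demo, p_demo.sum())   # the probabilities sum to one
```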
### Categorical cross-entropy<a id="categorial_cross_entropy"></a>
As softmax is the generalization of sigmoid to multiple classes, categorical cross-entropy is the multi-class extension of binary cross-entropy. For each data point $i$ and class $j$, it is defined as
$$
L(w) = -\frac{1}{N} \sum\limits_{j=0}^{K-1}\sum\limits_{i=1}^{N} y_{ij} \mathrm{ln}\left( \hat{y}_{ij} \right),
$$
where $y$ is a second order tensor with $N$ rows and $K$ columns. One entry in each row is equal to one while all other entries in that row are zero. The tensor $\hat{y}$ has the same dimensions but contains the class probabilities for all data points. The values in each row sum up to one.
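A minimal sketch evaluating the categorical cross-entropy for two hypothetical data points and five classes:
```
# one-hot labels y (N=2 rows, K=5 columns) and predicted class probabilities y_hat
y_demo = np.array([[1, 0, 0, 0, 0],
                   [0, 0, 1, 0, 0]])
y_hat_demo = np.array([[0.7, 0.1, 0.1, 0.05, 0.05],
                       [0.2, 0.1, 0.6, 0.05, 0.05]])
# only the entry belonging to the true class contributes in each row
loss_demo = -np.mean(np.sum(y_demo * np.log(y_hat_demo), axis=1))
print("categorical cross-entropy: {:.4f}".format(loss_demo))
```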
### A note on the implementation in *PyTorch*<a id="note_on_pytorch"></a>
For the final classifier, we will use the popular deep learning library [PyTorch](https://pytorch.org/), which comes packed with many useful machine learning algorithms. The following PyTorch-based implementation deviates slightly from the formulas above in that a *log_softmax* function is used together with the *NLLLoss* function (negative log-likelihood loss), which expects log-probabilities as input. This combination is a design choice to avoid explicitly evaluating the division in the softmax function (potential underflow).
To find the network weights, we use a gradient descent algorithm enhanced with some empirical rules called [ADAM](https://arxiv.org/abs/1412.6980). The gradient is computed based on [automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) (AD). The basic idea is to employ the [chain-rule](https://en.wikipedia.org/wiki/Chain_rule) until we end up with some simple expression of which we know the exact derivative. Consider, for example, the function
$$
f(x) = \mathrm{sin}\left( x^2 \right).
$$
This function could be written as $f(g(x)) = \mathrm{sin}\left( g(x) \right)$ with $g(x)=x^2$. Using the chain rule, we know that the generic derivative w.r.t. $x$ will be
$$
\frac{\mathrm{d}f}{\mathrm{d}x} = \frac{\mathrm{d}f}{\mathrm{d}g}\frac{\mathrm{d}g}{\mathrm{d}x}.
$$
In an AD framework, every basic function like $\mathrm{sin}(x)$ and $x^n$ is implemented together with its derivative, here $\mathrm{cos}(x)$ and $n x^{n-1}$. Employing the chain rule, we can compute the derivative of any combination of these basic functions automatically. In the example above we get
$$
\frac{\mathrm{d}f}{\mathrm{d}x} = \mathrm{cos}\left( x^2 \right) 2x
$$
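A minimal PyTorch sketch of this example shows that automatic differentiation reproduces the hand-derived result $\mathrm{cos}\left( x^2 \right) 2x$:
```
x = torch.tensor(1.5, requires_grad=True)
f = torch.sin(x**2)
f.backward()   # automatic differentiation via the chain rule
print("autograd:", x.grad.item())
print("by hand: ", (torch.cos(x.detach()**2) * 2.0 * x.detach()).item())
```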
```
# Simple MLP built with PyTorch
class PyTorchClassifier(nn.Module):
'''Multi-layer perceptron with 3 hidden layers.
'''
def __init__(self, n_features=2, n_classes=5, n_neurons=60, activation=torch.sigmoid):
super().__init__()
self.activation = activation
self.layer_1 = nn.Linear(n_features, n_neurons)
self.layer_2 = nn.Linear(n_neurons, n_neurons)
self.layer_3 = nn.Linear(n_neurons, n_classes)
def forward(self, x):
x = self.activation(self.layer_1(x))
x = self.activation(self.layer_2(x))
return F.log_softmax(self.layer_3(x), dim=1)
regimeClassifier = PyTorchClassifier()
# categorical cross-entropy taking logarithmic probabilities
criterion = nn.NLLLoss()
# stochastic gradient descent variant: Adam
optimizer = optim.Adam(regimeClassifier.parameters(), lr=0.005)
epochs = 2000
losses = []
# convert feature and label arrays into PyTorch tensors
featureTensor = torch.from_numpy(np.float32(logData[["Ga", "Eo"]].values))
labelTensor = torch.tensor(y_numeric, dtype=torch.long)
for e in range(1, epochs):
optimizer.zero_grad()
# run forward pass through the network
log_prob = regimeClassifier(featureTensor)
# compute cross entropy
loss = criterion(log_prob, labelTensor)
# compute gradient of the loss function w.r.t. to the model weights
loss.backward()
# update weights
optimizer.step()
# keep track and print progress
losses.append(loss.item())
    if e % 100 == 0:
print("Training loss after {} epochs: {}".format(e, loss.item()))
if losses[-1] < 4.0E-3: break
# plot loss over epochs
plt.figure(figsize=(12, 4))
plt.plot(range(len(losses)), losses)
plt.xlabel(r"epoch", fontsize=fontsize)
plt.ylabel(r"loss L(w)", fontsize=fontsize)
plt.show()
fig, ax = plt.subplots(figsize=(12, 8))
# color predicted regions
xxf, yyf = np.meshgrid(np.arange(0.7, 2.8, resolution), np.arange(-1.2, 2.6, resolution))
Xf = torch.from_numpy(np.float32(np.vstack((xxf.ravel(), yyf.ravel())).T))
class_prob = regimeClassifier(Xf).exp().detach().numpy()
predictionf = np.argmax(class_prob, axis=1) + 0.01 # addition of small number for plotting
cmap = ListedColormap(["C{:1d}".format(i) for i in range(5)])
ax.contourf(xxf, yyf, predictionf.reshape(xxf.shape), cmap=cmap, alpha=0.3, antialiased=True)
# plot data point for region I and II
for regime, marker in zip(regimes, markers):
ax.scatter(logData[logData["regime"] == regime].Ga, logData[logData["regime"] == regime].Eo,
marker=marker, s=80, label="regime {}".format(regime))
ax.legend(loc='upper center', bbox_to_anchor=(0.5, 1.1), ncol=5, fontsize=fontsize)
ax.set_xlabel(r"log(Ga)", fontsize=fontsize)
ax.set_ylabel(r"log(Eo)", fontsize=fontsize)
plt.show()
```
## Final notes<a id="final_notes"></a>
The final result looks very convincing, even more so than the manually drawn decision boundaries from the original article. However, there are still some points we could improve:
- The training was stopped after the loss decreased below a certain tolerance. Depending on the chosen tolerance, the final decision boundary will also vary. If we have many parameters and train for many epochs, the model can become over-adjusted to the training data (over-fitting). In that case, the model will have high accuracy on the training data, but it may not generalize very well to new data points of which we do not know the label.
- A common strategy to avoid over-fitting is to split the dataset into training, validation, and test data. Training and validation data are used to train the model and to check when it starts to over-fit (keywords: cross-validation, early stopping). The test set is not used for training but only for the final evaluation. A minimal sketch of such a split follows below.
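A minimal sketch of such a split, assuming the `logData` frame and the numeric labels from above and using scikit-learn's `train_test_split` with arbitrary example ratios (roughly 60/20/20):
```
from sklearn.model_selection import train_test_split

X_all = logData[["Ga", "Eo"]].values
# hold out 20 % of the data as a test set
X_train, X_test, y_train, y_test = train_test_split(
    X_all, y_numeric, test_size=0.2, stratify=y_numeric, random_state=0)
# split the remainder again into training and validation data
X_train, X_val, y_train, y_val = train_test_split(
    X_train, y_train, test_size=0.25, stratify=y_train, random_state=0)
print("train/val/test sizes:", len(y_train), len(y_val), len(y_test))
```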
```
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import json
import sys
import os
import scipy
import scipy.io
from scipy import stats
path_root = os.environ.get('DECIDENET_PATH')
path_code = os.path.join(path_root, 'code')
if path_code not in sys.path:
sys.path.append(path_code)
from dn_utils.behavioral_models import load_behavioral_data
plt.style.use('ggplot')
plt.rcParams.update({'font.size': 14})
path_jags = os.path.join(path_root, 'data/main_fmri_study/derivatives/jags')
path_parameter_estimates = os.path.join(path_jags, 'parameter_estimates')
path_vba = os.path.join(path_jags, 'vba')
# Load behavioral data
path_beh = os.path.join(path_root, 'data/main_fmri_study/sourcedata/behavioral')
beh, meta = load_behavioral_data(path_beh)
n_subjects, n_conditions, n_trials, _ = beh.shape
# Load parameter estimates
alpha_pdci = np.load(os.path.join(path_parameter_estimates, 'alpha_pdci_mle_3digits.npy'))
# Load posterior model probabilities for sequential model
pmp = scipy.io.loadmat(
os.path.join(path_vba, 'pmp_HLM_sequential_split.mat'),
squeeze_me=True)['pmp']
```
## Behavioral performance
Behavioral performance is quantified as accuracy – the frequency of correct choices over the entire task condition. In the reward-seeking condition a correct choice leads to a gain of points, whereas in the punishment-avoiding condition a correct choice leads to avoiding a loss.
- **Test 1**: Is performance above chance level?
- one-sample, one-sided t-test
- $H_0$: Accuracy is at chance level.
- $H_a$: Accuracy is greater than 50%.
- **Test 2**: Do the tasks differ in performance?
- paired, two-sided t-test
- $H_0$: Accuracy for the reward-seeking and punishment-avoiding conditions is equal.
- $H_a$: Accuracy differs between conditions.
```
# Mean accuracy for all subject and both task conditions
won_bool_mean = np.mean(beh[:, :, :, meta['dim4'].index('won_bool')], axis=2)
# Test 1
t_rew_test1, p_rew_test1 = stats.ttest_1samp(won_bool_mean[:, 0], popmean=0.5)
t_pun_test1, p_pun_test1 = stats.ttest_1samp(won_bool_mean[:, 1], popmean=0.5)
print(f'Test 1 (rew): t={t_rew_test1}, p={p_rew_test1 / 2}, accu={np.mean(won_bool_mean[:, 0])}')
print(f'Test 1 (pun): t={t_pun_test1}, p={p_pun_test1 / 2}, accu={np.mean(won_bool_mean[:, 1])}')
# Test 2
t_test2, p_test2 = stats.ttest_rel(won_bool_mean[:, 0], won_bool_mean[:, 1])
print(f'Test 2: t={t_test2}, p={p_test2}')
```
### Reward magnitude influence on choice
- **Test 3**: Does the difference between reward magnitudes affect choice?
- Pearson's correlation
- variable 1: difference in reward magnitude for left and right side
- variable 2: averaged (across subjects and conditions) probability of choosing right side
In the `response_probability` array, the first column contains all unique values of the difference in reward magnitude between the right and left side, and the second column contains the proportion of right-side choices for the corresponding difference in reward magnitude.
```
magn_rl_diff = beh[:, :, :, meta['dim4'].index('magn_right')] \
- beh[:, :, :, meta['dim4'].index('magn_left')]
response = beh[:, :, :, meta['dim4'].index('response')]
diff_values = np.unique(magn_rl_diff)
response_probability = np.zeros((len(diff_values), 2))
response_probability[:, 0] = diff_values
for i, diff in enumerate(diff_values):
diff_response = response[magn_rl_diff == diff]
diff_response = diff_response[np.nonzero(diff_response)]
response_probability[i, 1] = np.mean((diff_response + 1) / 2)
# Test 3
magn_rl_diff_stat = stats.pearsonr(response_probability[:, 0], response_probability[:, 1])
print('Test 3: r={:.3f}, p={}'.format(magn_rl_diff_stat[0], magn_rl_diff_stat[1]))
x = response_probability[:, 0]
y = response_probability[:, 1]
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(8, 5), facecolor='w')
ax.plot(x, y, 'ko')
ax.plot(x, np.poly1d(np.polyfit(x, y, 1))(x))
ax.set_xlabel('$x_r - x_l$')
ax.set_ylabel('right side choice proportion')
ax.set_ylim([0, 1])
plt.show()
```
### Probability matching behavior
Probability matching behavior results in frequent option switching after an incorrect choice. A simple proxy for probability matching behavior is the number of reversals – each reversal is a single switch of the chosen side.
- **Test 4**: Do the tasks differ in probability matching behavior?
- paired, two-sided t-test
- $H_0$: Mean number of reversals is equal for both conditions.
- $H_a$: Mean number of reversals differs between conditions.
- **Test 5**: Is there a relationship between probability matching behavior and the difference in learning rates for positive and negative prediction errors?
- Pearson's correlation
- variable 1: number of reversals
- variable 2: difference in estimated learning rates for positive and negative PEs, $\alpha_{+}-\alpha_{-}$ (PDCI model)
```
# Number of reversals for each participant
def calculate_reversals(response):
'''Calculate number of side switches in subject responses.'''
return len(np.nonzero(np.diff(response[np.nonzero(response)]))[0])
reversals = np.zeros((n_subjects, n_conditions))
for i in range(n_subjects):
for j in range(n_conditions):
reversals[i, j] = calculate_reversals(beh[i, j, :, meta['dim4'].index('response')])
print(f'Mean number of reversals (rew): {np.mean(reversals[:, 0])}')
print(f'Mean number of reversals (pun): {np.mean(reversals[:, 1])}')
print(f'SD for reversals (rew): {np.std(reversals[:, 0])}')
print(f'SD for reversals (pun): {np.std(reversals[:, 1])}')
# Test 4
t_test4, p_test4 = stats.ttest_rel(reversals[:, 0], reversals[:, 1])
print(f'Test 4: t={t_test4}, p={p_test4}')
# Test 5
alpha_diff_reversal_stat = stats.pearsonr(
alpha_pdci[:,0] - alpha_pdci[:,1],
np.mean(reversals, axis=1)
)
print('Test 5: r={:.3f}, p={}'.format(alpha_diff_reversal_stat[0], alpha_diff_reversal_stat[1]))
# Color indicates value alpha-, size indicates goodness-of-fit for the PDCI model
x = alpha_pdci[:,0] - alpha_pdci[:,1]
y = np.mean(reversals, axis=1)
s = 100*(pmp[2, :] / np.max(pmp[2, :])) + 30
c = alpha_pdci[:, 0]
fig, ax = plt.subplots(figsize=(6, 5), facecolor='w')
sc = ax.scatter(
x, y, s=s, c=c,
cmap='bone_r', vmin=0, vmax=1,
linewidth=1, edgecolor='k',
)
plt.colorbar(sc)
ax.plot(x, np.poly1d(np.polyfit(x, y, 1))(x), 'k')
ax.set_xlabel(r'$\alpha_{+} - \alpha_{-}$')
ax.set_ylabel('Mean number of reversals')
ax.set_axisbelow(True)
ax.set_title('Reversal tendency')
ax.grid()
plt.tight_layout()
reversals
```
```
import numpy as np
import pandas as pd
import plotly.express as px
from scipy import stats
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm as anova
import itertools
from sklearn import linear_model
from numpy import ones,vstack
from numpy.linalg import lstsq
df=pd.read_csv('../data/ames_housing_price_data_v2.csv', index_col=0)
pd.options.display.max_rows=400
typedict = {'PID' : 'nominal',
'SalePrice' : 'continuous',
#Matt
'LotFrontage' : 'continuous',
'LotArea' : 'continuous',
'maybe_LotShape' : 'nominal',
'LandSlope' : 'nominal',
'LandContour' : 'nominal',
'maybe_MSZoning' : 'nominal',
'Street_paved' : 'nominal',
'Alley' : 'nominal',
'Neighborhood' : 'nominal',
'drop_LotConfig' : 'nominal',
'drop_Condition1' : 'nominal',
'drop_Condition2' : 'nominal',
'Foundation' : 'nominal',
'Utilities' : 'nominal',
'Heating' : 'nominal',
'HeatingQC_nom' : 'ordinal',
'CentralAir' : 'nominal',
'Electrical' : 'nominal',
'HeatingQC_ord' : 'ordinal',
'LotShape_com' : 'nominal',
'MSZoning_com' : 'nominal',
'LF_Normal' : 'nominal',
'LF_Near_NS_RR' : 'nominal',
'LF_Near_Positive_Feature' : 'nominal',
'LF_Adjacent_Arterial_St' : 'nominal',
'LF_Near_EW_RR' : 'nominal',
'LF_Adjacent_Feeder_St' : 'nominal',
'LF_Near_Postive_Feature' : 'nominal',
'Heating_com' : 'nominal',
'Electrical_com' : 'nominal',
'LotConfig_com' : 'nominal',
'LotFrontage_log' : 'continuous',
'LotArea_log' : 'continuous',
#Oren
'MiscFeature': 'Nominal',
'Fireplaces': 'Discrete',
'FireplaceQu': 'Ordinal',
'PoolQC': 'Ordinal',
'PoolArea': 'Continuous',
'PavedDrive': 'Nominal',
'ExterQual': 'Ordinal',
'OverallQual': 'Ordinal',
'drop_OverallCond': 'Ordinal',
'MiscVal': 'Continuous',
'YearBuilt': 'Discrete',
'YearRemodAdd': 'Discrete',
'KitchenQual': 'Ordinal',
'Fence': 'Ordinal',
'RoofStyle': 'Nominal',
'RoofMatl': 'Nominal',
'maybe_Exterior1st': 'Nominal',
'drop_Exterior2nd': 'Nominal',
'drop_ExterCond': 'Ordinal',
'maybe_MasVnrType': 'Nominal',
'MasVnrArea': 'Continuous',
#Mo
#Basement
'BsmtQual_ord': 'Ordinal',
'BsmtCond_ord': 'Ordinal',
'BsmtExposure_ord': 'Ordinal',
'BsmtQual_ord_lin': 'Ordinal',
'BsmtCond_ord_lin': 'Ordinal',
'BsmtExposure_ord_lin': 'Ordinal',
'TotalBsmtSF': 'Continuous',
'BSMT_GLQ':'Continuous',
'BSMT_Rec':'Continuous',
'maybe_BsmtUnfSF': 'Continuous',
'maybe_BSMT_ALQ':'Continuous',
'maybe_BSMT_BLQ':'Continuous',
'maybe_BSMT_LwQ':'Continuous',
'drop_BsmtQual': 'Nominal',
'drop_BsmtCond': 'Nominal',
'drop_BsmtExposure': 'Nominal',
'drop_BsmtFinType1': 'Nominal',
'drop_BsmtFinSF1': 'Continuous',
'drop_BsmtFinType2': 'Nominal',
'drop_BsmtFinSF2': 'Continuous',
#Deck
'WoodDeckSF':'Continuous',
'OpenPorchSF':'Continuous',
'ScreenPorch':'Continuous',
'maybe_EnclosedPorch':'Continuous',
'maybe_3SsnPorch':'Continuous',
#Garage
'GarageFinish':'Nominal',
'GarageYrBlt':'Continuous',
'GarageCars':'Ordinal',
'GarageArea':'Continuous',
'GarageType_con':'Nominal',
'maybe_GarageQual':'Nominal',
'maybe_GarageCond':'Nominal',
'drop_GarageType':'Nominal'
}
def EDA_plots(df, features = df.columns, targets = ['SalePrice'], diction = typedict):  # pass the type dictionary itself (a dict), not a list
# can pass features = [list of features] and targets = [list of targets]
# to get plots and regressions of different variables
for feature in features:
for target in targets:
if feature != target and feature != 'PID':
print('feature: ',feature)
                if diction[feature].lower() == 'continuous':
scatter = px.scatter(x = df[f'{feature}'], y = df[f'{target}'])
scatter.update_layout(
title={
'text': f'Scatterplot, {feature} vs {target}',
'y':0.95,
'x':0.5,
'xanchor': 'center',
'yanchor': 'top'},
xaxis_title = f'{feature}',
yaxis_title = f'{target}'
)
scatter.show()
                if diction[feature].lower() == 'ordinal':
hist = px.histogram(x = df[f'{feature}'])
hist.update_layout(
title={
'text': f'Distribution of {feature}',
'y':0.95,
'x':0.5,
'xanchor': 'center',
'yanchor': 'top'},
xaxis_title = f'{feature}',
yaxis_title = 'Frequency'
)
hist.show()
                if diction[feature].lower() == 'nominal':
box = px.box(x = df[f'{feature}'], y = df[f'{target}'])
box.update_layout(
title={
'text': f'Boxplot, {feature} vs {target}',
'y':0.95,
'x':0.5,
'xanchor': 'center',
'yanchor': 'top'},
xaxis_title = f'{feature}',
                        yaxis_title = f'{target}'
)
box.show()
# temp = df[df[f'{feature}'].isna() == False].reset_index(drop = True)
# if type(temp.loc[0, f'{feature}']) != str:
# price_corr = temp[f'{feature}'].corr(temp[f'{target}'])
# print(f'Correlation between {feature} and {target} is {price_corr}')
# linreg = stats.linregress(temp[f'{feature}'], temp[f'{target}'] )
# print(linreg)
# print('r^2 = ',linreg.rvalue**2)
# if type(temp.loc[0, f'{feature}']) == str:
# # this is to see full multiple regression on each value of categorical variable
# # can comment this out
# fit = ols(f'{target} ~ C({feature})', data=temp).fit()
# print(fit.summary())
# # this is to see anova on whether any value of categorical variable is significantly different
# #anova_table = anova(fit, typ=2)
# #print(anova_table)
print()
EDA_plots(df, features = ['LotArea'])
typedict.get('GrLivArea')  # 'GrLivArea' is not categorized above, so .get avoids a KeyError
df[(df.index==908154205) | (df.index==902207130)].T
```
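With `diction` now defaulting to `typedict`, the helper can also be called for a mix of variable types. A minimal usage sketch (assuming the listed columns exist in `df` and are categorized in `typedict`):
```
# Sketch: one nominal and one continuous feature against SalePrice
EDA_plots(df, features=['Neighborhood', 'LotArea_log'], targets=['SalePrice'], diction=typedict)
```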
# Compare slit profile with reference profile for [O III]
A repeat of the previous workbook, but for a different emission line.
I first work through all the steps individually, looking at graphs of the intermediate results. This was used while iterating on the algorithm, by swapping out the value of `db` below for different slits that showed problems.
## First try it out by hand for a single slit
```
from pathlib import Path
import yaml
import numpy as np
from numpy.polynomial import Chebyshev
from astropy.io import fits
from astropy.wcs import WCS
import astropy.units as u
from matplotlib import pyplot as plt
import seaborn as sns
import mes_longslit as mes
dpath = Path.cwd().parent / "data"
pvpath = dpath / "pvextract"
pvpath.mkdir(exist_ok=True)
```
List of data for each [O III] slit exposure:
```
slit_db_list = yaml.safe_load((dpath / "slits-o3.yml").read_text())
```
Photometric reference image:
```
(photom,) = fits.open(dpath / "regrid" / "ha-imslit-median.fits")
wphot = WCS(photom.header)
```
To start off with, we will analyze a single slit. **This is what we change when we want to try a different slit**
```
db = slit_db_list[3]
db
```
Get the HDUs for both the slit spectrum and the image+slit. The spectrum file names are very variable, so we have an `orig_file` entry in the database:
```
spec_hdu = fits.open(dpath / "originals" / db["orig_file"])[0]
```
But the image file names are more regular and can be derived from the `image_id` entry:
```
(im_hdu,) = fits.open(dpath / "wcs" / f"cr{db['image_id']}_b-wcs.fits")
```
There is no sign of any saturated pixels in any of the exposures, so we can skip that step.
Add in extra fields to database:
- `wa` wavelength axis (1 or 2, fits order) in PV spectrum
- `ij` slit orientation in I+S (1=vertical, 2=horizontal)
```
if db["slit_id"].startswith("N"):
db["wa"] = 2
db["ij"] = 2
else:
db["wa"] = 1
db["ij"] = 1
db["s"] = 1
if "cosmic_rays" in db:
spec_hdu.data = mes.remove_cosmic_rays(
spec_hdu.data, db["cosmic_rays"], np.nanmedian(spec_hdu.data),
)
fig, ax = plt.subplots(figsize=(10, 5))
# ax.imshow(spec_hdu.data[250:700, :],
# #vmin=-3, vmax=30,
# origin="lower");
ax.imshow(spec_hdu.data[:, :], vmin=10, vmax=100, origin="lower")
```
Try to correct for gradient:
```
if "trim" in db:
# Replace non-linear part with NaNs
spec_hdu.data = mes.trim_edges(spec_hdu.data, db["trim"])
# Fit and remove the linear trend along slit
pvmed = np.nanmedian(spec_hdu.data, axis=2 - db["wa"])
s = np.arange(len(pvmed))
pvmed0 = np.nanmedian(pvmed)
sig = np.nanstd(pvmed)
m = np.abs(pvmed - pvmed0) <= db.get("bg_sig", 1) * sig
p = Chebyshev.fit(s[m], pvmed[m], db.get("bg_deg", 1))
if db["wa"] == 1:
spec_hdu.data -= p(s)[:, None]
else:
spec_hdu.data -= p(s)[None, :]
# And replace the NaNs with the median value
spec_hdu.data = mes.trim_edges(
spec_hdu.data, db["trim"], np.nanmedian(spec_hdu.data)
)
fig, ax = plt.subplots()
ax.plot(s[m], pvmed[m])
ax.plot(s, p(s))
fig, ax = plt.subplots(figsize=(10, 5))
ax.imshow(spec_hdu.data[:, :], vmin=-5, vmax=100, origin="lower")
```
So we are no longer attempting to remove the sky at this stage, but we are trying to remove the light leak or whatever it is that adds a bright background at one end of the slit. This is necessary so that the cross correlation works.
Lines to avoid when calculating the continuum
```
restwavs = {"oiii": 5006.84, "hei": 5015.68}
spec_profile = mes.extract_full_profile_from_pv(
spec_hdu,
wavaxis=db["wa"],
bandwidth=None,
linedict=restwavs,
)
```
This is the position of the slit in pixel coordinates.
```
imslit_profile = mes.extract_slit_profile_from_imslit(
im_hdu.data,
db,
slit_width=2,
)
jslit = np.arange(len(spec_profile))
spec_profile.shape, imslit_profile.shape
spec_profile -= np.median(spec_profile)
imslit_profile -= np.median(imslit_profile)
```
### Find a better way to do the alignment
I am going to experiment with using cross-correlation to estimate the along-slit offset between `spec_profile` and `imslit_profile`:
```
ns = len(spec_profile)
assert len(imslit_profile) == ns, "Incompatible lengths"
```
The above assert would fail if the binning were different between the spectrum and the im+slit, which is something I will have to deal with later.
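One possible way to handle that case later (a sketch only, not used in this notebook) would be to resample one profile onto the other's pixel grid before cross-correlating, for example with `np.interp`:
```
# Sketch: resample a profile onto a grid of n_target pixels (hypothetical helper,
# only needed if the spectrum and im+slit binning ever differ)
def match_lengths(profile, n_target):
    x_old = np.linspace(0.0, 1.0, len(profile))
    x_new = np.linspace(0.0, 1.0, n_target)
    return np.interp(x_new, x_old, profile)

# e.g. imslit_profile = match_lengths(imslit_profile, len(spec_profile))
```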
An array of pixel offsets that matches the result of `np.correlate` in "full" mode:
```
jshifts = np.arange(-(ns - 1), ns)
```
Now calculate the correlation:
```
xcorr = np.correlate(spec_profile, imslit_profile, mode="full")
fig, ax = plt.subplots(figsize=(12, 4))
ax.plot(jshifts, xcorr)
# ax.set(xlim=[-300, 300]);
```
That is a very clean result! One high narrow peak, at an offset of roughly 100, exactly where we expect it to be.
```
mm = (np.abs(jshifts) < 110) & (np.abs(jshifts) > 5)
jshift_peak = jshifts[mm][xcorr[mm].argmax()]
jshift_peak
```
That is much better, at least for this example.
```
fig, ax = plt.subplots(figsize=(12, 4))
ax.plot(jslit + jshift_peak, imslit_profile / np.max(imslit_profile))
ax.plot(jslit, spec_profile / np.max(spec_profile), alpha=0.7)
ax.set(yscale="linear", ylim=[-0.4, 1])
ax.axhline(0)
jwin_slice = slice(*db["jwin"])
jwin_slice_shift = slice(
jwin_slice.start - jshift_peak,
jwin_slice.stop - jshift_peak,
)
fig, ax = plt.subplots(figsize=(12, 4))
ax.plot(jslit[jwin_slice], imslit_profile[jwin_slice_shift])
ax.plot(jslit[jwin_slice], 100 * spec_profile[jwin_slice])
ax.set(
yscale="linear",
# ylim=[0, 1000]
)
```
We need to find the alignment along the slit. Just use the initial guess for now.
```
j0_s = np.average(jslit[jwin_slice], weights=spec_profile[jwin_slice])
j0_i = np.average(
jslit[jwin_slice_shift], weights=(10 + imslit_profile[jwin_slice_shift])
)
db["shift"] = jshift_peak
j0_s, j0_i, db["shift"]
slit_coords = mes.find_slit_coords(db, im_hdu.header, spec_hdu.header)
slit_coords["Dec"].shape, jslit.shape
calib_profile = mes.slit_profile(
slit_coords["RA"],
slit_coords["Dec"],
photom.data,
wphot,
# r = slit_coords["ds"],
)
```
Plot the calibration profile (green) compared with the spec and imslit profiles:
```
fig, ax = plt.subplots(figsize=(12, 4))
ax.plot(jslit + db["shift"], imslit_profile / imslit_profile.max())
ax.plot(jslit, spec_profile / spec_profile.max())
ax.plot(jslit, 0.05 * calib_profile)
ax.set(yscale="linear", ylim=[-0.1, 0.1])
```
This is now working fine after I fixed the pixel scale in the calibration image.
We have the ability to look at the profiles at neighboring slit positions in the calibration image. This allows us to see if we might have an error in the `islit` value:
```
neighbors = [-2, -1, 1, 2]
nb_calib_profiles = {}
for nb in neighbors:
nbdb = db.copy()
nbdb["islit"] += nb
nb_slit_coords = mes.find_slit_coords(nbdb, im_hdu.header, spec_hdu.header)
nb_calib_profiles[nb] = mes.slit_profile(
nb_slit_coords["RA"],
nb_slit_coords["Dec"],
photom.data,
wphot,
# nb_slit_coords["ds"],
)
fig, ax = plt.subplots(figsize=(12, 4))
# ax.plot(jslit + db["shift"], (imslit_profile + 20) * 10)
# ax.plot(jslit, spec_profile)
ax.plot(jslit, calib_profile, color="k", lw=2)
for nb in neighbors:
ax.plot(jslit, nb_calib_profiles[nb], label=f"${nb:+d}$")
ax.legend()
ax.set(yscale="linear")
slit_points = (np.arange(len(spec_profile)) - j0_s) * slit_coords["ds"]
jslice0 = slice(int(j0_s) - 20, int(j0_s) + 20)
rat0 = np.nansum(spec_profile[jslice0]) / np.nansum(calib_profile[jslice0])
print("Coarse calibration: ratio =", rat0)
spec_profile[jslice0]
calib_profile[jslice0]
spec_profile /= rat0
figpath = Path.cwd().parent / "figs"
figpath.mkdir(exist_ok=True)
plt_prefix = figpath / f"{db['slit_id']}-calib"
mes.make_three_plots(
spec_profile,
calib_profile,
plt_prefix,
slit_points=slit_points,
neighbors=nb_calib_profiles,
db=db,
sdb=slit_coords,
return_fig=True,
);
```
## Now try the automated way
```
for db in slit_db_list:
print(db)
spec_hdu = fits.open(dpath / "originals" / db["orig_file"])[0]
(im_hdu,) = fits.open(dpath / "wcs" / f"cr{db['image_id']}_b-wcs.fits")
if db["slit_id"].startswith("N"):
db["wa"] = 2
db["ij"] = 2
else:
db["wa"] = 1
db["ij"] = 1
db["s"] = 1
mes.pv_extract(
spec_hdu,
im_hdu,
photom,
db,
restwavs,
pvpath,
neighbors=[-1, 1],
)
db
```
## 1. The World Bank's international debt data
<p>It's not only we humans who take on debt to manage our necessities; a country may also take on debt to manage its economy. For example, infrastructure spending is one costly ingredient required for a country's citizens to lead comfortable lives. <a href="https://www.worldbank.org">The World Bank</a> is one organization that provides such loans to countries.</p>
<p>In this notebook, we are going to analyze international debt data collected by The World Bank. The dataset contains information about the amount of debt (in USD) owed by developing countries across several categories. We are going to find the answers to questions like: </p>
<ul>
<li>What is the total amount of debt that is owed by the countries listed in the dataset?</li>
<li>Which country owes the maximum amount of debt and what does that amount look like?</li>
<li>What is the average amount of debt owed by countries across different debt indicators?</li>
</ul>
<p><img src="https://assets.datacamp.com/production/project_754/img/image.jpg" alt></p>
<p>The first line of code connects us to the <code>international_debt</code> database where the table <code>international_debt</code> is residing. Let's first <code>SELECT</code> <em>all</em> of the columns from the <code>international_debt</code> table. Also, we'll limit the output to the first ten rows to keep the output clean.</p>
```
%%sql
postgresql:///international_debt
SELECT *
FROM international_debt
LIMIT 10;
```
## 2. Finding the number of distinct countries
<p>From the first ten rows, we can see the amount of debt owed by <em>Afghanistan</em> in the different debt indicators. But we do not know the number of different countries we have on the table. There are repetitions in the country names because a country is most likely to have debt in more than one debt indicator. </p>
<p>Without a count of unique countries, we will not be able to perform our statistical analyses holistically. In this section, we are going to extract the number of unique countries present in the table. </p>
```
%%sql
SELECT
COUNT(DISTINCT (country_name)) AS total_distinct_countries
FROM international_debt;
```
## 3. Finding out the distinct debt indicators
<p>We can see there are a total of 124 countries present in the table. As we saw in the first section, there is a column called <code>indicator_name</code> that briefly specifies the purpose of taking the debt. Just beside that column, there is another column called <code>indicator_code</code> which symbolizes the category of these debts. Knowing about these various debt indicators will help us to understand the areas in which a country can possibly be indebted.</p>
```
%%sql
SELECT DISTINCT(indicator_code) AS distinct_debt_indicators
FROM international_debt
ORDER BY distinct_debt_indicators;
```
## 4. Totaling the amount of debt owed by the countries
<p>As mentioned earlier, the financial debt of a particular country represents its economic state. But if we were to project this on an overall global scale, how will we approach it?</p>
<p>Let's switch gears from the debt indicators now and find out the total amount of debt (in USD) that is owed by the different countries. This will give us a sense of how the overall economy of the entire world is holding up. Note that the query below reports the total in millions of USD to keep the figure readable.</p>
```
%%sql
SELECT
ROUND(SUM(debt)/1000000, 2) AS total_debt
FROM international_debt;
```
## 5. Country with the highest debt
<p>"Human beings cannot comprehend very large or very small numbers. It would be useful for us to acknowledge that fact." - <a href="https://en.wikipedia.org/wiki/Daniel_Kahneman">Daniel Kahneman</a>. That is more than <em>3 million <strong>million</strong></em> USD, an amount which is really hard for us to fathom. </p>
<p>Now that we have the exact total of the amounts of debt owed by several countries, let's find out which country owes the highest amount of debt, along with that amount. <strong>Note</strong> that this debt is the sum of different debts owed by a country across several categories. This will help us understand more about the country in terms of its socio-economic situation. We can also find out the category in which the country owes its highest debt, but we will leave that for now.</p>
```
%%sql
SELECT
country_name,
SUM(debt) AS total_debt
FROM international_debt
GROUP BY country_name
ORDER BY total_debt DESC
LIMIT 1;
```
## 6. Average amount of debt across indicators
<p>So, it was <em>China</em>. A more in-depth breakdown of China's debts can be found <a href="https://datatopics.worldbank.org/debt/ids/country/CHN">here</a>. </p>
<p>We now have a brief overview of the dataset and a few of its summary statistics. We already have an idea of the different debt indicators in which the countries owe their debts. We can dig even further to find out, on average, how much debt is owed under each indicator. This will give us a better sense of the distribution of the amount of debt across different indicators.</p>
```
%%sql
SELECT
indicator_code AS debt_indicator,
indicator_name,
AVG(debt) AS average_debt
FROM international_debt
GROUP BY debt_indicator, indicator_name
ORDER BY average_debt DESC
LIMIT 10;
```
## 7. The highest amount of principal repayments
<p>We can see that the indicator <code>DT.AMT.DLXF.CD</code> tops the chart of average debt. This category includes repayment of long term debts. Countries take on long-term debt to acquire immediate capital. More information about this category can be found <a href="https://datacatalog.worldbank.org/principal-repayments-external-debt-long-term-amt-current-us-0">here</a>. </p>
<p>An interesting observation in the above finding is that there is a huge difference in the amounts of the indicators after the second one. This indicates that the first two indicators might be the most severe categories in which the countries owe their debts.</p>
<p>We can investigate this a bit more so as to find out which country owes the highest amount of debt in the category of long term debts (<code>DT.AMT.DLXF.CD</code>). Since not all the countries suffer from the same kind of economic disturbances, this finding will allow us to understand that particular country's economic condition a bit more specifically. </p>
```
%%sql
SELECT
country_name,
indicator_name
FROM international_debt
WHERE debt = (SELECT
MAX(debt)
FROM international_debt
WHERE indicator_code = 'DT.AMT.DLXF.CD');
```
## 8. The most common debt indicator
<p>China has the highest amount of debt in the long-term debt (<code>DT.AMT.DLXF.CD</code>) category. This is verified by <a href="https://data.worldbank.org/indicator/DT.AMT.DLXF.CD?end=2018&most_recent_value_desc=true">The World Bank</a>. It is often a good idea to verify our analyses like this since it validates that our investigations are correct. </p>
<p>We saw that long-term debt is the topmost category when it comes to the average amount of debt. But is it the most common indicator in which the countries owe their debt? Let's find that out. </p>
```
%%sql
SELECT indicator_code,
COUNT(indicator_code) AS indicator_count
FROM international_debt
GROUP BY indicator_code
ORDER BY indicator_count DESC, indicator_code DESC
LIMIT 20;
```
## 9. Other viable debt issues and conclusion
<p>There are a total of six debt indicators in which all the countries listed in our dataset have taken debt. The indicator <code>DT.AMT.DLXF.CD</code> is also in that list, which gives us a clue that all these countries are suffering from a common economic issue. But that is not the end of the story; it is just a part of it.</p>
<p>Let's change tracks from <code>debt_indicator</code>s now and focus on the amount of debt again. Let's find out the maximum amount of debt that each country has. With this, we will be in a position to identify the other plausible economic issues a country might be going through.</p>
<p>In this notebook, we took a look at debt owed by countries across the globe. We extracted a few summary statistics from the data and unraveled some interesting facts and figures. We also validated our findings to make sure the investigations are correct.</p>
```
%%sql
SELECT country_name,
MAX(debt) AS maximum_debt
FROM international_debt
GROUP BY country_name
ORDER BY maximum_debt DESC
LIMIT 10;
```
<a href="https://colab.research.google.com/github/ibaiGorordo/Deeplab-ADE20K-Inference/blob/master/DeepLab_ADE20K_inference_Demo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Overview
This colab demonstrates the steps to use the DeepLab model to perform semantic segmentation on a sample input image. Expected outputs are semantic labels overlaid on the sample image.
### About DeepLab
The models used in this colab perform semantic segmentation. Semantic segmentation models focus on assigning semantic labels, such as sky, person, or car, to multiple objects and stuff in a single image.
# Instructions
<h3><a href="https://cloud.google.com/tpu/"><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/tpu-hexagon.png" width="50"></a> Use a free TPU device</h3>
1. On the main menu, click Runtime and select **Change runtime type**. Set "TPU" as the hardware accelerator.
1. Click Runtime again and select **Runtime > Run All**. You can also run the cells manually with Shift-ENTER.
## Import Libraries
```
import os
from io import BytesIO
import tarfile
import tempfile
from six.moves import urllib
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
from PIL import Image
%tensorflow_version 1.x
import tensorflow as tf
```
## Import helper methods
These methods help us perform the following tasks:
* Load the latest version of the pretrained DeepLab model
* Load the colormap from the ADE20K dataset
* Adds colors to various labels, such as "pink" for people, "green" for bicycle and more
* Visualize an image, and add an overlay of colors on various regions
```
class DeepLabModel(object):
"""Class to load deeplab model and run inference."""
INPUT_TENSOR_NAME = 'ImageTensor:0'
OUTPUT_TENSOR_NAME = 'SemanticPredictions:0'
INPUT_SIZE = 513
FROZEN_GRAPH_NAME = 'frozen_inference_graph'
def __init__(self, tarball_path):
"""Creates and loads pretrained deeplab model."""
self.graph = tf.Graph()
graph_def = None
# Extract frozen graph from tar archive.
tar_file = tarfile.open(tarball_path)
for tar_info in tar_file.getmembers():
if self.FROZEN_GRAPH_NAME in os.path.basename(tar_info.name):
file_handle = tar_file.extractfile(tar_info)
graph_def = tf.GraphDef.FromString(file_handle.read())
break
tar_file.close()
if graph_def is None:
raise RuntimeError('Cannot find inference graph in tar archive.')
with self.graph.as_default():
tf.import_graph_def(graph_def, name='')
self.sess = tf.Session(graph=self.graph)
def run(self, image):
"""Runs inference on a single image.
Args:
image: A PIL.Image object, raw input image.
Returns:
resized_image: RGB image resized from original input image.
seg_map: Segmentation map of `resized_image`.
"""
width, height = image.size
resize_ratio = 1.0 * self.INPUT_SIZE / max(width, height)
target_size = (int(resize_ratio * width), int(resize_ratio * height))
resized_image = image.convert('RGB').resize(target_size, Image.ANTIALIAS)
batch_seg_map = self.sess.run(
self.OUTPUT_TENSOR_NAME,
feed_dict={self.INPUT_TENSOR_NAME: [np.asarray(resized_image)]})
seg_map = batch_seg_map[0]
return resized_image, seg_map
def create_ade20k_label_colormap():
"""Creates a label colormap used in ADE20K segmentation benchmark.
Returns:
A colormap for visualizing segmentation results.
"""
colormap = np.asarray([
[0,0,0],
[120, 120, 120],
[180, 120, 120],
[6, 230, 230],
[80, 50, 50],
[4, 200, 3],
[120, 120, 80],
[140, 140, 140],
[204, 5, 255],
[230, 230, 230],
[4, 250, 7],
[224, 5, 255],
[235, 255, 7],
[150, 5, 61],
[120, 120, 70],
[8, 255, 51],
[255, 6, 82],
[143, 255, 140],
[204, 255, 4],
[255, 51, 7],
[204, 70, 3],
[0, 102, 200],
[61, 230, 250],
[255, 6, 51],
[11, 102, 255],
[255, 7, 71],
[255, 9, 224],
[9, 7, 230],
[220, 220, 220],
[255, 9, 92],
[112, 9, 255],
[8, 255, 214],
[7, 255, 224],
[255, 184, 6],
[10, 255, 71],
[255, 41, 10],
[7, 255, 255],
[224, 255, 8],
[102, 8, 255],
[255, 61, 6],
[255, 194, 7],
[255, 122, 8],
[0, 255, 20],
[255, 8, 41],
[255, 5, 153],
[6, 51, 255],
[235, 12, 255],
[160, 150, 20],
[0, 163, 255],
[140, 140, 140],
[250, 10, 15],
[20, 255, 0],
[31, 255, 0],
[255, 31, 0],
[255, 224, 0],
[153, 255, 0],
[0, 0, 255],
[255, 71, 0],
[0, 235, 255],
[0, 173, 255],
[31, 0, 255],
[11, 200, 200],
[255, 82, 0],
[0, 255, 245],
[0, 61, 255],
[0, 255, 112],
[0, 255, 133],
[255, 0, 0],
[255, 163, 0],
[255, 102, 0],
[194, 255, 0],
[0, 143, 255],
[51, 255, 0],
[0, 82, 255],
[0, 255, 41],
[0, 255, 173],
[10, 0, 255],
[173, 255, 0],
[0, 255, 153],
[255, 92, 0],
[255, 0, 255],
[255, 0, 245],
[255, 0, 102],
[255, 173, 0],
[255, 0, 20],
[255, 184, 184],
[0, 31, 255],
[0, 255, 61],
[0, 71, 255],
[255, 0, 204],
[0, 255, 194],
[0, 255, 82],
[0, 10, 255],
[0, 112, 255],
[51, 0, 255],
[0, 194, 255],
[0, 122, 255],
[0, 255, 163],
[255, 153, 0],
[0, 255, 10],
[255, 112, 0],
[143, 255, 0],
[82, 0, 255],
[163, 255, 0],
[255, 235, 0],
[8, 184, 170],
[133, 0, 255],
[0, 255, 92],
[184, 0, 255],
[255, 0, 31],
[0, 184, 255],
[0, 214, 255],
[255, 0, 112],
[92, 255, 0],
[0, 224, 255],
[112, 224, 255],
[70, 184, 160],
[163, 0, 255],
[153, 0, 255],
[71, 255, 0],
[255, 0, 163],
[255, 204, 0],
[255, 0, 143],
[0, 255, 235],
[133, 255, 0],
[255, 0, 235],
[245, 0, 255],
[255, 0, 122],
[255, 245, 0],
[10, 190, 212],
[214, 255, 0],
[0, 204, 255],
[20, 0, 255],
[255, 255, 0],
[0, 153, 255],
[0, 41, 255],
[0, 255, 204],
[41, 0, 255],
[41, 255, 0],
[173, 0, 255],
[0, 245, 255],
[71, 0, 255],
[122, 0, 255],
[0, 255, 184],
[0, 92, 255],
[184, 255, 0],
[0, 133, 255],
[255, 214, 0],
[25, 194, 194],
[102, 255, 0],
[92, 0, 255],
])
return colormap
def label_to_color_image(label):
"""Adds color defined by the dataset colormap to the label.
Args:
label: A 2D array with integer type, storing the segmentation label.
Returns:
result: A 2D array with floating type. The element of the array
is the color indexed by the corresponding element in the input label
to the ADE20K color map.
Raises:
ValueError: If label is not of rank 2 or its value is larger than color
map maximum entry.
"""
if label.ndim != 2:
raise ValueError('Expect 2-D input label')
colormap = create_ade20k_label_colormap()
if np.max(label) >= len(colormap):
raise ValueError('label value too large.')
return colormap[label]
def vis_segmentation(image, seg_map):
"""Visualizes input image, segmentation map and overlay view."""
plt.figure(figsize=(15, 5))
grid_spec = gridspec.GridSpec(1, 4, width_ratios=[6, 6, 6, 1])
plt.subplot(grid_spec[0])
plt.imshow(image)
plt.axis('off')
plt.title('input image')
plt.subplot(grid_spec[1])
seg_image = label_to_color_image(seg_map).astype(np.uint8)
plt.imshow(seg_image)
plt.axis('off')
plt.title('segmentation map')
plt.subplot(grid_spec[2])
plt.imshow(image)
plt.imshow(seg_image, alpha=0.7)
plt.axis('off')
plt.title('segmentation overlay')
unique_labels = np.unique(seg_map)
ax = plt.subplot(grid_spec[3])
plt.imshow(
FULL_COLOR_MAP[unique_labels].astype(np.uint8), interpolation='nearest')
ax.yaxis.tick_right()
plt.yticks(range(len(unique_labels)), LABEL_NAMES[unique_labels])
plt.xticks([], [])
ax.tick_params(width=0.0)
plt.grid('off')
plt.show()
LABEL_NAMES = np.asarray([
'ignore', 'wall', 'building', 'sky', 'floor', 'tree', 'ceiling', 'road',
'bed', 'window', 'grass', 'cabinet', 'sidewalk', 'person', 'ground', 'door',
'table', 'mount', 'plant', 'curtain', 'chair', 'car', 'water', 'picture',
'couch', 'shelf', 'house', 'sea', 'mirror', 'rug', 'field', 'armchair',
'seat', 'fence', 'desk', 'rock', 'clothes', 'lamp', 'bath', 'rail', 'cushion',
'stand', 'box', 'pillar', 'signboard', 'drawers', 'counter', 'sand', 'sink',
'skyscraper', 'fireplace', 'refrigerator', 'cupboard', 'path', 'steps',
'runway', 'case', 'pool', 'pillow', 'screen', 'stairway', 'river',
'bridge', 'bookcase', 'blinds', 'coffeeTable', 'toilet', 'flower', 'book',
    'hill', 'bench', 'countertop', 'kitchenStove', 'tree', 'kitchen',
'computingMachine', 'chair', 'boat', 'bar', 'machine', 'hut', 'bus',
'towel', 'light', 'truck', 'tower', 'chandelier', 'awning', 'streetlight',
'booth', 'displayMonitor', 'airplane', 'dirtTrack', 'apparel', 'pole',
'ground', 'handrail', 'escalator', 'ottoman', 'bottle', 'counter', 'poster',
'stage', 'van', 'ship', 'fountain', 'conveyor', 'canopy', 'washer', 'toy',
'swimmingPool', 'stool', 'barrel', 'basket', 'waterfall', 'tent', 'bag',
'bike', 'cradle', 'oven', 'ball', 'food', 'step', 'container', 'brandLogo',
'oven', 'pot', 'animal', 'bicycle', 'lake', 'dishwasher', 'projectorScreen',
'blanket', 'statue', 'hood', 'sconce', 'vase', 'trafficLight', 'tray',
'GarbageBin', 'fan', 'dock', 'computerMonitor', 'plate', 'monitoringDevice',
'bulletinBoard', 'shower', 'radiator', 'drinkingGlass', 'clock', 'flag'
])
FULL_LABEL_MAP = np.arange(len(LABEL_NAMES)).reshape(len(LABEL_NAMES), 1)
FULL_COLOR_MAP = label_to_color_image(FULL_LABEL_MAP)
```
## Select a pretrained model
We have trained the DeepLab model using various backbone networks. Select one from the MODEL_NAME list.
```
MODEL_NAME = 'mobilenetv2_ade20k_train' # @param ['mobilenetv2_ade20k_train', 'xception65_ade20k_train']
_DOWNLOAD_URL_PREFIX = 'http://download.tensorflow.org/models/'
_MODEL_URLS = {
'mobilenetv2_ade20k_train':
'deeplabv3_mnv2_ade20k_train_2018_12_03.tar.gz',
'xception65_ade20k_train':
'deeplabv3_xception_ade20k_train_2018_05_29.tar.gz',
}
_TARBALL_NAME = 'deeplab_model.tar.gz'
model_dir = tempfile.mkdtemp()
tf.gfile.MakeDirs(model_dir)
download_path = os.path.join(model_dir, _TARBALL_NAME)
print('downloading model, this might take a while...')
urllib.request.urlretrieve(_DOWNLOAD_URL_PREFIX + _MODEL_URLS[MODEL_NAME],
download_path)
print('download completed! loading DeepLab model...')
MODEL = DeepLabModel(download_path)
# Reduce image size if mobilenet model
if "mobilenetv2" in MODEL_NAME:
MODEL.INPUT_SIZE = 257
print('model loaded successfully!')
```
## Run on sample images
Select one of the sample images (leave `IMAGE_URL` empty) or feed any internet image URL for inference.
Note that this colab uses single scale inference for fast computation,
so the results may slightly differ from the visualizations in the
[README](https://github.com/tensorflow/models/blob/master/research/deeplab/README.md) file,
which uses multi-scale and left-right flipped inputs.
```
SAMPLE_IMAGE = 'image1' # @param ['image1', 'image2', 'image3']
IMAGE_URL = '' #@param {type:"string"}
_SAMPLE_URL = ('https://github.com/tensorflow/models/blob/master/research/'
'deeplab/g3doc/img/%s.jpg?raw=true')
def run_visualization(url):
"""Inferences DeepLab model and visualizes result."""
try:
f = urllib.request.urlopen(url)
jpeg_str = f.read()
original_im = Image.open(BytesIO(jpeg_str))
except IOError:
print('Cannot retrieve image. Please check url: ' + url)
return
print('running deeplab on image %s...' % url)
resized_im, seg_map = MODEL.run(original_im)
vis_segmentation(resized_im, seg_map)
image_url = IMAGE_URL or _SAMPLE_URL % SAMPLE_IMAGE
run_visualization(image_url)
```
## What's next
* Learn about [Cloud TPUs](https://cloud.google.com/tpu/docs) that Google designed and optimized specifically to speed up and scale up ML workloads for training and inference and to enable ML engineers and researchers to iterate more quickly.
* Explore the range of [Cloud TPU tutorials and Colabs](https://cloud.google.com/tpu/docs/tutorials) to find other examples that can be used when implementing your ML project.
* For more information on running the DeepLab model on Cloud TPUs, see the [DeepLab tutorial](https://cloud.google.com/tpu/docs/tutorials/deeplab).
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import pymc3 as pm
import numpy.random as npr
%load_ext autoreload
%autoreload 2
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
```
# Introduction
Let's say there are three bacteria species that characterize the gut, and we hypothesize that their proportions are slightly shifted between healthy and sick guts, but we don't know by how much (i.e. pretend we don't know the data-generating distributions below). Can we figure out the proportion parameters and their uncertainty?
# Generate Synthetic Data
In the synthetic dataset generated below, we pretend that every patient is one sample, and we are recording the number of sequencing reads corresponding to some OTUs (bacteria). Each row is one sample (patient), and each column is one OTU (bacterial species).
## Proportions
Firstly, let's generate the ground truth proportions that we will infer later on.
```
def proportion(arr):
arr = np.asarray(arr)
return arr / arr.sum()
healthy_proportions = proportion([10, 16, 2])
healthy_proportions
sick_proportions = proportion([10, 27, 15])
sick_proportions
```
## Data
Now, given the proportions, let's generate data. Here, we are assuming that there are 10 patients per cohort (10 sick patients and 10 healthy patients), and that the number of counts in total is 50.
```
n_data_points = 10
def make_healthy_multinomial(arr):
n_sequencing_reads = 50 # npr.poisson(lam=50)
return npr.multinomial(n_sequencing_reads, healthy_proportions)
def make_sick_multinomial(arr):
n_sequencing_reads = 50 # npr.poisson(lam=50)
return npr.multinomial(n_sequencing_reads, sick_proportions)
# Generate healthy data
healthy_reads = np.zeros((n_data_points, 3))
healthy_reads = np.apply_along_axis(make_healthy_multinomial, axis=1, arr=healthy_reads)
# Generate sick reads
sick_reads = np.zeros((n_data_points, 3))
sick_reads = np.apply_along_axis(make_sick_multinomial, axis=1, arr=sick_reads)
# Make pandas dataframe
healthy_df = pd.DataFrame(healthy_reads)
healthy_df.columns = ['bacteria1', 'bacteria2', 'bacteria3']
healthy_df = pm.floatX(healthy_df)
sick_df = pd.DataFrame(sick_reads)
sick_df.columns = ['bacteria1', 'bacteria2', 'bacteria3']
sick_df = pm.floatX(sick_df)
healthy_df.dtypes
sick_df.dtypes
```
# Model Construction
Here's an implementation of the model - Dirichlet prior with Multinomial likelihood.
There are 3 classes of bacteria, so the Dirichlet distribution serves as the prior probability mass over each of the classes in the multinomial distribution.
The multinomial distribution serves as the likelihood function.
```
with pm.Model() as dirichlet_model:
proportions_healthy = pm.Dirichlet('proportions_healthy',
a=np.array([1.0] * 3).astype('float32'),
shape=(3,), testval=[0.1, 0.1, 0.1])
proportions_sick = pm.Dirichlet('proportions_sick',
a=np.array([1.0] * 3).astype('float32'),
shape=(3,), testval=[0.1, 0.1, 0.1])
healthy_like = pm.Multinomial('like_healthy',
n=50,
p=proportions_healthy,
observed=healthy_df.values)
sick_like = pm.Multinomial('like_sick',
n=50,
p=proportions_sick,
observed=sick_df.values)
```
## Sampling
```
with dirichlet_model:
dirichlet_trace = pm.sample(draws=10000, start=pm.find_MAP(), step=pm.Metropolis())
pm.traceplot(dirichlet_trace)
```
# Results
```
pm.forestplot(dirichlet_trace, ylabels=['healthy_bacteria1',
'healthy_bacteria2',
'healthy_bacteria3',
'sick_bacteria1',
'sick_bacteria2',
'sick_bacteria3'])
healthy_proportions, sick_proportions
```
They match up with the original synthetic proportions!
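One way to make this comparison explicit is sketched below; it is not part of the original workflow, but it only uses objects already defined above (the trace and the ground-truth proportion arrays).
```
# Posterior mean of each proportion vector, averaged over all Metropolis samples.
post_healthy = dirichlet_trace['proportions_healthy'].mean(axis=0)
post_sick = dirichlet_trace['proportions_sick'].mean(axis=0)

print('healthy: true =', healthy_proportions, 'posterior mean =', post_healthy)
print('sick:    true =', sick_proportions, 'posterior mean =', post_sick)
```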
|
github_jupyter
|
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import pymc3 as pm
import numpy.random as npr
%load_ext autoreload
%autoreload 2
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
def proportion(arr):
arr = np.asarray(arr)
return arr / arr.sum()
healthy_proportions = proportion([10, 16, 2])
healthy_proportions
sick_proportions = proportion([10, 27, 15])
sick_proportions
n_data_points = 10
def make_healthy_multinomial(arr):
n_sequencing_reads = 50 # npr.poisson(lam=50)
return npr.multinomial(n_sequencing_reads, healthy_proportions)
def make_sick_multinomial(arr):
n_sequencing_reads = 50 # npr.poisson(lam=50)
return npr.multinomial(n_sequencing_reads, sick_proportions)
# Generate healthy data
healthy_reads = np.zeros((n_data_points, 3))
healthy_reads = np.apply_along_axis(make_healthy_multinomial, axis=1, arr=healthy_reads)
# Generate sick reads
sick_reads = np.zeros((n_data_points, 3))
sick_reads = np.apply_along_axis(make_sick_multinomial, axis=1, arr=sick_reads)
# Make pandas dataframe
healthy_df = pd.DataFrame(healthy_reads)
healthy_df.columns = ['bacteria1', 'bacteria2', 'bacteria3']
healthy_df = pm.floatX(healthy_df)
sick_df = pd.DataFrame(sick_reads)
sick_df.columns = ['bacteria1', 'bacteria2', 'bacteria3']
sick_df = pm.floatX(sick_df)
healthy_df.dtypes
sick_df.dtypes
with pm.Model() as dirichlet_model:
proportions_healthy = pm.Dirichlet('proportions_healthy',
a=np.array([1.0] * 3).astype('float32'),
shape=(3,), testval=[0.1, 0.1, 0.1])
proportions_sick = pm.Dirichlet('proportions_sick',
a=np.array([1.0] * 3).astype('float32'),
shape=(3,), testval=[0.1, 0.1, 0.1])
healthy_like = pm.Multinomial('like_healthy',
n=50,
p=proportions_healthy,
observed=healthy_df.values)
sick_like = pm.Multinomial('like_sick',
n=50,
p=proportions_sick,
observed=sick_df.values)
with dirichlet_model:
dirichlet_trace = pm.sample(draws=10000, start=pm.find_MAP(), step=pm.Metropolis())
pm.traceplot(dirichlet_trace)
pm.forestplot(dirichlet_trace, ylabels=['healthy_bacteria1',
'healthy_bacteria2',
'healthy_bacteria3',
'sick_bacteria1',
'sick_bacteria2',
'sick_bacteria3'])
healthy_proportions, sick_proportions
| 0.486332 | 0.956796 |
# Exampville Mode Choice
Discrete choice modeling is at the heart of many transportation planning models.
In this example, we will examine the development of a mode choice model for
Exampville, an entirely fictional town built for the express purpose of
demonstrating the use of discrete choice modeling tools for transportation
planning.
In this notebook, we will walk through the creation of a tour mode choice model.
This example will assume the reader is familiar with the mathematical basics of discrete choice
modeling generally, and will focus on the technical aspects of estimating the parameters
of a discrete choice model in Python using [Larch](https://larch.newman.me).
```
import larch, numpy, pandas, os
```
To begin, we'll load some raw data. The Exampville data
contains a set of files similar to what we might find for
a real travel survey: network skims, and tables of households,
persons, and tours.
```
import larch.exampville
```
The skims data is a file in [openmatrix](https://github.com/osPlanning/omx/wiki) format, which contains a
series of two-dimensional arrays describing zone-to-zone transportation
level-of-service attributes. We can see below that this file contains
data on travel times and costs by auto, transit, biking, and walking, for
travel to and from each of 40 travel analysis zones in Exampville.
Ideally, these skim values represent the "observed" travel times and costs
for trips between each pair of zones, but generally these matrices are
approximations of the real values, generated by a base-case transportation
model.
```
skims = larch.OMX( larch.exampville.files.skims, mode='r' )
skims
```
The other files are simple `csv` text files, containing row-wise
data about households, persons, and tours, as might be contained in survey
results from a household travel survey conducted in Exampville.
```
hh = pandas.read_csv( larch.exampville.files.hh )
pp = pandas.read_csv( larch.exampville.files.person )
tour = pandas.read_csv( larch.exampville.files.tour )
```
Let's check out what's in each table.
```
hh.info()
pp.info()
tour.info()
```
## Preprocessing
To use these data tables for mode choice modeling, we'll need to
filter them so they only include relevant rows, and merge
them into a unified composite dataset.
### Filtering
Mode choice models are often created separately for each tour purpose.
We can review the purposes contained in our data by using the `statistics`
method, which Larch adds to pandas.Series objects:
```
tour.TOURPURP.statistics()
```
As we can see above, in Exampville, there are only two purposes for tours. These
purposes are defined as:
- work (purpose=1) and
- non-work (purpose=2).
We want to first estimate a mode choice model for work tours,
so we’ll begin by creating a working dataframe,
filtering the tours data to exclude non-work tours:
```
df = tour[tour.TOURPURP == 1]
df.info()
```
### Merging
We can then merge data from the three survey tables using the usual
`pandas` syntax for merging.
```
df = df.merge(hh, on='HHID').merge(pp, on=('HHID', 'PERSONID'))
```
Merging the skims data is more complicated, as we want to select not only
the correct row, but also the correct column for each observation. We could do this
by transforming the skims data in such a way that every origin-destination
pair is on its own row, but for very large zone systems this can be
inefficient. Larch provides a more efficient method to directly extract
a DataFrame with the right information, based on the two-dimensional structure
of the skims.
Our zone numbering system starts with zone 1, as is common for many TAZ numbering
systems seen in practice. But, for looking up data in the skims matrix using Larch,
we'll need to use zero-based numbering that is standard in Python. So we'll create two new
TAZ-index columns to assist this process.
```
df["HOMETAZi"] = df["HOMETAZ"] - 1
df["DTAZi"] = df["DTAZ"] - 1
```
For this tour mode choice model, we can pick values
out of the skims for the known O-D of the tour, so
that we have access to the level-of-service data for
each possible mode serving that O-D pair. We'll
use the `get_rc_dataframe` method of the `larch.OMX`
object, which lets us give a list of the indexes for the
production and attraction (row and column) values we
want to use.
```
los_data = skims.get_rc_dataframe(
df["HOMETAZi"], df["DTAZi"],
)
los_data
df = df.join(los_data)
```
We can review the `df` frame to see what variables are now included.
```
df.info()
```
```
# For clarity, we can define numbers as names for modes
DA = 1
SR = 2
Walk = 3
Bike = 4
Transit = 5
dfs = larch.DataFrames(
co=df,
alt_codes=[DA,SR,Walk,Bike,Transit],
alt_names=['DA','SR','Walk','Bike','Transit'],
ch_name='TOURMODE',
)
```
## Model Definition
Now we are ready to create our model. We'll create a `larch.Model` object
to do so, and link it to the data we just created.
```
m = larch.Model(dataservice=dfs)
m.title = "Exampville Work Tour Mode Choice v1"
```
We will explicitly define the set of utility functions
we want to use. Because the DataFrames we are using to
serve data to this model contains exclusively `idco` format
data, we'll use the `utility_co` mapping, which allows us
to define a unique utility function for each alternative.
Each utility function must be expressed as a linear-in-parameters
function to combine raw or pre-computed data values with
named model parameters. To facilitate writing these functions,
Larch provides two special classes: parameter references (`P`) and
data references (`X`).
```
from larch import P, X
```
Parameter and data references can be defined using either a function-like notation,
or an attribute-like notation.
```
P('NamedParameter')
X.NamedDataValue
```
In either case, if the named value contains any spaces or non-alphanumeric characters,
it must be given in function-like notation only, as Python will not accept
those characters in the attribute-like form.
```
P('Named Parameter')
```
Data references can name an exact column that appears in the `DataFrames` as
defined above, or can include simple transformations of that data, so long
as these transformations can be done without regard to any estimated parameter.
For example, we can use the log of income:
```
X("log(INCOME)")
```
To write a linear-in-parameters utility function, we simply multiply together
a parameter reference and a data reference, and then optionally add that
to one or more similar terms.
```
P.InVehTime * X.AUTO_TIME + P.Cost * X.AUTO_COST
```
It is permissible to omit the data reference on a term
(in which case it is implicitly set to 1.0).
```
P.ASC + P.InVehTime * X.AUTO_TIME + P.Cost * X.AUTO_COST
```
We can then combine these to write utility functions for each
alternative in the Exampville data:
```
m.utility_co[DA] = (
+ P.InVehTime * X.AUTO_TIME
+ P.Cost * X.AUTO_COST # dollars per mile
)
m.utility_co[SR] = (
+ P.ASC_SR
+ P.InVehTime * X.AUTO_TIME
+ P.Cost * (X.AUTO_COST * 0.5) # dollars per mile, half share
+ P("LogIncome:SR") * X("log(INCOME)")
)
m.utility_co[Walk] = (
+ P.ASC_Walk
+ P.NonMotorTime * X.WALK_TIME
+ P("LogIncome:Walk") * X("log(INCOME)")
)
m.utility_co[Bike] = (
+ P.ASC_Bike
+ P.NonMotorTime * X.BIKE_TIME
+ P("LogIncome:Bike") * X("log(INCOME)")
)
m.utility_co[Transit] = (
+ P.ASC_Transit
+ P.InVehTime * X.TRANSIT_IVTT
+ P.OutVehTime * X.TRANSIT_OVTT
+ P.Cost * X.TRANSIT_FARE
+ P("LogIncome:Transit") * X('log(INCOME)')
)
```
To write a nested logit model, we'll attach some nesting nodes to the
model's `graph`. Each `new_node` allows us to define the set of
codes for the child nodes (elemental alternatives, or lower level nests)
as well as giving the new nest a name and assigning a logsum parameter.
The return value of this method is the node code for the newly created
nest, which can then potentially be used as a child code when creating
a higher level nest. We do this below, adding the 'Car' nest into the
'Motor' nest.
```
Car = m.graph.new_node(parameter='Mu:Car', children=[DA,SR], name='Car')
NonMotor = m.graph.new_node(parameter='Mu:NonMotor', children=[Walk,Bike], name='NonMotor')
Motor = m.graph.new_node(parameter='Mu:Motor', children=[Car,Transit], name='Motor')
```
Let's visually check on the nesting structure.
```
m.graph
```
The tour mode choice model's choice variable is indicated by
the code value in 'TOURMODE', and this can be
defined for the model using `choice_co_code`.
```
m.choice_co_code = 'TOURMODE'
```
We can also give a dictionary of availability conditions based
on values in the `idco` data, using the `availability_co_vars`
attribute. Alternatives that are always available can be indicated
by setting the criterion to 1. For alternatives that are only sometimes
available, we can give an availability criteria in the same manner as
for a data reference described above: either by giving the name of
a variable in the data, or an expression that can be evaluated using
the data alone. In the case of availability criteria, these will be
transformed to boolean (true/false) values, so data that evaluates as
0 will be unavailable, and data that evaluates as non-zero will be
available (including, perhaps counterintuitively, negative numbers).
```
m.availability_co_vars = {
DA: 'AGE >= 16',
SR: 1,
Walk: 'WALK_TIME < 60',
Bike: 'BIKE_TIME < 60',
Transit: 'TRANSIT_FARE>0',
}
```
Then let's prepare this data for estimation. Even though the
data is already in memory, the `load_data` method is used to
pre-process the data, extracting the required values, pre-computing
the values of fixed expressions, and assembling the results into
contiguous arrays suitable for computing the log likelihood values
efficiently.
## Model Estimation
```
m.load_data()
```
We can check on some important statistics of this loaded and preprocessed data even
before we estimate the model.
```
m.dataframes.choice_avail_summary()
m.dataframes.data_co.statistics()
```
If we are satisfied with the statistics we see above, we
can go ahead and estimate the model. Estimation is done
using maximum likelihood techniques, relying on the `scipy.optimize`
library for providing a variety of algorithms for solving
this non-linear optimization problem.
For nested logit models, the 'SLSQP' method often works well.
```
result = m.maximize_loglike(method='slsqp')
```
After we find the best fitting parameters, we can compute
some variance-covariance statistics, including standard errors of
the estimates and t statistics, using `calculate_parameter_covariance`.
```
m.calculate_parameter_covariance()
```
Then we can review the results in a variety of report tables.
```
m.parameter_summary()
m.estimation_statistics()
```
## Save and Report Model
If we are satisfied with this model, or if we just want to record
it as part of our workflow while exploring different model
structures, we can write the model out to a report. To do so,
we can instantiate a `larch.Reporter` object.
```
report = larch.Reporter(title=m.title)
```
Then, we can push section headings and report pieces into the
report using the "<<" operator.
```
report << '# Parameter Summary' << m.parameter_summary()
report << "# Estimation Statistics" << m.estimation_statistics()
report << "# Utility Functions" << m.utility_functions()
```
Once we have assembled the report, we can save the file to
disk as an HTML report containing the content previously
assembled. Attaching the model itself to the report as
metadata can be done within the `save` method, which will
allow us to directly reload the same model again later.
```
report.save(
'/tmp/exampville_mode_choice.html',
overwrite=True,
metadata=m,
)
```
Note: if you get a `FileNotFound` error when saving, make sure that
you are saving the file into a directory that exists. The example
here should work fine on macOS or Linux, but the `/tmp` directory
does not exist by default on Windows.
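One possible workaround, sketched below under the assumption that writing to a temporary directory is acceptable (the `exampville_reports` folder name is just an illustration), is to build the output path with the standard library and create the directory before saving:
```
import os
import tempfile

# Build an output directory that exists on Windows, macOS and Linux.
out_dir = os.path.join(tempfile.gettempdir(), 'exampville_reports')  # illustrative folder name
os.makedirs(out_dir, exist_ok=True)

report.save(
    os.path.join(out_dir, 'exampville_mode_choice.html'),
    overwrite=True,
    metadata=m,
)
```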
You can also save a model as an Excel file, which will automatically
include several worksheets summarizing the parameters, data, utility
functions, and other features of the model.
```
import larch.util.excel
m.to_xlsx("/tmp/exampville_mode_choice.xlsx")
```
|
github_jupyter
|
import larch, numpy, pandas, os
import larch.exampville
skims = larch.OMX( larch.exampville.files.skims, mode='r' )
skims
hh = pandas.read_csv( larch.exampville.files.hh )
pp = pandas.read_csv( larch.exampville.files.person )
tour = pandas.read_csv( larch.exampville.files.tour )
hh.info()
pp.info()
tour.info()
tour.TOURPURP.statistics()
df = tour[tour.TOURPURP == 1]
df.info()
df = df.merge(hh, on='HHID').merge(pp, on=('HHID', 'PERSONID'))
df["HOMETAZi"] = df["HOMETAZ"] - 1
df["DTAZi"] = df["DTAZ"] - 1
los_data = skims.get_rc_dataframe(
df["HOMETAZi"], df["DTAZi"],
)
los_data
df = df.join(los_data)
df.info()
# For clarity, we can define numbers as names for modes
DA = 1
SR = 2
Walk = 3
Bike = 4
Transit = 5
dfs = larch.DataFrames(
co=df,
alt_codes=[DA,SR,Walk,Bike,Transit],
alt_names=['DA','SR','Walk','Bike','Transit'],
ch_name='TOURMODE',
)
m = larch.Model(dataservice=dfs)
m.title = "Exampville Work Tour Mode Choice v1"
from larch import P, X
P('NamedParameter')
X.NamedDataValue
P('Named Parameter')
X("log(INCOME)")
P.InVehTime * X.AUTO_TIME + P.Cost * X.AUTO_COST
P.ASC + P.InVehTime * X.AUTO_TIME + P.Cost * X.AUTO_COST
m.utility_co[DA] = (
+ P.InVehTime * X.AUTO_TIME
+ P.Cost * X.AUTO_COST # dollars per mile
)
m.utility_co[SR] = (
+ P.ASC_SR
+ P.InVehTime * X.AUTO_TIME
+ P.Cost * (X.AUTO_COST * 0.5) # dollars per mile, half share
+ P("LogIncome:SR") * X("log(INCOME)")
)
m.utility_co[Walk] = (
+ P.ASC_Walk
+ P.NonMotorTime * X.WALK_TIME
+ P("LogIncome:Walk") * X("log(INCOME)")
)
m.utility_co[Bike] = (
+ P.ASC_Bike
+ P.NonMotorTime * X.BIKE_TIME
+ P("LogIncome:Bike") * X("log(INCOME)")
)
m.utility_co[Transit] = (
+ P.ASC_Transit
+ P.InVehTime * X.TRANSIT_IVTT
+ P.OutVehTime * X.TRANSIT_OVTT
+ P.Cost * X.TRANSIT_FARE
+ P("LogIncome:Transit") * X('log(INCOME)')
)
Car = m.graph.new_node(parameter='Mu:Car', children=[DA,SR], name='Car')
NonMotor = m.graph.new_node(parameter='Mu:NonMotor', children=[Walk,Bike], name='NonMotor')
Motor = m.graph.new_node(parameter='Mu:Motor', children=[Car,Transit], name='Motor')
m.graph
m.choice_co_code = 'TOURMODE'
m.availability_co_vars = {
DA: 'AGE >= 16',
SR: 1,
Walk: 'WALK_TIME < 60',
Bike: 'BIKE_TIME < 60',
Transit: 'TRANSIT_FARE>0',
}
m.load_data()
m.dataframes.choice_avail_summary()
m.dataframes.data_co.statistics()
result = m.maximize_loglike(method='slsqp')
m.calculate_parameter_covariance()
m.parameter_summary()
m.estimation_statistics()
report = larch.Reporter(title=m.title)
report << '# Parameter Summary' << m.parameter_summary()
report << "# Estimation Statistics" << m.estimation_statistics()
report << "# Utility Functions" << m.utility_functions()
report.save(
'/tmp/exampville_mode_choice.html',
overwrite=True,
metadata=m,
)
import larch.util.excel
m.to_xlsx("/tmp/exampville_mode_choice.xlsx")
| 0.338186 | 0.989662 |
### Analyze_rotated_stable_points - evaluate bias in rotated DEMs using selected unchanged points
These points were picked on features that are hopefully stable, in mostly flat places: docks, lawns, bare spots in middens, and the yurt roofs. Typically, 3 to 5 points were picked on most features.
```
import pandas as pd
import numpy as np
import xarray as xr
import matplotlib.pyplot as plt
%matplotlib inline
# define some functions
def pcoord(x, y):
"""
Convert x, y to polar coordinates r, az (geographic convention)
r,az = pcoord(x, y)
"""
r = np.sqrt( x**2 + y**2 )
az=np.degrees( np.arctan2(x, y) )
# az[where(az<0.)[0]] += 360.
az = (az+360.)%360.
return r, az
def xycoord(r, az):
"""
Convert r, az [degrees, geographic convention] to rectangular coordinates
x,y = xycoord(r, az)
"""
x = r * np.sin(np.radians(az))
y = r * np.cos(np.radians(az))
return x, y
def box2UTMh(x, y, x0, y0, theta):
'''
2D rotation and translation of x, y
Input:
x, y - row vectors of original coordinates (must be same size)
x0, y0 - Offset (location of x, y = (0,0) in new coordinate system)
    theta - Angle of rotation (degrees, CCW from x-axis == Cartesian coordinates)
Returns:
xr, yr - rotated, offset coordinates
'''
thetar = np.radians(theta)
c, s = np.cos(thetar), np.sin(thetar)
# homogenous rotation matrix
Rh = np.array(((c, -s, 0.),\
(s, c, 0.),\
(0., 0., 1.)))
# homogenous translation matrix
Th = np.array(((1., 0., x0),\
(0., 1., y0),\
(0., 0., 1.)))
# homogenous input x,y
xyh = np.vstack((x,y,np.ones_like(x)))
# perform rotation and translation
xyrh=np.matmul(np.matmul(Th,Rh),xyh)
xr = xyrh[0,:]
yr = xyrh[1,:]
return xr, yr
# coordinates for the whole-island box.
# old version
r = {'name':"ncorebx_v6","e0": 378500.,"n0": 3856350.,"xlen": 36000.,"ylen": 1100.,"dxdy": 1.,"theta": 42.}
# new, enlarged version
r = {'name':"ncorebx_v6","e0": 378489.457,"n0": 3855740.501,"xlen": 36650.,"ylen": 1500.,"dxdy": 1.,"theta": 42.}
# Convert origin to UTM
xu,yu = box2UTMh(0.,0.,r['e0'],r['n0'],r['theta'])
print(xu,yu)
# reverse the calc to find the origin (UTM =0,0) in box coordinates.
# First, just do the rotation to see where Box = 0,0 falls
xb0,yb0 = box2UTMh(xu,yu,0.,0.,-r['theta'])
print(xb0,yb0)
# Then put in negative values for the offset
xb,yb = box2UTMh(xu,yu,-xb0,-yb0,-r['theta'])
print(xb,yb)
# Read in the list of stable points. Elevations were picked from the DEMs with Global Mapper. Elevations for Sep are from
# the old _crop version of the DEM.... .nc version read in below is from the newer _v3 version.
df=pd.read_csv("C:\\crs\\proj\\2019_DorianOBX\\Santa_Cruz_Products\\stable_points\\All_points.csv",header = 0)
# convert UTM X, Y to rotated coords xrl, yrl
X = df["X"].values
Y = df["Y"].values
#TODO: why does this return a list of arrays?
xrl,yrl = box2UTMh(X,Y,-xb0,-yb0,-r['theta'])
# this fixes it...probably should fix box2UTMh
xrot = np.concatenate(xrl).ravel()
yrot = np.concatenate(yrl).ravel()
df["xr"] = xrot
df["yr"] = yrot
# read in the multi-map .nc file
# Dates for DEMs
dates = ([\
"Aug 30 2019",\
"Sep 12-13 2019",\
"Oct 11 2019",\
"Nov 26 2019",\
"Feb 8-9 2020",\
"May 8-9 2020",\
"Aug 2 2020",\
"Aug 5-9 2020",\
"Sep 28 2020"])
# Offsets for DEM elevation corrections
# These are the mean anomalies when this script is run with zero offsets
# Aug anom 0.022233
# Sep anom -0.002373
# Oct anom -0.004543
# Nov anom -0.015317
# Medians
# Aug anom 0.011046
# Sep anom -0.011883
# Oct anom -0.010752
# Nov anom -0.020816
# Std. dev
# Aug anom 0.052332
# Sep anom 0.059450
# Oct anom 0.050611
# Nov anom 0.069708
# Same values when the offset is applied
# Mean
# Aug anom 1.636533e-07
# Sep anom 4.703386e-08
# Oct anom 2.423299e-07
# Nov anom -4.530171e-07
# Median
# Aug anom -0.011187
# Sep anom -0.009510
# Oct anom -0.006209
# Nov anom -0.005499
# Std dev
# Aug anom 0.052332
# Sep anom 0.059450
# Oct anom 0.050611
# Nov anom 0.069708
# ALERT - change this line to offset final results or not
#offset = np.array([-0.022233, 0.002373, 0.004543, 0.015317])
offset = np.array([0., 0., 0., 0., 0., 0., 0., 0., 0])
fn = r['name']+'.nc'
dsa = xr.open_dataset(fn)
dsaa = np.squeeze(dsa.to_array())
nmaps,ncross,nalong=np.shape(dsaa)
print('nmaps, ncross, nalong: ',nmaps,ncross,nalong)
# Correct for uniform offsets
for i in range(0,nmaps):
dsaa[i,:,:] = dsaa[i,:,:] + offset[i]
# Use rotated coordinates as indices into the maps to get elevations.
ix = df["xr"].values.astype(np.int64)
iy = df["yr"].values.astype(np.int64)
nx = len(ix)
zr = np.ones((nx,nmaps))
for j in range(0, nmaps):
for i in range(0, nx):
zr[i,j] = dsaa[j,iy[i],ix[i]].values
zr[np.abs(zr)>10]=np.nan
anom = np.nan*np.ones_like(zr)
for i in range(0,nx):
anom[i,:]=zr[i,:]-np.mean(zr[i,:])
anom = anom[np.where(~np.isnan(anom))].reshape((nx-9,9))
print(np.shape(anom))
mean_anom = np.mean(anom,0)
std_anom = np.std(anom,0)
mean_anom
ixa = np.tile(ix,(nmaps,1)).T
for i in range(0,nx):
plt.scatter(ixa[i,:],anom[i,:],s=18,c=['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd', '#8c564b', '#e377c2', '#7f7f7f', '#bcbd22'])
sum(np.isnan(anom))
anom
# looks like three points are outside the domain. Drop them from the dataframe
dfc = df[df['zr Sep']>-10.].reset_index(drop=True)
# Calculate the average for each row for the four maps (don't average in the lidar)
col = dfc.loc[: , "zr Aug":"zr Nov"]
dfc['mean']=col.mean(axis=1)
# Calculate the anomaly for each point
dfc['gnd50 anom']=dfc['lidar_gnd50']-dfc['mean']
dfc['all90 anom']=dfc['lidar all90']-dfc['mean']
dfc['first50 anom']=dfc['lidar_first50']-dfc['mean']
dfc['Aug anom']=dfc['zr Aug']-dfc['mean']
dfc['Sep anom']=dfc['zr Sep']-dfc['mean']
dfc['Oct anom']=dfc['zr Oct']-dfc['mean']
dfc['Nov anom']=dfc['zr Nov']-dfc['mean']
df_anom = dfc.loc[:,"gnd50 anom":"Nov anom"].copy()
%run -i CoreBx_funcs
print(df_anom.mean())
print(df_anom.median())
print(df_anom.std())
stat_summary(df_anom['Aug anom'].values,iprint=True)
stat_summary(df_anom['Sep anom'].values,iprint=True)
stat_summary(df_anom['Oct anom'].values,iprint=True)
stat_summary(df_anom['Nov anom'].values,iprint=True)
plt.figure(figsize=(5,3))
plt.plot(dfc['xr'],dfc['Aug anom'],'o',alpha=.5,label='Aug')
plt.plot(dfc['xr'],dfc['Sep anom'],'o',alpha=.5,label='Sep')
plt.plot(dfc['xr'],dfc['Oct anom'],'o',alpha=.5,label='Oct')
plt.plot(dfc['xr'],dfc['Nov anom'],'o',alpha=.5,label='Nov')
plt.legend()
plt.ylabel('Anomaly (m)')
plt.xlabel('Alongshore Distance (m)')
plt.savefig('unchanged_pts_along_anom_adjusted_dems.png',dpi=200,bbox_inches = 'tight')
plt.figure(figsize=(3,3))
plt.plot(dfc['Aug'],dfc['Aug anom'],'o',alpha=.5,label='Aug')
plt.plot(dfc['Sep'],dfc['Sep anom'],'o',alpha=.5,label='Sep')
plt.plot(dfc['Oct'],dfc['Oct anom'],'o',alpha=.5,label='Oct')
plt.plot(dfc['Nov'],dfc['Nov anom'],'o',alpha=.5,label='Nov')
plt.legend()
plt.ylabel('Anomaly (m)')
plt.xlabel('Elevation (m NAVD88)')
plt.savefig('unchanged_pts_elev_anom_adjusted_dems.png',bbox_inches = 'tight',dpi=200)
# boxplot of anomalies
fig, ax =plt.subplots(figsize=(5,4))
boxprops = dict(linestyle='-', linewidth=3, color='k')
medianprops = dict(linestyle='-', linewidth=3, color='k')
bp=df_anom.boxplot(figsize=(6,5),grid=True,boxprops=boxprops, medianprops=medianprops)
plt.ylabel('Difference from Four-Map Mean (m)')
plt.ylim((-0.5,1.5))
plt.ylabel('Anomaly (m)')
ax.set_xticklabels(["Gnd50","All90","First50","Aug","Sep","Oct","Nov"])
plt.xlabel('Map')
plt.savefig('unchanged_pts_boxplot_adjusted_dems.png',dpi=200,bbox_inches = 'tight')
#plt.savefig('offset_corrected_pts_boxplot_adjusted_dems.png',dpi=200,bbox_inches = 'tight')
```
|
github_jupyter
|
import pandas as pd
import numpy as np
import xarray as xr
import matplotlib.pyplot as plt
%matplotlib inline
# define some functions
def pcoord(x, y):
"""
Convert x, y to polar coordinates r, az (geographic convention)
r,az = pcoord(x, y)
"""
r = np.sqrt( x**2 + y**2 )
az=np.degrees( np.arctan2(x, y) )
# az[where(az<0.)[0]] += 360.
az = (az+360.)%360.
return r, az
def xycoord(r, az):
"""
Convert r, az [degrees, geographic convention] to rectangular coordinates
x,y = xycoord(r, az)
"""
x = r * np.sin(np.radians(az))
y = r * np.cos(np.radians(az))
return x, y
def box2UTMh(x, y, x0, y0, theta):
'''
2D rotation and translation of x, y
Input:
x, y - row vectors of original coordinates (must be same size)
x0, y0 - Offset (location of x, y = (0,0) in new coordinate system)
    theta - Angle of rotation (degrees, CCW from x-axis == Cartesian coordinates)
Returns:
xr, yr - rotated, offset coordinates
'''
thetar = np.radians(theta)
c, s = np.cos(thetar), np.sin(thetar)
# homogenous rotation matrix
Rh = np.array(((c, -s, 0.),\
(s, c, 0.),\
(0., 0., 1.)))
# homogenous translation matrix
Th = np.array(((1., 0., x0),\
(0., 1., y0),\
(0., 0., 1.)))
# homogenous input x,y
xyh = np.vstack((x,y,np.ones_like(x)))
# perform rotation and translation
xyrh=np.matmul(np.matmul(Th,Rh),xyh)
xr = xyrh[0,:]
yr = xyrh[1,:]
return xr, yr
# coordinates for the whole-island box.
# old version
r = {'name':"ncorebx_v6","e0": 378500.,"n0": 3856350.,"xlen": 36000.,"ylen": 1100.,"dxdy": 1.,"theta": 42.}
# new, enlarged version
r = {'name':"ncorebx_v6","e0": 378489.457,"n0": 3855740.501,"xlen": 36650.,"ylen": 1500.,"dxdy": 1.,"theta": 42.}
# Convert origin to UTM
xu,yu = box2UTMh(0.,0.,r['e0'],r['n0'],r['theta'])
print(xu,yu)
# reverse the calc to find the origin (UTM =0,0) in box coordinates.
# First, just do the rotation to see where Box = 0,0 falls
xb0,yb0 = box2UTMh(xu,yu,0.,0.,-r['theta'])
print(xb0,yb0)
# Then put in negative values for the offset
xb,yb = box2UTMh(xu,yu,-xb0,-yb0,-r['theta'])
print(xb,yb)
# Read in the list of stable points. Elevations were picked from the DEMs with Global Mapper. Elevations for Sep are from
# the old _crop version of the DEM.... .nc version read in below is from the newer _v3 version.
df=pd.read_csv("C:\\crs\\proj\\2019_DorianOBX\\Santa_Cruz_Products\\stable_points\\All_points.csv",header = 0)
# convert UTM X, Y to rotated coords xrl, yrl
X = df["X"].values
Y = df["Y"].values
#TODO: why does this return a list of arrays?
xrl,yrl = box2UTMh(X,Y,-xb0,-yb0,-r['theta'])
# this fixes it...probably should fix box2UTMh
xrot = np.concatenate(xrl).ravel()
yrot = np.concatenate(yrl).ravel()
df["xr"] = xrot
df["yr"] = yrot
# read in the multi-map .nc file
# Dates for DEMs
dates = ([\
"Aug 30 2019",\
"Sep 12-13 2019",\
"Oct 11 2019",\
"Nov 26 2019",\
"Feb 8-9 2020",\
"May 8-9 2020",\
"Aug 2 2020",\
"Aug 5-9 2020",\
"Sep 28 2020"])
# Offsets for DEM elevation corrections
# These are the mean anomalies when this script is run with zero offsets
# Aug anom 0.022233
# Sep anom -0.002373
# Oct anom -0.004543
# Nov anom -0.015317
# Medians
# Aug anom 0.011046
# Sep anom -0.011883
# Oct anom -0.010752
# Nov anom -0.020816
# Std. dev
# Aug anom 0.052332
# Sep anom 0.059450
# Oct anom 0.050611
# Nov anom 0.069708
# Same values when the offset is applied
# Mean
# Aug anom 1.636533e-07
# Sep anom 4.703386e-08
# Oct anom 2.423299e-07
# Nov anom -4.530171e-07
# Median
# Aug anom -0.011187
# Sep anom -0.009510
# Oct anom -0.006209
# Nov anom -0.005499
# Std dev
# Aug anom 0.052332
# Sep anom 0.059450
# Oct anom 0.050611
# Nov anom 0.069708
# ALERT - change this line to offset final results or not
#offset = np.array([-0.022233, 0.002373, 0.004543, 0.015317])
offset = np.array([0., 0., 0., 0., 0., 0., 0., 0., 0])
fn = r['name']+'.nc'
dsa = xr.open_dataset(fn)
dsaa = np.squeeze(dsa.to_array())
nmaps,ncross,nalong=np.shape(dsaa)
print('nmaps, ncross, nalong: ',nmaps,ncross,nalong)
# Correct for uniform offsets
for i in range(0,nmaps):
dsaa[i,:,:] = dsaa[i,:,:] + offset[i]
# Use rotated coordinates as indices into the maps to get elevations.
ix = df["xr"].values.astype(np.int64)
iy = df["yr"].values.astype(np.int64)
nx = len(ix)
zr = np.ones((nx,nmaps))
for j in range(0, nmaps):
for i in range(0, nx):
zr[i,j] = dsaa[j,iy[i],ix[i]].values
zr[np.abs(zr)>10]=np.nan
anom = np.nan*np.ones_like(zr)
for i in range(0,nx):
anom[i,:]=zr[i,:]-np.mean(zr[i,:])
anom = anom[np.where(~np.isnan(anom))].reshape((nx-9,9))
print(np.shape(anom))
mean_anom = np.mean(anom,0)
std_anom = np.std(anom,0)
mean_anom
ixa = np.tile(ix,(nmaps,1)).T
for i in range(0,nx):
plt.scatter(ixa[i,:],anom[i,:],s=18,c=['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd', '#8c564b', '#e377c2', '#7f7f7f', '#bcbd22'])
sum(np.isnan(anom))
anom
# looks like three points are outside the domain. Drop them from the dataframe
dfc = df[df['zr Sep']>-10.].reset_index(drop=True)
# Calculate the average for each row for the four maps (don't average in the lidar)
col = dfc.loc[: , "zr Aug":"zr Nov"]
dfc['mean']=col.mean(axis=1)
# Calculate the anomaly for each point
dfc['gnd50 anom']=dfc['lidar_gnd50']-dfc['mean']
dfc['all90 anom']=dfc['lidar all90']-dfc['mean']
dfc['first50 anom']=dfc['lidar_first50']-dfc['mean']
dfc['Aug anom']=dfc['zr Aug']-dfc['mean']
dfc['Sep anom']=dfc['zr Sep']-dfc['mean']
dfc['Oct anom']=dfc['zr Oct']-dfc['mean']
dfc['Nov anom']=dfc['zr Nov']-dfc['mean']
df_anom = dfc.loc[:,"gnd50 anom":"Nov anom"].copy()
%run -i CoreBx_funcs
print(df_anom.mean())
print(df_anom.median())
print(df_anom.std())
stat_summary(df_anom['Aug anom'].values,iprint=True)
stat_summary(df_anom['Sep anom'].values,iprint=True)
stat_summary(df_anom['Oct anom'].values,iprint=True)
stat_summary(df_anom['Nov anom'].values,iprint=True)
plt.figure(figsize=(5,3))
plt.plot(dfc['xr'],dfc['Aug anom'],'o',alpha=.5,label='Aug')
plt.plot(dfc['xr'],dfc['Sep anom'],'o',alpha=.5,label='Sep')
plt.plot(dfc['xr'],dfc['Oct anom'],'o',alpha=.5,label='Oct')
plt.plot(dfc['xr'],dfc['Nov anom'],'o',alpha=.5,label='Nov')
plt.legend()
plt.ylabel('Anomaly (m)')
plt.xlabel('Alongshore Distance (m)')
plt.savefig('unchanged_pts_along_anom_adjusted_dems.png',dpi=200,bbox_inches = 'tight')
plt.figure(figsize=(3,3))
plt.plot(dfc['Aug'],dfc['Aug anom'],'o',alpha=.5,label='Aug')
plt.plot(dfc['Sep'],dfc['Sep anom'],'o',alpha=.5,label='Sep')
plt.plot(dfc['Oct'],dfc['Oct anom'],'o',alpha=.5,label='Oct')
plt.plot(dfc['Nov'],dfc['Nov anom'],'o',alpha=.5,label='Nov')
plt.legend()
plt.ylabel('Anomaly (m)')
plt.xlabel('Elevation (m NAVD88)')
plt.savefig('unchanged_pts_elev_anom_adjusted_dems.png',bbox_inches = 'tight',dpi=200)
# boxplot of anomalies
fig, ax =plt.subplots(figsize=(5,4))
boxprops = dict(linestyle='-', linewidth=3, color='k')
medianprops = dict(linestyle='-', linewidth=3, color='k')
bp=df_anom.boxplot(figsize=(6,5),grid=True,boxprops=boxprops, medianprops=medianprops)
plt.ylabel('Difference from Four-Map Mean (m)')
plt.ylim((-0.5,1.5))
plt.ylabel('Anomaly (m)')
ax.set_xticklabels(["Gnd50","All90","First50","Aug","Sep","Oct","Nov"])
plt.xlabel('Map')
plt.savefig('unchanged_pts_boxplot_adjusted_dems.png',dpi=200,bbox_inches = 'tight')
#plt.savefig('offset_corrected_pts_boxplot_adjusted_dems.png',dpi=200,bbox_inches = 'tight')
| 0.499512 | 0.894698 |
# Calculating NDVI: Part 2
This exercise follows on from the previous section. In the [previous part of this exercise](../session_4/03_calculate_ndvi_part_2.ipynb), you constructed a notebook to resample a year's worth of Sentinel-2 data into quarterly time steps.
In this section, you will continue from where you ended in the previous exercise. Most of the code will remain unchanged, but we will add a new measurement to those already being loaded, which will enable us to calculate and plot NDVI.
## Open and run notebook
If you are following directly on from the last section, you can skip this step. If you have closed your Sandbox browser tab or disconnected from the Internet between exercises, follow these steps to ensure correct package imports and connection to the datacube.
1. Navigate to the **Training** folder.
2. Double-click `Calculate_ndvi.ipynb`. It will open in the Launcher.
3. Select **Kernel -> Restart Kernel and Clear All Outputs…**.
4. When prompted, select **Restart**.
## Making changes to the load cell
Make the changes below to modify the load cell.
### Adding `nir` measurement
To calculate NDVI, we need to load Sentinel-2's near-infrared band. In the Sandbox, it is called `nir`.
To add the band, modify the `load_ard` cell according to the step below:
1. Add `nir` to the measurements array.
```
measurements = ['red', 'green', 'blue', 'nir']
```
If you completed the above step, your `load_ard` cell should look like:
sentinel_2_ds = load_ard(
dc=dc,
products=["s2_l2a"],
x=x, y=y,
time=("2019-01", "2019-12"),
output_crs="EPSG:6933",
measurements=['red', 'green', 'blue', 'nir'],
resolution=(-10, 10),
group_by='solar_day')
### Running the notebook
1. Select **Kernel -> Restart Kernel and Run All Cells…**.
2. When prompted, select **Restart**.
The notebook may take a little while to run. Check all the cells have run successfully with no error messages.
Did you notice any additional data variables in the `sentinel_2_ds`?
<img align="middle" src="../_static/session_4/4a.PNG" alt="The DE Africa Sandbox Jupyterlab tutorial image." width="100%">
### Creating a new cell
After successfully running the notebook, this cell will be the last cell:
`geomedian_resample`
<img align="middle" src="../_static/session_4/4-nir.PNG" alt="The geomedian_resample dataset with NIR band." width="100%">
Notice it now contains the NIR band data, which is the data we just loaded.
Follow the steps below to create a new cell.
1. Make sure the last cell is selected.
2. Press the `Esc` key, then follow it by pressing the `B` key. A new cell will be created below the current cell.
Use the method above to create a new cell.
## Calculate NDVI
One of the most commonly used remote sensing indices is the Normalised Difference Vegetation Index or `NDVI`.
This index uses the ratio of the red and near-infrared (NIR) bands to identify live green vegetation.
The formula for NDVI is:
$$
\begin{aligned}
\text{NDVI} & = \frac{(\text{NIR} - \text{Red})}{(\text{NIR} + \text{Red})} \\
\end{aligned}
$$
When interpreting this index, high values indicate vegetation, and low values indicate soil or water.
### Define NDVI formula
In a new cell, calculate the NDVI for the resampled geomedian dataset. To make it simpler, you can store the red and near-infrared bands in new variables, then calculate the NDVI using those variables, as shown below:
```
nir = geomedian_resample.nir
red = geomedian_resample.red
NDVI = (nir - red) / (nir + red)
```
Run the cell using `Shift + Enter`.
### Plot NDVI for each geomedian
Our calculation is now stored in the `NDVI` variable. To visualise it, we can attach the `.plot()` method, which will give us an image of the NDVI for each geomedian in our dataset. We can then customise the plot by passing parameters to the `.plot()` method, as shown below:
```
NDVI.plot(col='time', vmin=-0.50, vmax=0.8, cmap='RdYlGn')
```
Run the cell using `Shift + Enter`
* `col='time'` tells the plot that we want to display one image for each time step in our dataset.
* `vmin=-0.50` tells the plot to display all values below `-0.50` as the same colour. This can help keep contrast in the images (remember that NDVI can take values from -1 to 1).
* `vmax=0.8` tells the plot to display all values above `0.8` as the same colour. This can help keep contrast in the images (remember that NDVI can take values from -1 to 1).
* `cmap='RdYlGn'` tells the plot to display the NDVI values using a colour scale that ranges from red for low values to green for high values. This helps us because healthy vegetation shows up as green, and non-vegetation shows up as red.
If you implement the NDVI plotting code correctly, you should see the image below:
<img align="middle" src="../_static/session_4/5.PNG" alt="The DE Africa Sandbox Jupyterlab tutorial image." width="100%">
In the image above, vegetation shows up as green (NDVI > 0).
Sand shows up as yellow (NDVI ~ 0) and water shows up as red (NDVI < 0).
### Plot time series of the NDVI area
While it is useful to see the NDVI values over the whole area in the plots above, it can sometimes be useful to calculate summary statistics, such as the mean NDVI for each geomedian. This can quickly reveal trends in vegetation health across time.
To calculate the mean NDVI, we can apply the `.mean()` method to our NDVI variable. We can also then apply the `.plot()` method to see the result, as shown below:
```
NDVI.mean(dim=['x', 'y']).plot(size=6)
```
Run the cell using `Shift + Enter`
* `NDVI.mean(dim=['x', 'y'])` calculates the mean over all pixels, indicated by `dim=['x', 'y']`. To instead calculate the mean over all times, you would write `dim=['time']`.
* `NDVI.mean(dim=['x', 'y']).plot(size=6)` calculates the mean over all pixels, then plots the result. The `size=6` argument specifies the size of the plot.
If you implement the calculation and plotting code correctly, you should see the image below:
<img align="middle" src="../_static/session_4/6.PNG" alt="The DE Africa Sandbox Jupyterlab tutorial image." width="100%">
Rather than a spatial view of NDVI at each time step, we see a single value (the mean NDVI) for each time.
If you would like to add a title and y-axis label to this plot, you can add the following code below the command to calculate and plot the mean:
```
import matplotlib.pyplot as plt  # if not already imported earlier in the notebook

NDVI.mean(dim=['x', 'y']).plot(size=6)
plt.title('Quarterly Trend in NDVI')
plt.ylabel('Mean NDVI')
```
<img align="middle" src="../_static/session_4/7.PNG" alt="The DE Africa Sandbox Jupyterlab tutorial image." width="100%">
Run the cell using `Shift + Enter`.
## Conclusion
Congratulations! You have successfully calculated and visualised the NDVI for a series of geomedian composite images.
If you'd like to experiment further, try running the code with different areas. Did you learn anything interesting to share with us?
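As a minimal sketch of what that might look like, the cell below redefines the study area and reloads the data with the same settings used earlier, assuming the `dc` datacube connection and the `load_ard` function from earlier in the notebook are still available; the coordinate values are hypothetical placeholders, not a recommended site.
```
# Hypothetical example coordinates: replace with the longitude/latitude ranges you want to study.
x = (31.16, 31.26)    # longitude range (placeholder values)
y = (-29.95, -29.85)  # latitude range (placeholder values)

# Re-load the data for the new area, then re-run the resampling, geomedian and NDVI cells.
sentinel_2_ds = load_ard(
    dc=dc,
    products=["s2_l2a"],
    x=x, y=y,
    time=("2019-01", "2019-12"),
    output_crs="EPSG:6933",
    measurements=['red', 'green', 'blue', 'nir'],
    resolution=(-10, 10),
    group_by='solar_day')
```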
|
github_jupyter
|
measurements = ['red', 'green', 'blue', 'nir']
```
If you completed the above step, your `load_ard` cell should look like:
sentinel_2_ds = load_ard(
dc=dc,
products=["s2_l2a"],
x=x, y=y,
time=("2019-01", "2019-12"),
output_crs="EPSG:6933",
measurements=['red', 'green', 'blue', 'nir'],
resolution=(-10, 10),
group_by='solar_day')
### Running the notebook
1. Select **Kernel -> Restart Kernel and Run All Cells…**.
2. When prompted, select **Restart**.
The notebook may take a little while to run. Check all the cells have run successfully with no error messages.
Did you notice any additional data variables in the `sentinel_2_ds`?
<img align="middle" src="../_static/session_4/4a.PNG" alt="The DE Africa Sandbox Jupyterlab tutorial image." width="100%">
### Creating a new cell
After successfully running the notebook, this cell will be the last cell:
`geomedian_resample`
<img align="middle" src="../_static/session_4/4-nir.PNG" alt="The geomedian_resample dataset with NIR band." width="100%">
Notice it now contains the NIR band data, which is the data we just loaded.
Follow the steps below to create a new cell.
1. Make sure the last cell is selected.
2. Press the `Esc` key, then follow it by pressing the `B` key. A new cell will be created below the current cell.
Use the method above to create a new cell.
## Calculate NDVI
One of the most commonly used remote sensing indices is the Normalised Difference Vegetation Index or `NDVI`.
This index uses the ratio of the red and near-infrared (NIR) bands to identify live green vegetation.
The formula for NDVI is:
$$
\begin{aligned}
\text{NDVI} & = \frac{(\text{NIR} - \text{Red})}{(\text{NIR} + \text{Red})} \\
\end{aligned}
$$
When interpreting this index, high values indicate vegetation, and low values indicate soil or water.
### Define NDVI formula
In a new cell, calculate the NDVI for the resampled geomedian dataset. To make it simpler, you can store the red and near-infrared bands in new variables, then calculate the NDVI using those variables, as shown below:
Run the cell using `Shift + Enter`.
### Plot NDVI for each geomedian
Our calculation is now stored in the `NDVI` variable. To visualise it, we can attach the `.plot()` method, which will give us an image of the NDVI for each geomedian in our dataset. We can then customise the plot by passing parameters to the `.plot()` method, as shown below:
Run the cell using `Shift + Enter`
* `col='time'` tells the plot that we want to display one image for each time step in our dataset.
* `vmin=-0.50` tells the plot to display all values below `-0.50` as the same colour. This can help keep contrast in the images (remember that NDVI can take values from -1 to 1).
* `vmax=0.8` tells the plot to display all values above `0.8` as the same colour. This can help keep contrast in the images (remember that NDVI can take values from -1 to 1).
* `cmap='RdYlGn'` tells the plot to display the NDVI values using a colour scale that ranges from red for low values to green for high values. This helps us because healthy vegetation shows up as green, and non-vegetation shows up as red.
If you implement the NDVI plotting code correctly, you should see the image below:
<img align="middle" src="../_static/session_4/5.PNG" alt="The DE Africa Sandbox Jupyterlab tutorial image." width="100%">
In the image above, vegetation shows up as green (NDVI > 0).
Sand shows up as yellow (NDVI ~ 0) and water shows up as red (NDVI < 0).
### Plot time series of the NDVI area
While it is useful to see the NDVI values over the whole area in the plots above, it can sometimes be useful to calculate summary statistics, such as the mean NDVI for each geomedian. This can quickly reveal trends in vegetation health across time.
To calculate the mean NDVI, we can apply the `.mean()` method to our NDVI variable. We can also then apply the `.plot()` method to see the result, as shown below:
Run the cell using `Shift + Enter`
* `NDVI.mean(dim=['x', 'y'])` calculates the mean over all pixels, indicated by `dim=['x', 'y']`. To instead calculate the mean over all times, you would write `dim=['time']`.
* `NDVI.mean(dim=['x', 'y']).plot(size=6)` calculates the mean over all pixels, then plots the result. The `size=6` argument specifies the size of the plot.
If you implement the calculation and plotting code correctly, you should see the image below:
<img align="middle" src="../_static/session_4/6.PNG" alt="The DE Africa Sandbox Jupyterlab tutorial image." width="100%">
Rather than a spatial view of NDVI at each time step, we see a single value (the mean NDVI) for each time.
If you would like to add a title and y-axis label to this plot, you can add the following code below the command to calculate and plot the mean:
| 0.94672 | 0.991381 |
[](https://www.pythonista.io)
# Expressions with operators in Python.
Operators are symbols or reserved words that the Python interpreter recognizes within its syntax in order to perform a specific action (operation).
```
<object 1> <operator> <object 2>
```
```
<object 1> <operator 1> <object 2> <operator 2> .... <operator n-1> <object n>
```
Where:
* ```<object i>``` is an object compatible with the operator that relates it to another object.
* ```<operator i>``` is a valid operator.
**Examples:**
The following cells show examples of expressions with operators.
```
1 + 1
15 * 4 + 1 / 3 ** 5
```
## Arithmetic operators.
Arithmetic operators make it possible to perform this kind of operation with numeric objects.
|Operator|Description|
|:------:|:---------:|
|```+``` |Addition |
|```-``` |Subtraction |
|```*``` |Multiplication|
|```**```|Exponent |
|```/``` |Division |
|```//```|Integer division|
|```%``` |Remainder |
**Note:** In Python 3, division between objects of type ```int``` yields an object of type ```float```.
**Note:** Even though it does not affect the syntax, using spaces around arithmetic operators makes the operations easier to read. For further reference, see [PEP-8](https://www.python.org/dev/peps/pep-0008/).
**Examples:**
* The following cell contains an expression with the ```+``` operator.
```
3 + 2
```
* The following cell contains an expression with the ```-``` operator.
```
3 - 2
```
* The following cell contains an expression with the ```*``` operator.
```
3 * 2
```
* The following cell contains an expression with the ```**``` operator, raising ```3``` to the power of ```2```.
```
3 ** 2
```
* The following cell contains an expression with the ```**``` operator, raising ```3``` to a fractional power.
```
3 ** 0.5
```
* The following cell contains an expression with the ```/``` operator.
```
3 / 2
```
* The following cell contains an expression with the ```//``` operator.
```
3 // 2
```
* The following cell contains an expression with the ```%``` operator.
```
3 % 2
```
### Precedence rules in arithmetic operations.
Arithmetic operators follow the precedence order below, evaluating from left to right (note that multiplication and division actually share the same precedence level, as do addition and subtraction):
1. Parentheses.
2. Exponent.
3. Multiplication.
4. Division.
5. Addition.
6. Subtraction.
**Examples:**
* The following cell performs an operation on integers that follows the precedence rules described above:
    1. ```4 ** 2``` is evaluated first, leaving ```12 * 5 + 2 / 16```.
    2. ```12 * 5``` is evaluated next, leaving ```60 + 2 / 16```.
    3. ```2 / 16``` is evaluated, leaving ```60 + 0.125```, which adds up to ```60.125```.
```
12 * 5 + 2 / 4 ** 2
```
* The following cells include parentheses, which make it possible to group the arithmetic operations.
```
(12 * 5) + (2 / (4 ** 2))
(12 * 5) + (2 / 4) ** 2
(12 * (5 + 2) / 3) ** 2
```
### Integer division in Python 2.
In Python 3, division between objects of type *int* yields an object of type *float*. In Python 2, division between objects of type *int* yields only the integer part of the division.
**Examples:**
```
>>> 3 / 4
0
>>> 10 / 5
2
>>>
```
## Operators for ordered collections.
Objects of type ```str```, ```bytes```, ```bytearray```, ```list``` and ```tuple``` support the following operators.
|Operator|Description|
|:------:|:---------:|
|```+``` |Concatenation|
|```*``` |Repetition|
### The concatenation operator ```+```.
This operator is used to join collections of the same type, one after the other, into a new collection.
```
<collection 1> + <collection 2>
```
**Examples:**
* The following cells show valid operations with the ```+``` operator.
```
"hola" + "mundo"
[1, 2, 3] + ['uno', 'dos', 'tres']
```
* The following cell will try to use the ```+``` operator with collections that support this operator but are of different types, which will raise a ```TypeError```.
```
[1, 2, 3] + ('uno', 'dos', 'tres')
```
* The following cell concatenates two objects of type ```tuple```.
```
(1, 2, 3) + ('uno', 'dos', 'tres')
```
* The following cell will try to use the ```+``` operator with the object ```3```, which does not support this operator, raising a ```TypeError```.
```
'hola' + 3
```
### The repetition operator ```*```.
This operator is used to create a collection containing the contents of another collection repeated a given number of times.
```
<collection> * <n>
```
Where:
* ```<n>``` is an object of type ```int``` with a positive value.
**Examples:**
* The following cells illustrate the use of the ```*``` operator with collections.
```
1 * [1, 2, 3]
'hola' * 3
3 * 'hola'
(None,) * 6
```
* The following cell will try to apply the ```*``` operator to two objects of type ```list```, raising a ```TypeError```.
```
[1, 2, 3] * [3]
```
## Assignment operators.
Assignment operators are used to bind a name to an object/value in the namespace.
The assignment operator ```=``` is the best known, but there are other assignment operators, such as:
|Operator|Expression|Equivalent to|
|:------:|:-----:|:-----------:|
|```=```|```x = y```|```x = y```|
|```+=```|```x += y```|```x = x + y```|
|```-=```|```x -= y```|```x = x - y```|
|```*=```|```x *= y```|```x = x * y```|
|```**=```|```x **= y```|```x = x ** y```|
|```/=```|```x /= y```|```x = x / y```|
|```//=```|```x //= y```|```x = x // y```|
|```%=```|```x %= y```|```x = x % y```|
Where:
* ```x``` is a name.
* ```y``` is an object compatible with the operator.
**Examples:**
* The following cell uses the ```=``` operator to assign the name ```x``` to the object ```2```.
```
x = 2
```
* The following cell performs an operation similar to ```x = x + 3```.
```
x += 3
```
* Now ```x``` is ```5```.
```
x
```
* The following cell performs an operation similar to ```x = x ** 3```.
```
x **= 3
```
* Now ```x``` is ```125```.
```
x
```
* The following cell performs an operation similar to ```x = x // 14.2```.
```
x //= 14.2
```
* Now ```x``` is ```8.0```.
```
x
```
* The following cell performs an operation similar to ```x = x % 1.5```.
```
x %= 1.5
```
* Now ```x``` is ```0.5```.
```
x
```
* The following cell creates the object ```(1, 2, 3)``` and assigns it the name ```tupla```.
```
tupla = (1, 2, 3)
id(tupla)
```
* The following cell performs a concatenation using the ```+=``` operator.
```
tupla += ('cuatro', 'cinco', 'seis')
```
* Now ```tupla``` corresponds to the object ```(1, 2, 3, 'cuatro', 'cinco', 'seis')```, and ```id(tupla)``` shows that it is a new object.
```
tupla
id(tupla)
```
## Logical expressions.
Logical expressions evaluate a condition, yielding the value *True* (```True```) if the condition holds, or *False* (```False```) if it does not.
### Comparison operators.
These operators compare two expressions. The result of the evaluation is an object of type ```bool```.
|Operator|Evaluates |
|:------:|:---------------|
|```==``` |```a == b``` Is a equal to b?|
|```!=``` |```a != b``` Is a different from b?|
|```>``` |```a > b``` Is a greater than b?|
|```<``` |```a < b``` Is a less than b?|
|```>=``` |```a >= b``` Is a greater than or equal to b?|
|```<=``` |```a <= b``` Is a less than or equal to b?|
**Examples:**
* The following cell evaluates whether the values of the objects ```"hola"``` and ```'hola'``` are equal. The result is ```True```.
```
"hola" == 'hola'
```
* The following cell evaluates whether the values of the objects ```"hola"``` and ```'Hola'``` are different. The result is ```True```.
```
"hola" != 'Hola'
```
* The following cell evaluates whether the value ```5``` is greater than ```3```. The result is ```True```.
```
5 > 3
```
* The following cell evaluates whether the value ```5``` is less than or equal to ```3```. The result is ```False```.
```
5 <= 3
```
* The following cell evaluates whether the result of the expression ```2 * 9 ** 0.5``` is equal to ```6```. The result is ```True```.
```
2 * 9 ** 0.5 == 6
```
* The following cell evaluates whether the result of the expression ```(2 * 9) ** 0.5``` is equal to ```6```. The result is ```False```.
```
(2 * 9) ** 0.5 == 6
```
### Identity operators.
The ```is``` and ```is not``` operators evaluate whether an identifier refers to exactly the same object, or whether it belongs to a type.
|Operator |Evaluates|
|:---------:|:----:|
|```is``` |```a is b``` Equivalent to ```id(a) == id(b)```|
|```is not``` |```a is not b``` Equivalent to ```id(a) != id(b)```|
**Examples:**
* The following cell assigns the names ```a``` and ```b``` to the object ```45```.
```
a = b = 45
```
* The following cell evaluates whether the names ```a``` and ```b``` refer to the same object. The result is ```True```.
```
a is b
```
* The following cell evaluates whether the result of the expression ```type("Hola")``` is the object ```str```. The result is ```True```.
```
type("Hola") is str
```
* The following cell evaluates whether the result of the expression ```type("Hola")``` is NOT the object ```complex```. The result is ```True```.
```
type("Hola") is not complex
```
* The following cells show that ```True``` and ```1``` have the same value, but are not the same object.
```
True == 1
True is 1
```
### Membership operators.
The ```in``` and ```not in``` operators evaluate whether an object is contained in a collection.
**Examples:**
* The following cell evaluates whether the object ```'a'``` is contained in the object ```'Hola'```. The result is ```True```.
```
'a' in 'Hola'
```
* The following cell evaluates whether the object ```'z'``` is contained in the object ```'Hola'```. The result is ```False```.
```
'z' in 'Hola'
```
* The following cell evaluates whether the object ```'la'``` is NOT contained in the object ```'Hola'```. The result is ```False```.
```
'la' not in 'Hola'
```
* The following cell evaluates whether the object ```'z'``` is NOT contained in the object ```'Hola'```. The result is ```True```.
```
'z' not in 'Hola'
```
### Boolean algebra and truth tables.
Boolean algebra ([Álgebra de Boole](https://es.wikipedia.org/wiki/%C3%81lgebra_de_Boole)) is a branch of mathematics used to build logical structures from Boolean values.
In Python, the objects of type ```bool```, ```True``` and ```False```, are the values with which Boolean-logic operations can be performed.
### Operaciones booleanos.
Un operación booleana permite definir un operador lógico, el cual relaciona a dos valores booleanos con un valor específico.
### Tablas de la verdad.
Son tablas que describen los posibles resultados de aplicar un operador a dos valores booleanos.
#### The logical operator ```OR```.
This operator yields ```True``` when at least one of the values is ```True```. It returns ```False``` only when both values are ```False```.
|OR| True|False|
|:--:|:--:|:--:|
|**True**|True|True|
|**False**|True|False|
This operator corresponds to the reserved word ```or``` in Python.
```
<value 1> or <value 2>
```
Where:
* ```<value 1>``` and ```<value 2>``` are usually values of type ```bool```.
#### The logical operator ```AND```.
This operator yields ```True``` only when both values are ```True```. The result is ```False``` in every other case.
|AND| True|False|
|:--:|:--:|:--:|
|**True**|True|False|
|**False**|False|False|
This operator corresponds to the reserved word ```and``` in Python.
```
<value 1> and <value 2>
```
Where:
* ```<value 1>``` and ```<value 2>``` are usually values of type ```bool```.
#### The logical operator ```NOT```.
The ```NOT``` operator inverts the boolean value immediately to its right.
This operator corresponds to the reserved word ```not``` in Python.
```
not <value>
```
Where:
* ```<value>``` is a value of type ```bool```.
#### The logical operator ```XOR```.
This operator yields ```True``` when the two values are different, and ```False``` when both values are equal.
|XOR| True|False|
|:--:|:--:|:--:|
|**True**|False|True|
|**False**|True|False|
Python has no dedicated logical keyword for ```XOR```.
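* Although there is no ```xor``` keyword, the same truth table can be reproduced in other ways. The following cell is a minimal illustrative sketch (not part of the original material): for ```bool``` values, ```XOR``` can be emulated with ```!=``` or with the bitwise operator ```^```, which is covered later in this document.
```
# XOR emulated with the inequality operator: True only when the values differ.
True != False
# XOR emulated with the bitwise ^ operator, which acts as a logical XOR on bool objects.
True ^ False
```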
### Python's logical operators.
These operators perform the logical operations listed below. They are usually applied to objects of type ```bool```, but Python also allows logical operations with other data types and expressions.
|Operator|Evaluates|
|:------:|:----:|
|```or``` |*a or b* Does a or b hold?|
|```and``` |*a and b* Do both a and b hold?|
|```not```|*not x* The opposite of x|
**Note:** Logical operations are evaluated from left to right, but they can be grouped with parentheses.
**Examples:**
* The expression in the following cell yields ```True```.
```
True or True
```
* The expression in the following cell yields ```True```.
```
False or True
```
* The expression in the following cell yields ```False```.
```
False or False
```
* The expression in the following cell yields ```False```.
```
False and False
```
* The expression in the following cell yields ```True```.
```
True and True
```
* The expression in the following cell yields ```False```.
```
not True
```
* The expression in the following cell yields ```True```.
```
not False or True
```
* The expression in the following cell yields ```False```.
```
not (False or True)
```
* The following cell first evaluates the expression ```15 == 3```; its result is the value that the ```or``` operator uses.
```
15 == 3 or False
```
* The following cell first evaluates the expressions ```15 > 3``` and ```15 <= 20```; the result of each is then used by the ```and``` operator.
```
15 > 3 and 15 <= 20
```
### Logical operators with other Python objects.
Python's logical operators can be used with objects that are not of type ```bool``` under the following rules:
* ```0``` is equivalent to ```False```.
* The value ```None``` is equivalent to ```False```.
* An empty collection is equivalent to ```False```.
* Any other object is equivalent to ```True```.
**Note:** In some cases the result of these operations is not a value of type ```bool```.
**Examples:**
* The expression in the following cell yields ```False```.
```
None or False
```
* The expression in the following cell yields ```True```.
```
True or None
```
* The following cell yields ```0```.
```
True and 0
```
* For the result of the previous expression to be of type ```bool```, it must be converted explicitly with the ```bool()``` function.
```
bool(True and 0)
```
* The expression in the following cell returns ```123```.
```
'Hola' and 123
```
* The expression in the following cell returns ```'Hola'```.
```
123 and 'Hola'
```
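* A common use of this behaviour (an illustrative sketch, not part of the original material; the names are hypothetical) is supplying a default value with ```or```: since ```or``` returns its first truthy operand, an empty string falls through to the default.
```
# 'or' returns the first truthy operand, so an empty name falls back to the default value.
name = ''
display_name = name or 'anonymous'
display_name
```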
## Bitwise operators.
Bitwise operations are computations applied to each bit of a number's binary representation.
The bitwise operators ```|```, ```^``` and ```&``` perform the same operations as the logical operators, but bit by bit.
|Operator | Description |
|:----------:|:-----------:|
| ```\|```|OR |
| ```^``` | XOR |
| ```&``` | AND |
| ```<<``` | Shift x bits to the left |
| ```>>``` | Shift x bits to the right |
### Truth tables for the bitwise operators.
| \||1|0|
|:--:|:--:|:--:|
|**1**|1|1|
|**0**|1|0|
|^|1|0|
|:--:|:--:|:--:|
|**1**|0|1|
|**0**|1|0|
|&|1|0|
|:--:|:--:|:--:|
|**1**|1|0|
|**0**|0|0|
**Examples:**
* The following cells define the objects ```a```, with a value of ```13```, and ```b```, with a value of ```26```.
```
a = 0b01101
b = 0b11010
a
b
```
* The ```|``` operator is applied to each pair of bits of ```a``` and ```b```. The operation is as follows:
```
a = 01101
b = 11010
| ______
    11111
```
The result is ```31```.
```
a | b
0b11111
```
* The ```^``` operator is applied to each pair of bits of ```a``` and ```b```. The operation is as follows:
```
a = 01101
b = 11010
^ ______
    10111
```
The result is ```23```.
```
a ^ b
0b10111
```
* The ```&``` operator is applied to each pair of bits of ```a``` and ```b```. The operation is as follows:
```
a = 01101
b = 11010
& ______
    01000
```
The result is ```8```.
```
a & b
0b01000
```
* The following cell shifts the bits of ```a``` three positions to the left, which appends three zeros to the right of its binary representation:
```
a = 01101
a << 3
01101000
```
The result is ```104```.
```
a << 3
0b01101000
```
* The following cell shifts the bits of ```b``` two positions to the right, which discards the last two positions of its binary representation:
```
b = 11010
b >> 2
110
```
The result is ```6```.
```
b >> 2
0b110
```
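* As a side note (not stated in the original material, but consistent with the results above), shifting a Python integer left by ```n``` bits is equivalent to multiplying it by ```2 ** n```, and shifting it right by ```n``` bits is equivalent to floor division by ```2 ** n```.
```
# 13 << 3 equals 13 * 2**3 = 104, and 26 >> 2 equals 26 // 2**2 = 6.
(13 << 3) == 13 * 2 ** 3
(26 >> 2) == 26 // 2 ** 2
```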
### Bitwise operators with objects of type ```bool```.
Bitwise operators can also be used in expressions involving objects of type ```bool```.
**Examples:**
* In the expression of the following cell, the ```|``` operator behaves like ```or```. The result is ```True```.
```
True | False
```
* In the expression of the following cell, the ```&``` operator behaves like ```and```. The result is ```False```.
```
True & False
```
* The following cell yields ```False```.
```
True ^ True
```
* The expressions in the following cells treat ```True``` as ```1```.
```
True >> 2
True << 4
```
## The ternary operator.
The ternary operator evaluates a logical expression using the following syntax:
```
<expression 1> if <logical expression> else <expression 2>
```
Where:
* ```<logical expression>``` is a logical expression.
* ```<expression 1>``` is the expression that is evaluated and returned when the logical expression yields ```True```.
* ```<expression 2>``` is the expression that is evaluated and returned when the logical expression yields ```False```.
**Example:**
* The name ```numero``` is assigned to the object ```1126```.
```
numero = 1126
```
* The remainder of dividing ```numero``` by ```2``` is zero, which means that ```numero``` is even.
```
numero % 2
```
* The logical expression ```numero % 2 == 0``` yields ```True```.
```
numero % 2 == 0
```
* The following expression uses a ternary operator that returns the string ```"par"``` when the object named ```numero``` is divisible by ```2```, and the string ```"non"``` otherwise.
```
"par" if numero % 2 == 0 else "non"
```
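* A common pattern (an illustrative sketch, not part of the original material; the name ```paridad``` is hypothetical) is to bind the result of the ternary expression to a name, since it is an expression like any other.
```
# The ternary expression evaluates to "par" for even numbers and "non" otherwise;
# its result can be assigned to a name.
paridad = "par" if numero % 2 == 0 else "non"
paridad
```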
## The attribute operator.
Since everything in Python is an object, an object's attributes and methods can be accessed through the attribute operator, which is a dot ```.```.
### Attributes.
In the object-oriented programming paradigm, an attribute is a value associated with an object through a name. In Python, attributes are themselves objects bound to another object through a name.
An object's attribute is accessed with the following syntax:
```
<object>.<attribute>
```
Where:
* ```<object>``` is any Python object.
* ```<attribute>``` is the name of an attribute that the object has.
### Methods.
Methods are a kind of attribute with the ability to execute instructions, in the same way a function does.
A method is executed with the following syntax:
```
<object>.<method>(<argument 1>, <argument 2>, ..., <argument n>)
```
Where:
* ```<object>``` is any Python object.
* ```<method>``` is the name of a method that the object has.
* ```<argument i>``` is an object passed to the method so that it can be executed. A method may require no arguments at all.
**Examples:**
* Objects of type ```complex``` have the following attributes:
    * ```real```, an object of type ```float``` holding the value of the real component of the complex number.
    * ```imag```, an object of type ```float``` holding the value of the imaginary component of the complex number.
* Likewise, objects of type ```complex``` have the ```conjugate()``` method, which computes and returns the complex conjugate, also of type ```complex```.
* The following cell returns the ```real``` attribute of the object ```(15-23j)```, which is of type ```complex```. The result is ```15.0```.
```
(15-23j).real
```
* The following cell returns the ```imag``` attribute of the object ```(15-23j)```, which is of type ```complex```. The result is ```-23.0```.
```
(15-23j).imag
```
* The expression ```(15-13j).conjugate``` refers to the method object itself (a callable, reported by CPython as type ```builtin_function_or_method```) rather than calling it.
**Note:** Functions (objects of type ```function```) are covered later.
```
(15-13j).conjugate
```
* The following cell returns the conjugate of the object ```(15-23j)``` by executing the ```conjugate()``` method. The result is ```(15+23j)```.
```
(15-23j).conjugate()
```
* Objects of type ```float``` have the ```__int__()``` method, which returns the truncated integer value of the ```float``` object as an object of type ```int```.
* The following cell executes the ```__int__()``` method of the object ```-12.3```. The result is ```-12```.
```
-12.3.__int__()
```
* Objects of type ```float``` and ```int``` have the ```__abs__()``` method, which returns the absolute value of the object.
* The following cell executes the ```__abs__()``` method of the object ```-12```. The result is ```12```.
```
(-12).__abs__()
```
### Chaining attributes.
The ```.``` operator can be chained with the following syntax:
```
<object>.<attribute 1>.<attribute 2>. ... .<attribute n>
```
Where:
* ```<attribute i>``` can be an attribute or method applied to the object returned by invoking or executing the previous attribute or method.
**Example:**
* The following cell does the following:
    * It gets the ```real``` attribute of the object ```(-15.456-13.23j)```, which is the ```float``` object with value ```-15.456```.
    * On the attribute ```(-15.456-13.23j).real``` it executes the ```__int__()``` method, which returns the ```int``` object with value ```-15```.
    * On the result of executing ```(-15.456-13.23j).real.__int__()``` it executes the ```__abs__()``` method, which returns the ```int``` object with value ```15```.
```
(-15.456-13.23j).real.__int__().__abs__()
```
## The ```eval()``` function.
The ```eval()``` function evaluates an object of type ```str``` as if it were an expression.
```
eval(<str object>)
```
If the text to evaluate is not a valid expression, ```eval()``` raises an error.
**Examples:**
* The following cells illustrate the use of the ```eval()``` function.
```
eval("12 * 300")
eval("0x11 + 0x10010")
eval("12 > 5")
eval("type('Hola')")
eval("print('hola')")
```
* The expression inside the string passed to ```eval()``` in the following cell refers to the name ```indefinido```, which is not defined, so a ```NameError``` is raised.
```
eval("indefinido * 3")
```
* The string passed to ```eval()``` in the following cell is not a valid expression, so a ```SyntaxError``` is raised.
```
eval("Hola Mundo")
```
<p style="text-align: center"><a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/80x15.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.</p>
<p style="text-align: center">© José Luis Chiquete Valdivieso. 2020.</p>