markdown (stringlengths 0-1.02M) | code (stringlengths 0-832k) | output (stringlengths 0-1.02M) | license (stringlengths 3-36) | path (stringlengths 6-265) | repo_name (stringlengths 6-127)
---|---|---|---|---|---|
It's really interesting that some common French traffic signs are not present in the INI German traffic signs dataset, or differ from it. Whatever our input - even if it's not present in the training dataset - by using softmax activation our classifier cannot say "this is a new traffic sign that it doesn't recognize" (the sum of probabilities across all classes is 1); it just tries to find the class that most probably suits the input. | #Normalize the dataset
X_frenchsign_norm = input_normalization(images_frenchsign)
#One-hot matrix
y_frenchsign_onehot = keras.utils.to_categorical(y_frenchsign, n_classes)
#Load saved model
reconstructed = keras.models.load_model("LeNet_enhanced_trainingdataset_HLS.h5")
#Evaluate and display the prediction performance
prediction_performance = reconstructed.evaluate(X_frenchsign_norm, y_frenchsign_onehot)
dict(zip(reconstructed.metrics_names, prediction_performance))
#### Prediction for all instances inside the test dataset
y_pred_logits = reconstructed.predict(X_frenchsign_norm)
y_pred_proba = tf.nn.softmax(y_pred_logits).numpy()
y_pred_class = y_pred_proba.argmax(axis=-1)
### Showing prediction results
for i, pred in enumerate(y_pred_class):
print('Image {} - Target = {}, Predicted = {}'.format(i, y_frenchsign[i], pred)) | Image 0 - Target = 13, Predicted = 6
Image 1 - Target = 31, Predicted = 6
Image 2 - Target = 29, Predicted = 6
Image 3 - Target = 24, Predicted = 6
Image 4 - Target = 26, Predicted = 6
Image 5 - Target = 27, Predicted = 6
Image 6 - Target = 33, Predicted = 6
Image 7 - Target = 17, Predicted = 6
Image 8 - Target = 15, Predicted = 6
Image 9 - Target = 34, Predicted = 6
Image 10 - Target = 12, Predicted = 6
Image 11 - Target = 2, Predicted = 6
Image 12 - Target = 2, Predicted = 6
Image 13 - Target = 4, Predicted = 6
Image 14 - Target = 2, Predicted = 6
| MIT | traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb | nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow |
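One common, if imperfect, workaround for the limitation described above is to threshold the maximum softmax probability and treat low-confidence predictions as "unrecognized". The sketch below is an added illustration (not part of the original notebook); the 0.5 threshold is an arbitrary example value, not a tuned one.

```python
# Hedged sketch: flag inputs whose top softmax score is low as possibly unknown signs.
import numpy as np

def flag_unknown(probabilities, threshold=0.5):
    """Boolean mask: True where the most confident class is below the threshold."""
    return probabilities.max(axis=-1) < threshold

# Example usage with the y_pred_proba array computed above:
# unknown_mask = flag_unknown(y_pred_proba)
# print("Possibly unrecognized signs:", np.where(unknown_mask)[0])
```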
*French traffic signs to classify &#8595;* | #### plot softmax probs along with traffic sign examples
n_img = X_frenchsign_norm.shape[0]
fig, axarray = plot.subplots(n_img, 2)
plot.suptitle('Visualization of softmax probabilities', fontweight='bold')
for r in range(0, n_img):
axarray[r, 0].imshow(numpy.squeeze(images_frenchsign[r]))
axarray[r, 0].set_xticks([]), axarray[r, 0].set_yticks([])
plot.setp(axarray[r, 0].get_xticklabels(), visible=False)
plot.setp(axarray[r, 0].get_yticklabels(), visible=False)
axarray[r, 1].bar(numpy.arange(n_classes), y_pred_proba[r])
axarray[r, 1].set_ylim([0, 1])
plot.setp(axarray[r, 1].get_yticklabels(), visible=False)
plot.draw()
fig.savefig('figures/' + 'french_sign_softmax_visuali_LeNet_enhanced_trainingdataset_HLS' + '.jpg', dpi=700)
K = 3
#### print top K predictions of the model for each example, along with confidence (softmax score)
for i in range(len(images_frenchsign)):
print('Top {} model predictions for image {} (Target is {:02d})'.format(K, i, y_frenchsign[i]))
top_3_idx = numpy.argsort(y_pred_proba[i])[-3:]
top_3_values = y_pred_proba[i][top_3_idx]
top_3_logits = y_pred_logits[i][top_3_idx]
for k in range(K):
print(' Prediction = {:02d} with probability {:.4f} (logit is {:.4f})'.format(top_3_idx[k], top_3_values[k], top_3_logits[k])) | Top 3 model predictions for image 0 (Target is 13)
Prediction = 02 with probability 0.0250 (logit is 0.1806)
Prediction = 31 with probability 0.0262 (logit is 0.2275)
Prediction = 06 with probability 0.0292 (logit is 0.3377)
Top 3 model predictions for image 1 (Target is 31)
Prediction = 02 with probability 0.0250 (logit is 0.1806)
Prediction = 31 with probability 0.0262 (logit is 0.2275)
Prediction = 06 with probability 0.0292 (logit is 0.3377)
Top 3 model predictions for image 2 (Target is 29)
Prediction = 02 with probability 0.0250 (logit is 0.1806)
Prediction = 31 with probability 0.0262 (logit is 0.2275)
Prediction = 06 with probability 0.0292 (logit is 0.3377)
Top 3 model predictions for image 3 (Target is 24)
Prediction = 02 with probability 0.0250 (logit is 0.1806)
Prediction = 31 with probability 0.0262 (logit is 0.2275)
Prediction = 06 with probability 0.0292 (logit is 0.3377)
Top 3 model predictions for image 4 (Target is 26)
Prediction = 02 with probability 0.0250 (logit is 0.1806)
Prediction = 31 with probability 0.0262 (logit is 0.2275)
Prediction = 06 with probability 0.0292 (logit is 0.3377)
Top 3 model predictions for image 5 (Target is 27)
Prediction = 02 with probability 0.0250 (logit is 0.1806)
Prediction = 31 with probability 0.0262 (logit is 0.2275)
Prediction = 06 with probability 0.0292 (logit is 0.3377)
Top 3 model predictions for image 6 (Target is 33)
Prediction = 02 with probability 0.0250 (logit is 0.1806)
Prediction = 31 with probability 0.0262 (logit is 0.2275)
Prediction = 06 with probability 0.0292 (logit is 0.3377)
Top 3 model predictions for image 7 (Target is 17)
Prediction = 02 with probability 0.0250 (logit is 0.1806)
Prediction = 31 with probability 0.0262 (logit is 0.2275)
Prediction = 06 with probability 0.0292 (logit is 0.3377)
Top 3 model predictions for image 8 (Target is 15)
Prediction = 02 with probability 0.0250 (logit is 0.1806)
Prediction = 31 with probability 0.0262 (logit is 0.2275)
Prediction = 06 with probability 0.0292 (logit is 0.3377)
Top 3 model predictions for image 9 (Target is 34)
Prediction = 02 with probability 0.0250 (logit is 0.1806)
Prediction = 31 with probability 0.0262 (logit is 0.2275)
Prediction = 06 with probability 0.0292 (logit is 0.3377)
Top 3 model predictions for image 10 (Target is 12)
Prediction = 02 with probability 0.0250 (logit is 0.1806)
Prediction = 31 with probability 0.0262 (logit is 0.2275)
Prediction = 06 with probability 0.0292 (logit is 0.3377)
Top 3 model predictions for image 11 (Target is 02)
Prediction = 02 with probability 0.0250 (logit is 0.1806)
Prediction = 31 with probability 0.0262 (logit is 0.2275)
Prediction = 06 with probability 0.0292 (logit is 0.3377)
Top 3 model predictions for image 12 (Target is 02)
Prediction = 02 with probability 0.0250 (logit is 0.1806)
Prediction = 31 with probability 0.0262 (logit is 0.2275)
Prediction = 06 with probability 0.0292 (logit is 0.3377)
Top 3 model predictions for image 13 (Target is 04)
Prediction = 02 with probability 0.0250 (logit is 0.1806)
Prediction = 31 with probability 0.0262 (logit is 0.2275)
Prediction = 06 with probability 0.0292 (logit is 0.3377)
Top 3 model predictions for image 14 (Target is 02)
Prediction = 02 with probability 0.0250 (logit is 0.1806)
Prediction = 31 with probability 0.0262 (logit is 0.2275)
Prediction = 06 with probability 0.0292 (logit is 0.3377)
| MIT | traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb | nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow |
*Visualization of softmax probabilities &#8595;* Visualization of layers | ### Import tensorflow and keras
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import Model
import matplotlib.pyplot as plot
print ("TensorFlow version: " + tf.__version__)
# Load pickled data
import pickle
import numpy
training_file = 'traffic-signs-data/train.p'
with open(training_file, mode='rb') as f:
train = pickle.load(f)
X_train, y_train = train['features'], train['labels'] # training dataset
n_classes = len(numpy.unique(y_train))
import cv2
def input_normalization(X_in):
X = numpy.float32(X_in/255.0)
return X
# normalization of dataset
X_train_norm = input_normalization(X_train)
# one-hot matrix
y_train_onehot = keras.utils.to_categorical(y_train, n_classes)
#Load saved model
reconstructed = keras.models.load_model("LeNet_enhanced_trainingdataset_HLS.h5")
#Build model for layer display
layers_output = [layer.output for layer in reconstructed.layers]
outputs_model = Model(inputs=reconstructed.input, outputs=layers_output)
outputs_history = outputs_model.predict(X_train_norm[900].reshape(1,32,32,3)) | _____no_output_____ | MIT | traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb | nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow |
Display analyzed input | plot.imshow(X_train[900])
def display_layer(outputs_history, col_size, row_size, layer_index):
activation = outputs_history[layer_index]
activation_index = 0
fig, ax = plot.subplots(row_size, col_size, figsize=(row_size*2.5,col_size*1.5))
for row in range(0,row_size):
for col in range(0,col_size):
ax[row][col].axis('off')
if activation_index < activation.shape[3]:
ax[row][col].imshow(activation[0, :, :, activation_index]) # , cmap='gray'
activation_index += 1
display_layer(outputs_history, 3, 2, 1)
display_layer(outputs_history, 8, 8, 2)
display_layer(outputs_history, 8, 8, 3)
display_layer(outputs_history, 8, 8, 4) | _____no_output_____ | MIT | traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb | nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow |
Embed an Escher map in an IPython notebook | import escher
escher.list_available_maps()
b = escher.Builder(map_name='e_coli_core.Core metabolism')
b.display_in_notebook() | _____no_output_____ | MIT | .ipynb_checkpoints/Nucleotide metabolism-checkpoint.ipynb | polybiome/PolyEnzyme |
Plot FBA solutions in Escher | import cobra
model = cobra.io.load_json_model("iECW_1372.json") # E coli metabolic model
FBA_Solution = model.optimize() # FBA of the original model
print('Original Growth rate: %.9f' % FBA_Solution.f)
b = escher.Builder(map_name='e_coli_core.Core metabolism',
reaction_data=FBA_Solution.x_dict,
# color and size according to the absolute value
reaction_styles=['color', 'size', 'abs', 'text'],
# change the default colors
reaction_scale=[{'type': 'min', 'color': '#cccccc', 'size': 4},
{'type': 'mean', 'color': '#0000dd', 'size': 20},
{'type': 'max', 'color': '#ff0000', 'size': 40}],
# only show the primary metabolites
hide_secondary_metabolites=True,
highlight_missing = True)
b.display_in_notebook()
#b.display_in_browser()
# MAP EDITION
model_knockout = model.copy()
cobra.manipulation.delete_model_genes(model_knockout, ["ECW_m3223"]) #ODC
cobra.manipulation.delete_model_genes(model_knockout, ["ECW_m0743"]) #ODC
cobra.manipulation.delete_model_genes(model_knockout, ["ECW_m3196"]) #Agmatinase.
knockout_FBA_solution = model_knockout.optimize() # FBA of the knockout
print('Knockout Growth rate: %.9f' % knockout_FBA_solution.f)
#PASS THE MODEL TO A NEW BUILDER
b = escher.Builder(map_name='e_coli_core.Core metabolism',
reaction_data=knockout_FBA_solution.x_dict,
# color and size according to the absolute value
reaction_styles=['color', 'size', 'abs', 'text'],
# change the default colors
reaction_scale=[{'type': 'min', 'color': '#cccccc', 'size': 4},
{'type': 'mean', 'color': '#0000dd', 'size': 20},
{'type': 'max', 'color': '#ff0000', 'size': 40}],
# only show the primary metabolites
hide_secondary_metabolites=True,
highlight_missing = True)
b.display_in_notebook()
#b.display_in_browser() | _____no_output_____ | MIT | .ipynb_checkpoints/Nucleotide metabolism-checkpoint.ipynb | polybiome/PolyEnzyme |
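To complement the two Escher maps above, the added sketch below compares the wild-type and knockout flux distributions numerically. It assumes the same older cobrapy interface already used in this notebook (`FBA_Solution.x_dict`); with newer cobrapy versions the attribute names differ.

```python
# Sketch (added): reactions whose flux changes most after the knockouts.
flux_changes = {
    rxn: knockout_FBA_solution.x_dict[rxn] - flux
    for rxn, flux in FBA_Solution.x_dict.items()
}
largest = sorted(flux_changes.items(), key=lambda kv: abs(kv[1]), reverse=True)[:10]
for rxn, delta in largest:
    print(rxn, round(delta, 3))
```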
Methods | class ourCircle:
pi = 3.14
def __init__(self,radius=1):
self.radius = radius
self.area = self.getArea(radius)
def setRadius(self,new_radius):
self.radius = new_radius
        self.area = new_radius * new_radius * self.pi
def getCircumference(self):
return self.radius * self.pi * 2
def getArea(self,radius):
return radius * radius * self.pi
cir = ourCircle()
cir.getCircumference()
cir.getArea(2) | _____no_output_____ | MIT | Object Oriented Programming.ipynb | ramsvijay/basic_datatype_python |
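A quick usage check of the corrected `setRadius` method (added example):

```python
cir.setRadius(3)
print(cir.radius, cir.area)  # expected: 3 28.26, since area = 3 * 3 * 3.14
```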
Inheritance | class Animal:
def __init__(self):
print("Animal Object Cost Created")
def whoAmI(self):
print("I am Animal Class")
def eat(self):
print("I am eating")
a = Animal()
a.eat()
class Man(Animal):
def __init__(self):
Animal.__init__(self)
m = Man()
m.eat()
#Polymorphism
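# Added illustration: the original cell names polymorphism but does not show it,
# so here is a small sketch. Both subclasses expose the same whoAmI() method and
# can be used interchangeably through a common interface.
class Dog(Animal):
    def whoAmI(self):
        print("I am a Dog")

class Cat(Animal):
    def whoAmI(self):
        print("I am a Cat")

for animal in [Dog(), Cat()]:
    animal.whoAmI()  # each object answers with its own implementation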
#Exception Handling
try:
    f = open('test','w')
    # f.write("rams")
    f.read()
except IOError:
    print('getting error')
else:
    print("Print CONTENT SUCCESS")
    f.close()
print("Error handled by try and except")
try:
a='ram'
b=10
print(a+b)
except Exception as e:
print(e.args)
try:
a='ram'
b=10
print(a+b)
except Exception as e:
print(e.args)
finally:
print("i'm getting error")
while False:
print("rams")
else:
print("vijay")
#pip install pylint
%%writefile simple.py
a=10
print(a)
! pylint simple.py
%%writefile simple.py
'''
A very simple script
'''
def fun():
first = 1
second = 2
print(first)
print(second)
fun()
!pylint simple.py
#UnitTest
%%writefile capitalize_text.py
def capitalize_test(text):
    return text.capitalize()
%%writefile test_file.py
import unittest
import capitalize_text
class TestCap(unittest.TestCase):
def test_one_word(self):
test = "python"
result = capitalize_text.capitalize_test(text)
        self.assertEqual(result, 'Python')
def test_multiple_words(self):
text = "my python"
result = capitalize_text.capitalize_test(text)
        self.assertEqual(result, 'My python')
if __name__ == '__main__':
unittest.main()
! python test_file.py
! pylint capitalize_text.py | ************* Module capitalize_text
capitalize_text.py:1:0: C0111: Missing module docstring (missing-docstring)
capitalize_text.py:1:0: C0111: Missing function docstring (missing-docstring)
-----------------------------------
Your code has been rated at 0.00/10
| MIT | Object Oriented Programming.ipynb | ramsvijay/basic_datatype_python |
Simulating Grover's Search Algorithm with 2 Qubits | import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline | _____no_output_____ | Apache-2.0 | lessons/misc/quantum-computing/grovers-algorthim-2-qubits.ipynb | UAAppComp/studyGroup |
Define the zero and one vectors. Define the initial state $\psi$: | zero = np.matrix([[1],[0]]);
one = np.matrix([[0],[1]]);
psi = np.kron(zero,zero);
print(psi) | [[1]
[0]
[0]
[0]]
| Apache-2.0 | lessons/misc/quantum-computing/grovers-algorthim-2-qubits.ipynb | UAAppComp/studyGroup |
Define the gates we will use:$\text{Id} = \begin{pmatrix} 1 & 0 \\0 & 1 \end{pmatrix},\quadX = \begin{pmatrix} 0 & 1 \\1 & 0 \end{pmatrix},\quadZ = \begin{pmatrix} 1 & 0 \\0 & -1 \end{pmatrix},\quadH = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\1 & -1 \end{pmatrix},\quad\text{CNOT} = \begin{pmatrix} 1 & 0 & 0 & 0 \\0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\0 & 0 & 1 & 0\end{pmatrix},\quadCZ = (\text{Id} \otimes H) \text{ CNOT } (\text{Id} \otimes H) $ | Id = np.matrix([[1,0],[0,1]]);
X = np.matrix([[0,1],[1,0]]);
Z = np.matrix([[1,0],[0,-1]]);
H = np.sqrt(0.5) * np.matrix([[1,1],[1,-1]]);
CNOT = np.matrix([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]]);
CZ = np.kron(Id,H).dot(CNOT).dot(np.kron(Id,H));
print(CZ) | [[ 1. 0. 0. 0.]
[ 0. 1. 0. 0.]
[ 0. 0. 1. 0.]
[ 0. 0. 0. -1.]]
| Apache-2.0 | lessons/misc/quantum-computing/grovers-algorthim-2-qubits.ipynb | UAAppComp/studyGroup |
Define the oracle for Grover's algorithm (take the search answer to be "10"): $\text{oracle} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1\end{pmatrix} = (Z \otimes \text{Id}) \, CZ$. Use different combinations of $Z \otimes \text{Id}$ to change where the search answer is. | oracle = np.kron(Z,Id).dot(CZ);
print(oracle) | [[ 1. 0. 0. 0.]
[ 0. 1. 0. 0.]
[ 0. 0. -1. 0.]
[ 0. 0. 0. 1.]]
| Apache-2.0 | lessons/misc/quantum-computing/grovers-algorthim-2-qubits.ipynb | UAAppComp/studyGroup |
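The cell above hard-codes the oracle for the target "10". As an added illustration (not part of the original lesson), the sketch below builds the oracle for any 2-qubit target from the `Z`, `Id`, and `CZ` matrices already defined; the `make_oracle` helper name is my own.

```python
def make_oracle(target):
    """Return a 4x4 oracle that flips the sign of the chosen basis state ('00'..'11')."""
    if target == '11':
        return CZ
    if target == '10':
        return np.kron(Z, Id).dot(CZ)
    if target == '01':
        return np.kron(Id, Z).dot(CZ)
    # '00': equals the |00> oracle up to an unobservable global phase of -1
    return np.kron(Z, Z).dot(CZ)

print(make_oracle('01'))
```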
Act the H gates on the input vector and apply the oracle | psi0 = np.kron(H,H).dot(psi);
psi1 = oracle.dot(psi0);
print(psi1) | [[ 0.5]
[ 0.5]
[-0.5]
[ 0.5]]
| Apache-2.0 | lessons/misc/quantum-computing/grovers-algorthim-2-qubits.ipynb | UAAppComp/studyGroup |
Remember that when we measure the result ("00", "01", "10", "11") is chosen randomly with probabilities given by the vector elements squared. | print(np.multiply(psi1,psi1)) | [[ 0.25]
[ 0.25]
[ 0.25]
[ 0.25]]
| Apache-2.0 | lessons/misc/quantum-computing/grovers-algorthim-2-qubits.ipynb | UAAppComp/studyGroup |
There is no difference between any of the probabilities. It's still just a 25% chance of getting the right answer. We need some more gates after the oracle, before measuring, to converge on the right answer. These gates do the operation $W = \frac{1}{2}\begin{pmatrix} -1 & 1 & 1 & 1 \\ 1 & -1 & 1 & 1 \\ 1 & 1 & -1 & 1 \\ 1 & 1 & 1 & -1\end{pmatrix} = (H \otimes H)(Z \otimes Z) CZ (H \otimes H)$. Notice that if the matrix W is multiplied by the vector after the oracle, $W \, \frac{1}{2}\begin{pmatrix} 1 \\ 1 \\ -1 \\ 1\end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0\end{pmatrix}$, every vector element decreases, except the correct answer element, which increases. This would also be true if we had chosen a different place for the search result originally. | W = np.kron(H,H).dot(np.kron(Z,Z)).dot(CZ).dot(np.kron(H,H));
print(W)
psif = W.dot(psi1);
print(np.multiply(psif,psif))
x = [0,1,2,3];
xb = [0.25,1.25,2.25,3.25];
labels=['00', '01', '10', '11'];
plt.axis([-0.5,3.5,-1.25,1.25]);
plt.xticks(x,labels);
plt.bar(x, np.ravel(psi0), 1/1.5, color="red");
plt.bar(xb, np.ravel(np.multiply(psi0,psi0)), 1/2., color="blue");
labels=['00', '01', '10', '11'];
plt.axis([-0.5,3.5,-1.25,1.25]);
plt.xticks(x,labels);
plt.bar(x, np.ravel(psi1), 1/1.5, color="red");
plt.bar(xb, np.ravel(np.multiply(psi1,psi1)), 1/2., color="blue");
labels=['00', '01', '10', '11'];
plt.axis([-0.5,3.5,-1.25,1.25]);
plt.xticks(x,labels);
plt.bar(x, np.ravel(psif), 1/1.5, color="red");
plt.bar(xb, np.ravel(np.multiply(psif,psif)), 1/2., color="blue"); | _____no_output_____ | Apache-2.0 | lessons/misc/quantum-computing/grovers-algorthim-2-qubits.ipynb | UAAppComp/studyGroup |
Install Earth Engine API Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`. The following script checks whether the geehydro package has been installed. If not, it installs geehydro, which automatically installs its dependencies, including earthengine-api and folium. | import subprocess
try:
import geehydro
except ImportError:
print('geehydro package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geehydro']) | _____no_output_____ | MIT | Visualization/image_color_ramp.ipynb | pberezina/earthengine-py-notebooks |
Import libraries | import ee
import folium
import geehydro | _____no_output_____ | MIT | Visualization/image_color_ramp.ipynb | pberezina/earthengine-py-notebooks |
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. | try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize() | _____no_output_____ | MIT | Visualization/image_color_ramp.ipynb | pberezina/earthengine-py-notebooks |
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`. | Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID') | _____no_output_____ | MIT | Visualization/image_color_ramp.ipynb | pberezina/earthengine-py-notebooks |
Add Earth Engine Python script | # Load SRTM Digital Elevation Model data.
image = ee.Image('CGIAR/SRTM90_V4');
# Define an SLD style of discrete intervals to apply to the image.
sld_intervals = \
'<RasterSymbolizer>' + \
'<ColorMap type="intervals" extended="false" >' + \
'<ColorMapEntry color="#0000ff" quantity="0" label="0"/>' + \
'<ColorMapEntry color="#00ff00" quantity="100" label="1-100" />' + \
'<ColorMapEntry color="#007f30" quantity="200" label="110-200" />' + \
'<ColorMapEntry color="#30b855" quantity="300" label="210-300" />' + \
'<ColorMapEntry color="#ff0000" quantity="400" label="310-400" />' + \
'<ColorMapEntry color="#ffff00" quantity="1000" label="410-1000" />' + \
'</ColorMap>' + \
'</RasterSymbolizer>';
# Define an sld style color ramp to apply to the image.
sld_ramp = \
'<RasterSymbolizer>' + \
'<ColorMap type="ramp" extended="false" >' + \
'<ColorMapEntry color="#0000ff" quantity="0" label="0"/>' + \
'<ColorMapEntry color="#00ff00" quantity="100" label="100" />' + \
'<ColorMapEntry color="#007f30" quantity="200" label="200" />' + \
'<ColorMapEntry color="#30b855" quantity="300" label="300" />' + \
'<ColorMapEntry color="#ff0000" quantity="400" label="400" />' + \
'<ColorMapEntry color="#ffff00" quantity="500" label="500" />' + \
'</ColorMap>' + \
'</RasterSymbolizer>';
# Add the image to the map using both the color ramp and interval schemes.
Map.setCenter(-76.8054, 42.0289, 8);
Map.addLayer(image.sldStyle(sld_intervals), {}, 'SLD intervals');
Map.addLayer(image.sldStyle(sld_ramp), {}, 'SLD ramp'); | _____no_output_____ | MIT | Visualization/image_color_ramp.ipynb | pberezina/earthengine-py-notebooks |
Display Earth Engine data layers | Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map | _____no_output_____ | MIT | Visualization/image_color_ramp.ipynb | pberezina/earthengine-py-notebooks |
Feature Engineering In the previous classes we covered the fundamental ideas of machine learning, but all of the examples assumed that we already had numerical data in a tidy ``[n_samples, n_features]`` format. In the real world, data rarely arrives that way, ready to use. With this in mind, one of the most important steps in practical machine learning is _feature engineering_: taking whatever information you have about your problem and turning it into numbers you can use to build your feature matrix. In this section we will look at two common examples of feature engineering _tasks_: how to represent _categorical data_ and how to represent _text_. Other, more advanced features, such as image processing, are left for the end of the course. Additionally, we will discuss _derived features_ for increasing model complexity and the _imputation_ of missing data. This process is sometimes known as _vectorization_, since it refers to converting arbitrary data into well-defined vectors. Categorical Features A common type of non-numerical data is _categorical_ data. For example, imagine you are exploring property price data, and along with numerical variables like price (_price_) and number of rooms (_rooms_), you also have neighborhood (_neighborhood_) information for each property. For example, the data might look like this: | data = [
{'price': 850000, 'rooms': 4, 'neighborhood': 'Queen Anne'},
{'price': 700000, 'rooms': 3, 'neighborhood': 'Fremont'},
{'price': 650000, 'rooms': 3, 'neighborhood': 'Wallingford'},
{'price': 600000, 'rooms': 2, 'neighborhood': 'Fremont'}
] | _____no_output_____ | MIT | 05.04-Feature-Engineering.ipynb | sebaspee/intro_machine_learning |
You might be tempted to encode this data directly with a numerical mapping: | {'Queen Anne': 1, 'Fremont': 2, 'Wallingford': 3}; | _____no_output_____ | MIT | 05.04-Feature-Engineering.ipynb | sebaspee/intro_machine_learning |
It turns out this is not a good idea. In Scikit-Learn, and in general, models assume that numerical data reflects algebraic quantities. Using a mapping like this implies, for example, that *Queen Anne < Fremont < Wallingford*, or even that *Wallingford - Queen Anne = Fremont*, which does not make much sense. A technique that works in these situations is _one-hot encoding_, which creates numerical columns indicating the presence or absence of the corresponding category, with a value of 1 or 0 respectively. When your data is a list of dictionaries, the ``DictVectorizer`` class takes care of the encoding for you: | from sklearn.feature_extraction import DictVectorizer
vec = DictVectorizer(sparse=False, dtype=int)
vec.fit_transform(data) | _____no_output_____ | MIT | 05.04-Feature-Engineering.ipynb | sebaspee/intro_machine_learning |
Note that the ``neighborhood`` feature has been expanded into three separate columns, representing the three neighborhood labels, and that each row has a 1 in the column associated with its neighborhood. With the data encoded this way, you can proceed to fit a model in Scikit-Learn. To see the meaning of each column you can do the following: | vec.get_feature_names() | _____no_output_____ | MIT | 05.04-Feature-Engineering.ipynb | sebaspee/intro_machine_learning |
There is a clear downside to this approach: if the categories have many possible values, the dataset can grow too large. However, since the encoded data consists mostly of zeros, a sparse matrix can be an efficient solution: | vec = DictVectorizer(sparse=True, dtype=int)
vec.fit_transform(data) | _____no_output_____ | MIT | 05.04-Feature-Engineering.ipynb | sebaspee/intro_machine_learning |
Several (but not all) of the estimators in Scikit-Learn accept sparse inputs. ``sklearn.preprocessing.OneHotEncoder`` and ``sklearn.feature_extraction.FeatureHasher`` are two additional tools for working with this kind of feature (a short illustrative sketch is added after the next cell). Text Another common need is to convert text into a series of numbers that represent its content. For example, much of the automated analysis of content generated on social networks depends in some way on encoding text as numbers. One of the simplest methods is _word counts_: you take each piece of text, count how many times each word appears in it, and put the results in a table. For example, consider the following three phrases: | sample = ['problem of evil',
'evil queen',
'horizon problem'] | _____no_output_____ | MIT | 05.04-Feature-Engineering.ipynb | sebaspee/intro_machine_learning |
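As an added illustration of the one-hot tools mentioned above (not part of the original lesson), the same encoding can also be obtained with pandas directly:

```python
import pandas as pd

df = pd.DataFrame(data)            # `data` is the list of dictionaries defined earlier
pd.get_dummies(df, columns=['neighborhood'])
```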
To vectorize this data we would build a column for the words "problem," "evil," "horizon," etc. Doing this by hand is possible, but we can save ourselves the tedium by using Scikit-Learn's ``CountVectorizer``: | from sklearn.feature_extraction.text import CountVectorizer
vec = CountVectorizer()
X = vec.fit_transform(sample)
X | _____no_output_____ | MIT | 05.04-Feature-Engineering.ipynb | sebaspee/intro_machine_learning |
The result is a sparse matrix containing how many times each word appears in the texts. To inspect it easily we can convert it into a ``DataFrame``: | import pandas as pd
pd.DataFrame(X.toarray(), columns=vec.get_feature_names()) | _____no_output_____ | MIT | 05.04-Feature-Engineering.ipynb | sebaspee/intro_machine_learning |
Something is still missing. This approach can run into problems: raw word counts can make some features weigh more than others simply because of how frequently we use those words, and this can be sub-optimal for some classification algorithms. One way to account for this is to use the _term frequency-inverse document frequency_ (_TF-IDF_) model, which weights words by how frequently they appear in the documents, but also by how unique they are to each document. The syntax for applying TF-IDF is similar to what we have seen before: | from sklearn.feature_extraction.text import TfidfVectorizer
vec = TfidfVectorizer()
X = vec.fit_transform(sample)
pd.DataFrame(X.toarray(), columns=vec.get_feature_names()) | _____no_output_____ | MIT | 05.04-Feature-Engineering.ipynb | sebaspee/intro_machine_learning |
We will look at this in more detail in the Naive Bayes class. Derived Features Another useful type of feature is one derived mathematically from other features in the input data. We saw an example in the Hyperparameters class when we built polynomial features from the data. We saw that a linear regression can be turned into a polynomial one without using a different model, but by transforming the input data. This is known as _basis function regression_, and we will explore it in the Linear Regression class. For example, it is clear that the following data cannot be described by a straight line: | %matplotlib inline
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
x = np.array([1, 2, 3, 4, 5])
y = np.array([4, 2, 1, 3, 7])
plt.scatter(x, y); | _____no_output_____ | MIT | 05.04-Feature-Engineering.ipynb | sebaspee/intro_machine_learning |
If we fit a straight line to the data using ``LinearRegression`` we will obtain the optimal result: | from sklearn.linear_model import LinearRegression
X = x[:, np.newaxis]
model = LinearRegression().fit(X, y)
yfit = model.predict(X)
plt.scatter(x, y)
plt.plot(x, yfit); | _____no_output_____ | MIT | 05.04-Feature-Engineering.ipynb | sebaspee/intro_machine_learning |
It is optimal, but it is also clear that we need a more sophisticated model to describe the relationship between $x$ and $y$. One way to achieve this is by transforming the data, adding extra columns or features that give the model more flexibility. For example, we can add polynomial features as follows: | from sklearn.preprocessing import PolynomialFeatures
poly = PolynomialFeatures(degree=3, include_bias=False)
X2 = poly.fit_transform(X)
print(X2) | [[ 1. 1. 1.]
[ 2. 4. 8.]
[ 3. 9. 27.]
[ 4. 16. 64.]
[ 5. 25. 125.]]
| MIT | 05.04-Feature-Engineering.ipynb | sebaspee/intro_machine_learning |
This _derived_ feature matrix has one column representing $x$, a second column representing $x^2$, and a third representing $x^3$. Computing a linear regression on this input gives a much closer fit to our data: | model = LinearRegression().fit(X2, y)
yfit = model.predict(X2)
plt.scatter(x, y)
plt.plot(x, yfit); | _____no_output_____ | MIT | 05.04-Feature-Engineering.ipynb | sebaspee/intro_machine_learning |
The idea of improving a model not by changing the model itself but by transforming its inputs is fundamental to many of the most powerful machine learning techniques. We will explore this idea further in the Linear Regression class. This path is motivating and can be generalized with the techniques known as _kernel methods_, which we will explore in the _Support Vector Machines_ (SVM) class. Imputation of Missing Data A common need in feature engineering is handling missing data. In previous classes you may have seen the value `NaN` in a `DataFrame`, used to mark values that do not exist. For example, we might have a dataset that looks like this: | from numpy import nan
X = np.array([[ nan, 0, 3 ],
[ 3, 7, 9 ],
[ 3, 5, 2 ],
[ 4, nan, 6 ],
[ 8, 8, 1 ]])
y = np.array([14, 16, -1, 8, -5]) | _____no_output_____ | MIT | 05.04-Feature-Engineering.ipynb | sebaspee/intro_machine_learning |
Before applying a model to this data we need to replace the missing values with some appropriate fill value. This is known as _imputation_ of missing values, and the strategies for it range from the simplest (such as filling with the mean of each column) to the most sophisticated (such as completing the matrix with a model that is robust to missing entries). The latter approaches tend to be application-specific, so we will not cover them in this course. The `Imputer` class in Scikit-Learn provides a baseline imputation approach that computes the mean, the median, or the most frequent value: | from sklearn.preprocessing import Imputer
imp = Imputer(strategy='mean')
X2 = imp.fit_transform(X)
X2 | _____no_output_____ | MIT | 05.04-Feature-Engineering.ipynb | sebaspee/intro_machine_learning |
As we can see, when the imputer is applied the two missing values are replaced with the mean of the values present in their respective columns. Now that we have a matrix with no missing values, we can use it with a model instance, in this case a linear regression: | model = LinearRegression().fit(X2, y)
model.predict(X2) | _____no_output_____ | MIT | 05.04-Feature-Engineering.ipynb | sebaspee/intro_machine_learning |
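Note (added): in newer versions of scikit-learn, `sklearn.preprocessing.Imputer` was removed in favor of `sklearn.impute.SimpleImputer`. A roughly equivalent sketch:

```python
from sklearn.impute import SimpleImputer

imp = SimpleImputer(strategy='mean')   # also supports 'median' and 'most_frequent'
X2 = imp.fit_transform(X)
X2
```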
Processing Pipeline (_Pipeline_) Given the examples we have seen, it can get tedious to do each of these transformations by hand. We will often want to automate the processing chain for a model. Imagine a sequence like the following: 1. Impute values using the mean. 2. Transform the features by including a quadratic term. 3. Fit a linear regression. To chain these stages Scikit-Learn provides a ``Pipeline`` class, which is used as follows: | from sklearn.pipeline import make_pipeline
model = make_pipeline(Imputer(strategy='mean'),
PolynomialFeatures(degree=2),
LinearRegression()) | _____no_output_____ | MIT | 05.04-Feature-Engineering.ipynb | sebaspee/intro_machine_learning |
This chain, or _pipeline_, looks and acts like a standard Scikit-Learn object, so we can use it with everything we have seen so far that follows the Scikit-Learn usage recipe. | model.fit(X, y) # X with missing values
print(y)
print(model.predict(X)) | [14 16 -1 8 -5]
[14. 16. -1. 8. -5.]
| MIT | 05.04-Feature-Engineering.ipynb | sebaspee/intro_machine_learning |
Pre-training VGG16 for Distillation | import torch
import torch.nn as nn
from src.data.dataset import get_dataloader
import torchvision.transforms as transforms
import numpy as np
import matplotlib.pyplot as plt
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(DEVICE)
SEED = 0
BATCH_SIZE = 32
LR = 5e-4
NUM_EPOCHES = 25
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True | _____no_output_____ | MIT | VGG16_CIFAR10.ipynb | UdbhavPrasad072300/CPS803_Final_Project |
Preprocessing | transform = transforms.Compose([
transforms.RandomHorizontalFlip(),
#transforms.RandomVerticalFlip(),
transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))
])
train_loader, val_loader, test_loader = get_dataloader("./data/CIFAR10/", BATCH_SIZE) | Files already downloaded and verified
Files already downloaded and verified
| MIT | VGG16_CIFAR10.ipynb | UdbhavPrasad072300/CPS803_Final_Project |
Model | from src.models.model import VGG16_classifier
classes = 10
hidden_size = 512
dropout = 0.3
model = VGG16_classifier(classes, hidden_size, preprocess_flag=False, dropout=dropout).to(DEVICE)
model
for img, label in train_loader:
img = img.to(DEVICE)
label = label.to(DEVICE)
print("Input Image Dimensions: {}".format(img.size()))
print("Label Dimensions: {}".format(label.size()))
print("-"*100)
out = model(img)
print("Output Dimensions: {}".format(out.size()))
break | Input Image Dimensions: torch.Size([32, 3, 32, 32])
Label Dimensions: torch.Size([32])
----------------------------------------------------------------------------------------------------
| MIT | VGG16_CIFAR10.ipynb | UdbhavPrasad072300/CPS803_Final_Project |
Training | criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(params=model.parameters(), lr=LR)
loss_hist = {"train accuracy": [], "train loss": [], "val accuracy": []}
for epoch in range(1, NUM_EPOCHES+1):
model.train()
epoch_train_loss = 0
y_true_train = []
y_pred_train = []
for batch_idx, (img, labels) in enumerate(train_loader):
img = img.to(DEVICE)
labels = labels.to(DEVICE)
preds = model(img)
loss = criterion(preds, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
y_pred_train.extend(preds.detach().argmax(dim=-1).tolist())
y_true_train.extend(labels.detach().tolist())
epoch_train_loss += loss.item()
with torch.no_grad():
model.eval()
y_true_test = []
y_pred_test = []
for batch_idx, (img, labels) in enumerate(val_loader):
img = img.to(DEVICE)
label = label.to(DEVICE)
preds = model(img)
y_pred_test.extend(preds.detach().argmax(dim=-1).tolist())
y_true_test.extend(labels.detach().tolist())
test_total_correct = len([True for x, y in zip(y_pred_test, y_true_test) if x==y])
test_total = len(y_pred_test)
test_accuracy = test_total_correct * 100 / test_total
loss_hist["train loss"].append(epoch_train_loss)
total_correct = len([True for x, y in zip(y_pred_train, y_true_train) if x==y])
total = len(y_pred_train)
accuracy = total_correct * 100 / total
loss_hist["train accuracy"].append(accuracy)
loss_hist["val accuracy"].append(test_accuracy)
print("-------------------------------------------------")
print("Epoch: {} Train mean loss: {:.8f}".format(epoch, epoch_train_loss))
print(" Train Accuracy%: ", accuracy, "==", total_correct, "/", total)
print(" Validation Accuracy%: ", test_accuracy, "==", test_total_correct, "/", test_total)
print("-------------------------------------------------")
plt.plot(loss_hist["train accuracy"])
plt.plot(loss_hist["val accuracy"])
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.show()
plt.plot(loss_hist["train loss"])
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.show() | _____no_output_____ | MIT | VGG16_CIFAR10.ipynb | UdbhavPrasad072300/CPS803_Final_Project |
Testing | with torch.no_grad():
model.eval()
y_true_test = []
y_pred_test = []
for batch_idx, (img, labels) in enumerate(test_loader):
img = img.to(DEVICE)
label = label.to(DEVICE)
preds = model(img)
y_pred_test.extend(preds.detach().argmax(dim=-1).tolist())
y_true_test.extend(labels.detach().tolist())
total_correct = len([True for x, y in zip(y_pred_test, y_true_test) if x==y])
total = len(y_pred_test)
accuracy = total_correct * 100 / total
print("Test Accuracy%: ", accuracy, "==", total_correct, "/", total) | Test Accuracy%: 81.04 == 4052 / 5000
| MIT | VGG16_CIFAR10.ipynb | UdbhavPrasad072300/CPS803_Final_Project |
Saving Model Weights | torch.save(model.state_dict(), "./trained_models/vgg16_cifar10.pt") | _____no_output_____ | MIT | VGG16_CIFAR10.ipynb | UdbhavPrasad072300/CPS803_Final_Project |
HELLO WORLD | with Error
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
t = np.arange(0.0, 2.0, 0.01)
s = 1 + np.sin(2 * np.pi * t)
fig, ax = plt.subplots()
ax.plot(t, s)
ax.set(xlabel='time (s)', ylabel='voltage (mV)',
title='About as simple as it gets, folks')
ax.grid()
fig.savefig('test.png')
plt.show() | _____no_output_____ | MIT | src/test/datascience/notebook/withOutputForTrust.ipynb | jakebailey/vscode-jupyter |
_Mini Program - Working with SQLite DB using Python_ Objective - 1. This program gives an idea of how to connect to a SQLite DB using Python and perform data manipulation 2. Tables are created below in 2 different ways to help you understand the flexibility of the language Step 1 - Import required libraries In this program we make use of 3 libraries 1. sqlite3 - This module provides a SQL interface and helps in performing DB operations on a SQLite database 2. pandas - This module provides high-performance, easy-to-use data manipulation and data analysis functionality 3. os - This module provides functions to interact with the operating system easily | #Importing the required modules
import sqlite3
import pandas as pd
import os | _____no_output_____ | Apache-2.0 | SQLLiteDBConnection/workingWithSQLLiteDB.ipynb | Snigdha171/PythonMiniProgramSeries |
Step 2 - Creating a function to drop the table Function helps to re-create a reusable component that can be used conviniently and easily in other part of the code In Line 1 - We state the function name and specify the parameter being passed. In this case, the parameter is the table name In Line 2 - We write the sql query to be executed In Line 3 - We execute the query using the cursor object | #Creating a function to drop the table if it exists
def dropTbl(tablename):
dropTblStmt = "DROP TABLE IF EXISTS " + tablename
c.execute(dropTblStmt) | _____no_output_____ | Apache-2.0 | SQLLiteDBConnection/workingWithSQLLiteDB.ipynb | Snigdha171/PythonMiniProgramSeries |
Step 3 - We create the database in which our table will reside In Line 1 - We are removing the already existing database file In Line 2 - We use connect function from the sqlite3 module to create a database studentGrades.db and establish a connection In Line 3 - We create a context of the database connection. This help to run all the database queries | #Removing the database file
os.remove('studentGrades.db')
#Creating a new database - studentGrades.db
conn = sqlite3.connect("studentGrades.db")
c = conn.cursor() | _____no_output_____ | Apache-2.0 | SQLLiteDBConnection/workingWithSQLLiteDB.ipynb | Snigdha171/PythonMiniProgramSeries |
Step 4 - We create tables in the SQLite DB using data defined in the CSV files This is the first method in which you can create a table. You can use the to_sql function directly to read a dataframe and dump all its contents into the table In Line 1 - We are making use of the dropTbl function created above to drop the table In Line 2 - We are creating a dataframe from the data read from the csv In Line 3 - We use the to_sql function to push the data into the table. The first row of each file becomes the column names of the table. We repeat the above steps for all 3 files to create 3 tables - STUDENT, GRADES and SUBJECTS | #Reading data from csv file - student details, grades and subject
dropTbl('STUDENT')
student_details = pd.read_csv("Datafiles/studentDetails.csv")
student_details.to_sql('STUDENT',conn,index = False)
dropTbl('GRADES')
student_grades = pd.read_csv('Datafiles/studentGrades.csv')
student_grades.to_sql('GRADES',conn,index = False)
dropTbl('SUBJECTS')
subjects = pd.read_csv("Datafiles/subjects.csv")
subjects.to_sql('SUBJECTS',conn,index = False) | _____no_output_____ | Apache-2.0 | SQLLiteDBConnection/workingWithSQLLiteDB.ipynb | Snigdha171/PythonMiniProgramSeries |
Step 5 - We create a master table STUDENT_GRADE_MASTER where we can collate the data from the individual tables by performing join operations In Line 1 - We are making use of the dropTbl function created above to drop the table In Line 2 - We are writing the sql query for table creation In Line 3 - We are using the cursor created above to execute the sql statement In Line 4 - We are using the second method of inserting data into the table: we write a query to insert the data after joining the data from all the tables In Line 5 - We are using the cursor created above to execute the sql statement In Line 6 - We are doing a commit operation. Since INSERT is a DML operation, we have to commit to register it in the database | #Creating a table to store student master data
dropTbl('STUDENT_GRADE_MASTER')
createTblStmt = '''CREATE TABLE STUDENT_GRADE_MASTER
([Roll_number] INTEGER,
[Student_Name] TEXT,
[Stream] TEXT,
[Subject] TEXT,
[Marks] INTEGER
)'''
c.execute(createTblStmt)
#Inserting data into the master table by joining the tables mentioned above
queryMaster = '''INSERT INTO STUDENT_GRADE_MASTER(Roll_number,Student_Name,Stream,Subject,Marks)
SELECT g.roll_number, s.student_name, stream, sub.subject, marks from GRADES g
LEFT OUTER JOIN STUDENT s on g.roll_number = s.roll_number
LEFT OUTER JOIN SUBJECTS sub on g.subject_code = sub.subject_code'''
c.execute(queryMaster)
c.execute("COMMIT") | _____no_output_____ | Apache-2.0 | SQLLiteDBConnection/workingWithSQLLiteDB.ipynb | Snigdha171/PythonMiniProgramSeries |
Step 6 - We can perform data fetch like we do in sqls using this sqlite3 module In Line 1 - We are writing a query to find the number of records in the master table In Line 2 - We are executing the above created query In Line 3 - fetchall function is used to get the result returned by the query. The result will be in the form of a list of tuples In Line 4 - We are writing another query to find the maximum marks recorded for each subject In Line 5 - We are executing the above created query In Line 6 - fetchall function is used to get the result returned by the query. The result will be in the form of a list of tuples In Line 7 - We are writing another query to find the percentage of marks obtained by each student in the class In Line 8 - We are executing the above created query In Line 9 - fetchall function is used to get the result returned by the query. The result will be in the form of a list of tuples | #Finding the key data from the master table
#1. Find the number of records in the master table
query_count = '''SELECT COUNT(*) FROM STUDENT_GRADE_MASTER'''
c.execute(query_count)
number_of_records = c.fetchall()
print(number_of_records)
#2. Maximum marks for each subject
query_max_marks = '''SELECT Subject,max(Marks) as 'Max_Marks' from STUDENT_GRADE_MASTER GROUP BY Subject'''
c.execute(query_max_marks)
max_marks_data = c.fetchall()
print(max_marks_data)
#3. Percenatge of marks scored by each student
query_percentage = '''SELECT Student_Name, avg(Marks) as 'Percentage' from STUDENT_GRADE_MASTER GROUP BY Student_Name'''
c.execute(query_percentage)
percentage_data = c.fetchall()
print(percentage_data)
| [(20,)]
[('C', 97), ('C++', 95), ('Environmental studies', 92), ('Java', 96), ('Maths', 98)]
[('Abhishek', 94.2), ('Anand', 85.2), ('Sourabh', 89.0), ('Vivek', 84.8)]
| Apache-2.0 | SQLLiteDBConnection/workingWithSQLLiteDB.ipynb | Snigdha171/PythonMiniProgramSeries |
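As an added alternative to `fetchall`, the same queries can be pulled straight into pandas DataFrames while the connection is still open:

```python
import pandas as pd

df = pd.read_sql_query("SELECT * FROM STUDENT_GRADE_MASTER", conn)
print(df.head())
```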
Step 7 - We are closing the database connection It is always a good practice to close the database connection after all the operations are completed | #Closing the connection
conn.close() | _____no_output_____ | Apache-2.0 | SQLLiteDBConnection/workingWithSQLLiteDB.ipynb | Snigdha171/PythonMiniProgramSeries |
**PUNTO 2**
premio2 = "una pasadía a los termales de San Vicente incluyendo almuerzo"
premio3 = "Viaje todo incluido para dos personas a Santa Marta"
premio4 = "Pasadía al desierto de Tatacoa (Sin incluír alimentación)"
rosada = premio1
verde = premio2
azul = premio3
gris = premio4
roja = "No hay premio"
cliente = input("por favor digite el nombre del concursante ")
balota = input("Digite el color de la balota ")
valorVariable = int(input("Digite un valor variable "))
antiguedad = int(input("Digite los años de antiguedad del cliente "))
referidos = int(input("Digite los referidos del cliente "))
liderazgo = input("¿El cliente tiene liderazgo en los programas de cooperación de viajes internos? ")
if balota == "rosada":
print("La empresa VIVAFLY se complace en anunciar que La participante ",cliente," ganó un ", rosada, " en nuestro sorteo de viajes de AMOR y AMISTAD")
valorVariable1 = valorVariable * 0.15
if valorVariable < 120000:
print("El cliente ",cliente," también recibirá dos boletas de cine 4D con un combo de palomitas")
else :
print("El cliente ", cliente, " también recibirá un bono de $",valorVariable1)
elif balota == "verde":
print("La empresa VIVAFLY se complace en anunciar que La participante ",cliente," ganó ", verde, " en nuestro sorteo de viajes de AMOR y AMISTAD")
valorVariable2 = valorVariable * 0.20
if valorVariable < 120000:
print("El cliente ",cliente," también recibirá dos boletas de cine 4D con un combo de palomitas")
else :
print("El cliente ", cliente, " también recibirá un bono de $",valorVariable2)
elif balota == "azul":
print("La empresa VIVAFLY se complace en anunciar que La participante ",cliente," ganó ", azul, " en nuestro sorteo de viajes de AMOR y AMISTAD")
valorVariable3 = valorVariable * 0.05
if valorVariable < 120000:
print("El cliente ",cliente," también recibirá dos boletas de cine 4D con un combo de palomitas")
else :
print("El cliente ", cliente, " también recibirá un bono de $",valorVariable3)
elif balota == "gris":
print("La empresa VIVAFLY se complace en anunciar que La participante ",cliente," ganó ", gris, " en nuestro sorteo de viajes de AMOR y AMISTAD")
valorVariable4 = valorVariable * 0.20
if valorVariable < 120000:
print("El cliente ",cliente," también recibirá dos boletas de cine 4D con un combo de palomitas")
else :
print("El cliente ", cliente, " también recibirá un bono de $",valorVariable4)
else :
print("La empresa VIVAFLY se complace en anunciar que La participante ",cliente," ganó $120.000")
#si se cumple los 3:
if antiguedad > 10 and referidos > 10 and liderazgo == "si" :
puntos = valorVariable * 0.50
print("Adicionalmente:")
if puntos > 600000 :
puntos = 500 * 0.20
print("Se le otorga ",puntos, " puntos para redimir en un premio a futuro")
else :
puntos = 250 * 0.20
print("Se le otorga ", puntos, " puntos para redimir en un premio a futuro")
#Si se cumple solo dos requisitos:
elif antiguedad > 10 and referidos > 10 :
puntos = valorVariable * 0.50
print("Adicionalmente: ")
if puntos > 250000 :
puntos = 200 * 0.20
print("Se le otorga ", puntos, " puntos para redimir en un premio a futuro")
else :
puntos = 50 * 0.20
print("Se le otorga ", puntos, " puntos para redimir en un premio a futuro")
elif antiguedad > 10 and liderazgo == "si" :
puntos = valorVariable * 0.50
print("Adicionalmente: ")
if puntos > 250000 :
puntos = 200 * 0.20
print("Se le otorga ", puntos, " puntos para redimir en un premio a futuro")
else :
puntos = 50 * 0.20
print("Se le otorga ", puntos, "puntos para redimir en un premio a futuro")
elif referidos > 10 and liderazgo =="si" :
puntos = valorVariable * 0.30
print("Adicionalmente: ")
if puntos > 250000 :
puntos = 200 * 0.20
print("Se le otorga 200 puntos para redimir en un premio a futuro")
else :
puntos = 50 * 0.20
print("Se le otorga 50 puntos para redimir en un premio a futuro")
#Solo si se cumple uno
else :
puntos = valorVariable * 0.10
puntos = puntos * 0.10
print("Adicionalmente se se le otorga", puntos, "puntos para redimir en un premio a futuro")
| por favor digite el nombre del concursante Angie
Digite el color de la balota rosada
Digite un valor variable 2000000
Digite los años de antiguedad del cliente 14
Digite los referidos del cliente 1
¿El cliente tiene liderazgo en los programas de cooperación de viajes internos? si
La empresa VIVAFLY se complace en anunciar que La participante Angie ganó un Viaje todo incluído para dos personas a San Andrés en nuestro sorteo de viajes de AMOR y AMISTAD
El cliente Angie también recibirá un bono de $ 300000.0
Adicionalmente:
Se le otorga 40.0 puntos para redimir en un premio a futuro
| MIT | TALLER1.ipynb | AngieCat26/MujeresDigitales |
Tutorial This tutorial will introduce you to *fifa_preprocessing*'s functionality! In general, the following functions will allow you to preprocess your data to be able to perform machine learning or statistical data analysis by reformatting, casting or deleting certain values. The data used in these examples comes from https://www.kaggle.com/karangadiya/fifa19, a webpage this package was inspired by. The module's functions work best with this data set; however, they will work with any data structured in a similar manner. Prerequisites First, import the fifa_preprocessing, pandas and math modules: | import fifa_preprocessing as fp
import pandas as pd
import math | _____no_output_____ | MIT | tutorial/tutorial.ipynb | piotrfratczak/fifa_preprocessing |
Load your data: | data = pd.read_csv('data.csv')
data | _____no_output_____ | MIT | tutorial/tutorial.ipynb | piotrfratczak/fifa_preprocessing |
Exclude goalkeepersBefore any preprocessing, the data contains all the players. | data[['Name', 'Position']] | _____no_output_____ | MIT | tutorial/tutorial.ipynb | piotrfratczak/fifa_preprocessing |
This command will exclude goalkeepers from your data set (i.e. delete all the rows where column 'Position' is equal to 'GK'): | data = fp.exclude_goalkeepers(data) | _____no_output_____ | MIT | tutorial/tutorial.ipynb | piotrfratczak/fifa_preprocessing |
As you may notice, the row number 3 was deleted. | data[['Name', 'Position']] | _____no_output_____ | MIT | tutorial/tutorial.ipynb | piotrfratczak/fifa_preprocessing |
Format currenciesTo remove unnecessary characters form a monetary value use: | money = '€23.4M'
fp.money_format(money) | _____no_output_____ | MIT | tutorial/tutorial.ipynb | piotrfratczak/fifa_preprocessing |
The value will be expressed in thousands of euros: | money = '€7K'
fp.money_format(money) | _____no_output_____ | MIT | tutorial/tutorial.ipynb | piotrfratczak/fifa_preprocessing |
Format players' ratingIn FIFA players get a ranking on they skills on the pitch. The ranking is represented as a sum of two integers.The following function lets you take in a string containing two numbers separated by a '+' and get the actual sum: | rating = '81+3'
fp.rating_format(rating) | _____no_output_____ | MIT | tutorial/tutorial.ipynb | piotrfratczak/fifa_preprocessing |
Format players' work rateThe next function takes in a qualitative parameter that could be expressed as a quantitive value.If you have a data set where one category is expressed as 'High', 'Medium' or 'Low', this function will assign numbers to these values (2, 1 and 0 respectively): | fp.work_format('High')
fp.work_format('Medium')
fp.work_format('Low') | _____no_output_____ | MIT | tutorial/tutorial.ipynb | piotrfratczak/fifa_preprocessing |
In fact, the function returns 0 in every case where the passed in parameter id different than 'High' and 'Medium': | fp.work_format('Mediocre') | _____no_output_____ | MIT | tutorial/tutorial.ipynb | piotrfratczak/fifa_preprocessing |
Cast to intThis simple function casts a float to int, but also adds extra flexibility and returns 0 when it encounters a NaN (Not a Number): | fp.to_int(3.24)
import numpy
nan = numpy.nan
fp.to_int(nan) | _____no_output_____ | MIT | tutorial/tutorial.ipynb | piotrfratczak/fifa_preprocessing |
Apply format of choiceThis generic function lets you choose what format to apply to every value in the columns of the data frame you specify. | data[['Name', 'Jersey Number', 'Skill Moves', 'Weak Foot']] | _____no_output_____ | MIT | tutorial/tutorial.ipynb | piotrfratczak/fifa_preprocessing |
By format, is meant a function that operates on the values in the specified columns: | columns = ['Jersey Number', 'Skill Moves', 'Weak Foot']
format_fun = fp.to_int
data = fp.apply_format(data, columns, format_fun)
data[['Name', 'Jersey Number', 'Skill Moves', 'Weak Foot']] | _____no_output_____ | MIT | tutorial/tutorial.ipynb | piotrfratczak/fifa_preprocessing |
Dummy variablesIf we intend to build machine learning models to explore our data, we usually are not able to extract any information from qualitative data. Here 'Club' and 'Preferred Foot' are categories that could bring interesting information. To be able to use it in our machine learning algorithms we can get dummy variables. | data[['Name', 'Preferred Foot']] | _____no_output_____ | MIT | tutorial/tutorial.ipynb | piotrfratczak/fifa_preprocessing |
If we choose 'Preferred Foot', new columns will be aded, their titles will be the same as the values in 'Preferred Foot' column: 'Left' and 'Right'. So now instead of seeing 'Left' in the column 'Preferred Foot' we will see 1 in 'Left' column (and 0 in 'Right'). | data = fp.to_dummy(data, ['Preferred Foot'])
data[['Name', 'Left', 'Right']] | _____no_output_____ | MIT | tutorial/tutorial.ipynb | piotrfratczak/fifa_preprocessing |
Learn more about [dummy variables](https://en.wikiversity.org/wiki/Dummy_variable_(statistics)). The data frame will no longer contain the columns we transformed: | 'Preferred Foot' in data | _____no_output_____ | MIT | tutorial/tutorial.ipynb | piotrfratczak/fifa_preprocessing |
We can get dummy variables for multiple columns at once. | data[['Name', 'Club', 'Position']]
data = fp.to_dummy(data, ['Club', 'Nationality'])
data[['Name', 'Paris Saint-Germain', 'Manchester City', 'Brazil', 'Portugal']] | _____no_output_____ | MIT | tutorial/tutorial.ipynb | piotrfratczak/fifa_preprocessing |
Split work rate columnIn FIFA the players' work rate is saved in a special way, two qualiative values are split with a slash: | data[['Name', 'Work Rate']] | _____no_output_____ | MIT | tutorial/tutorial.ipynb | piotrfratczak/fifa_preprocessing |
This next function allows you to split column 'Work Rate' into 'Defensive Work Rate' and 'Offensive Work Rate': | data = fp.split_work_rate(data)
data[['Name', 'Defensive Work Rate', 'Offensive Work Rate']] | _____no_output_____ | MIT | tutorial/tutorial.ipynb | piotrfratczak/fifa_preprocessing |
Default preprocessing To perform all the basic preprocessing (optimized for the FIFA 19 data set) on your data, simply go: | data = pd.read_csv('data.csv')
fp.preprocess(data) | _____no_output_____ | MIT | tutorial/tutorial.ipynb | piotrfratczak/fifa_preprocessing |
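As a brief added illustration of where the preprocessed data can go next (the target column and model choice below are assumptions for illustration, not part of the package):

```python
# Hypothetical downstream step: fit a simple model on the preprocessed features.
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

processed = fp.preprocess(pd.read_csv('data.csv'))
y = processed['Overall']                                   # assumed target column from the FIFA 19 data
X = processed.drop(columns=['Overall']).select_dtypes('number')
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print(model.score(X_test, y_test))
```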
Bias Goals In this notebook, you're going to explore a way to identify some biases of a GAN using a classifier, in a way that's well-suited for attempting to make a model independent of an input. Note that not all biases are as obvious as the ones you will see here. Learning Objectives 1. Be able to distinguish a few different kinds of bias in terms of demographic parity, equality of odds, and equality of opportunity (as proposed [here](http://m-mitchell.com/papers/Adversarial_Bias_Mitigation.pdf)). 2. Be able to use a classifier to try and detect biases in a GAN by analyzing the generator's implicit associations. Challenges One major challenge in assessing bias in GANs is that you still want your generator to be able to generate examples of different values of a protected class—the class you would like to mitigate bias against. While a classifier can be optimized to have its output be independent of a protected class, a generator which generates faces should be able to generate examples of various protected class values. When you generate examples with various values of a protected class, you don't want those examples to correspond to any properties that aren't strictly a function of that protected class. This is made especially difficult since many protected classes (e.g. gender or ethnicity) are social constructs, and what properties count as "a function of that protected class" will vary depending on who you ask. It's certainly a hard balance to strike. Moreover, a protected class is rarely used to condition a GAN explicitly, so it is often necessary to resort to somewhat post-hoc methods (e.g. using a classifier trained on relevant features, which might be biased itself). In this assignment, you will learn one approach to detect potential bias, by analyzing correlations in feature classifications on the generated images. Getting Started As you have done previously, you will start by importing some useful libraries and defining a visualization function for your images. You will also use the same generator and basic classifier from previous weeks. Packages and Visualization | import torch
import numpy as np
from torch import nn
from tqdm.auto import tqdm
from torchvision import transforms
from torchvision.utils import make_grid
from torchvision.datasets import CelebA
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt
torch.manual_seed(0) # Set for our testing purposes, please do not change!
def show_tensor_images(image_tensor, num_images=16, size=(3, 64, 64), nrow=3):
'''
Function for visualizing images: Given a tensor of images, number of images,
size per image, and images per row, plots and prints the images in a uniform grid.
'''
image_tensor = (image_tensor + 1) / 2
image_unflat = image_tensor.detach().cpu()
image_grid = make_grid(image_unflat[:num_images], nrow=nrow)
plt.imshow(image_grid.permute(1, 2, 0).squeeze())
plt.show() | _____no_output_____ | Apache-2.0 | 12-Bias.ipynb | pedro-abundio-wang/GANs |
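To make the three kinds of bias named in the learning objectives concrete, here is a small illustrative sketch for a binary classifier and a binary protected class. It is not part of the original assignment, and the inputs (`y_true`, `y_pred`, `z`) are hypothetical numpy arrays of 0s and 1s.
def fairness_gaps(y_true, y_pred, z):
    # Demographic parity: P(pred = 1 | z = 1) should equal P(pred = 1 | z = 0)
    demographic_parity_gap = y_pred[z == 1].mean() - y_pred[z == 0].mean()
    # Equality of opportunity: true positive rates should match across the two groups
    tpr_gap = y_pred[(z == 1) & (y_true == 1)].mean() - y_pred[(z == 0) & (y_true == 1)].mean()
    # Equality of odds: true positive AND false positive rates should both match
    fpr_gap = y_pred[(z == 1) & (y_true == 0)].mean() - y_pred[(z == 0) & (y_true == 0)].mean()
    return {'demographic_parity': demographic_parity_gap,
            'equality_of_opportunity': tpr_gap,
            'equality_of_odds': (tpr_gap, fpr_gap)}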
Generator and Noise | class Generator(nn.Module):
'''
Generator Class
Values:
z_dim: the dimension of the noise vector, a scalar
im_chan: the number of channels in the images, fitted for the dataset used, a scalar
(CelebA is rgb, so 3 is your default)
hidden_dim: the inner dimension, a scalar
'''
def __init__(self, z_dim=10, im_chan=3, hidden_dim=64):
super(Generator, self).__init__()
self.z_dim = z_dim
# Build the neural network
self.gen = nn.Sequential(
self.make_gen_block(z_dim, hidden_dim * 8),
self.make_gen_block(hidden_dim * 8, hidden_dim * 4),
self.make_gen_block(hidden_dim * 4, hidden_dim * 2),
self.make_gen_block(hidden_dim * 2, hidden_dim),
self.make_gen_block(hidden_dim, im_chan, kernel_size=4, final_layer=True),
)
def make_gen_block(self, input_channels, output_channels, kernel_size=3, stride=2, final_layer=False):
'''
Function to return a sequence of operations corresponding to a generator block of DCGAN;
a transposed convolution, a batchnorm (except in the final layer), and an activation.
Parameters:
input_channels: how many channels the input feature representation has
output_channels: how many channels the output feature representation should have
kernel_size: the size of each convolutional filter, equivalent to (kernel_size, kernel_size)
stride: the stride of the convolution
final_layer: a boolean, true if it is the final layer and false otherwise
(affects activation and batchnorm)
'''
if not final_layer:
return nn.Sequential(
nn.ConvTranspose2d(input_channels, output_channels, kernel_size, stride),
nn.BatchNorm2d(output_channels),
nn.ReLU(inplace=True),
)
else:
return nn.Sequential(
nn.ConvTranspose2d(input_channels, output_channels, kernel_size, stride),
nn.Tanh(),
)
def forward(self, noise):
'''
Function for completing a forward pass of the generator: Given a noise tensor,
returns generated images.
Parameters:
noise: a noise tensor with dimensions (n_samples, z_dim)
'''
x = noise.view(len(noise), self.z_dim, 1, 1)
return self.gen(x)
def get_noise(n_samples, z_dim, device='cpu'):
'''
Function for creating noise vectors: Given the dimensions (n_samples, z_dim)
creates a tensor of that shape filled with random numbers from the normal distribution.
Parameters:
n_samples: the number of samples to generate, a scalar
z_dim: the dimension of the noise vector, a scalar
device: the device type
'''
return torch.randn(n_samples, z_dim, device=device) | _____no_output_____ | Apache-2.0 | 12-Bias.ipynb | pedro-abundio-wang/GANs |
Classifier | class Classifier(nn.Module):
'''
Classifier Class
Values:
im_chan: the number of channels in the images, fitted for the dataset used, a scalar
(CelebA is rgb, so 3 is your default)
n_classes: the total number of classes in the dataset, an integer scalar
hidden_dim: the inner dimension, a scalar
'''
def __init__(self, im_chan=3, n_classes=2, hidden_dim=64):
super(Classifier, self).__init__()
self.classifier = nn.Sequential(
self.make_classifier_block(im_chan, hidden_dim),
self.make_classifier_block(hidden_dim, hidden_dim * 2),
self.make_classifier_block(hidden_dim * 2, hidden_dim * 4, stride=3),
self.make_classifier_block(hidden_dim * 4, n_classes, final_layer=True),
)
def make_classifier_block(self, input_channels, output_channels, kernel_size=4, stride=2, final_layer=False):
'''
Function to return a sequence of operations corresponding to a classifier block;
a convolution, a batchnorm (except in the final layer), and an activation (except in the final layer).
Parameters:
input_channels: how many channels the input feature representation has
output_channels: how many channels the output feature representation should have
kernel_size: the size of each convolutional filter, equivalent to (kernel_size, kernel_size)
stride: the stride of the convolution
final_layer: a boolean, true if it is the final layer and false otherwise
(affects activation and batchnorm)
'''
if not final_layer:
return nn.Sequential(
nn.Conv2d(input_channels, output_channels, kernel_size, stride),
nn.BatchNorm2d(output_channels),
nn.LeakyReLU(0.2, inplace=True),
)
else:
return nn.Sequential(
nn.Conv2d(input_channels, output_channels, kernel_size, stride),
)
def forward(self, image):
'''
Function for completing a forward pass of the classifier: Given an image tensor,
returns an n_classes-dimension tensor representing classes.
Parameters:
image: a flattened image tensor with im_chan channels
'''
class_pred = self.classifier(image)
return class_pred.view(len(class_pred), -1) | _____no_output_____ | Apache-2.0 | 12-Bias.ipynb | pedro-abundio-wang/GANs |
Specifying Parameters

You will also need to specify a few parameters before you begin training:
* z_dim: the dimension of the noise vector
* batch_size: the number of images per forward/backward pass
* device: the device type | z_dim = 64
batch_size = 128
device = 'cuda' | _____no_output_____ | Apache-2.0 | 12-Bias.ipynb | pedro-abundio-wang/GANs |
Train a Classifier (Optional)

You're welcome to train your own classifier with this code, but you are provided a pre-trained one based on this architecture, which you can load and use in the next section. | # You can run this code to train your own classifier, but there is a provided pre-trained one
# If you'd like to use this, just run "train_classifier(filename)"
# To train and save a classifier on the label indices to that filename
def train_classifier(filename):
import seaborn as sns
import matplotlib.pyplot as plt
# You're going to target all the classes, so that's how many the classifier will learn
label_indices = range(40)
n_epochs = 3
display_step = 500
lr = 0.001
beta_1 = 0.5
beta_2 = 0.999
image_size = 64
transform = transforms.Compose([
transforms.Resize(image_size),
transforms.CenterCrop(image_size),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
dataloader = DataLoader(
CelebA(".", split='train', download=True, transform=transform),
batch_size=batch_size,
shuffle=True)
classifier = Classifier(n_classes=len(label_indices)).to(device)
class_opt = torch.optim.Adam(classifier.parameters(), lr=lr, betas=(beta_1, beta_2))
criterion = nn.BCEWithLogitsLoss()
cur_step = 0
classifier_losses = []
# classifier_val_losses = []
for epoch in range(n_epochs):
# Dataloader returns the batches
for real, labels in tqdm(dataloader):
real = real.to(device)
labels = labels[:, label_indices].to(device).float()
class_opt.zero_grad()
class_pred = classifier(real)
class_loss = criterion(class_pred, labels)
class_loss.backward() # Calculate the gradients
class_opt.step() # Update the weights
classifier_losses += [class_loss.item()] # Keep track of the average classifier loss
### Visualization code ###
if cur_step % display_step == 0 and cur_step > 0:
class_mean = sum(classifier_losses[-display_step:]) / display_step
print(f"Step {cur_step}: Classifier loss: {class_mean}")
step_bins = 20
x_axis = sorted([i * step_bins for i in range(len(classifier_losses) // step_bins)] * step_bins)
sns.lineplot(x_axis, classifier_losses[:len(x_axis)], label="Classifier Loss")
plt.legend()
plt.show()
torch.save({"classifier": classifier.state_dict()}, filename)
cur_step += 1
# Uncomment the last line to train your own classifier - this line will not work in Coursera.
# If you'd like to do this, you'll have to download it and run it, ideally using a GPU.
# train_classifier("filename") | _____no_output_____ | Apache-2.0 | 12-Bias.ipynb | pedro-abundio-wang/GANs |
Loading the Pre-trained Models

You can now load the pre-trained generator (trained on CelebA) and classifier using the following code. If you trained your own classifier, you can load that one here instead. However, it is suggested that you first go through the assignment using the pre-trained one. | import torch
gen = Generator(z_dim).to(device)
gen_dict = torch.load("pretrained_celeba.pth", map_location=torch.device(device))["gen"]
gen.load_state_dict(gen_dict)
gen.eval()
n_classes = 40
classifier = Classifier(n_classes=n_classes).to(device)
class_dict = torch.load("pretrained_classifier.pth", map_location=torch.device(device))["classifier"]
classifier.load_state_dict(class_dict)
classifier.eval()
print("Loaded the models!")
opt = torch.optim.Adam(classifier.parameters(), lr=0.01) | _____no_output_____ | Apache-2.0 | 12-Bias.ipynb | pedro-abundio-wang/GANs |
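As a quick sanity check that both models were restored correctly (an extra step, not part of the original notebook), you can generate a small batch, classify it, and visualize it; the classifier should output one logit per CelebA attribute.
sanity_noise = get_noise(16, z_dim).to(device)
with torch.no_grad():
    sanity_fakes = gen(sanity_noise)
    sanity_logits = classifier(sanity_fakes)
show_tensor_images(sanity_fakes, num_images=16, nrow=4)
print(sanity_logits.shape)  # expected: torch.Size([16, 40]), one logit per feature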
Feature Correlation

Now you can generate images using the generator. By also using the classifier, you will be generating images with different amounts of the "male" feature. You are welcome to experiment with other features as the target feature, but it is encouraged that you initially go through the notebook as is before exploring. | # First you generate a bunch of fake images with the generator
n_images = 256
fake_image_history = []
classification_history = []
grad_steps = 30 # How many gradient steps to take
skip = 2 # How many gradient steps to skip in the visualization
feature_names = ["5oClockShadow", "ArchedEyebrows", "Attractive", "BagsUnderEyes", "Bald", "Bangs",
"BigLips", "BigNose", "BlackHair", "BlondHair", "Blurry", "BrownHair", "BushyEyebrows", "Chubby",
"DoubleChin", "Eyeglasses", "Goatee", "GrayHair", "HeavyMakeup", "HighCheekbones", "Male",
"MouthSlightlyOpen", "Mustache", "NarrowEyes", "NoBeard", "OvalFace", "PaleSkin", "PointyNose",
"RecedingHairline", "RosyCheeks", "Sideburn", "Smiling", "StraightHair", "WavyHair", "WearingEarrings",
"WearingHat", "WearingLipstick", "WearingNecklace", "WearingNecktie", "Young"]
n_features = len(feature_names)
# Set the target feature
target_feature = "Male"
target_indices = feature_names.index(target_feature)
noise = get_noise(n_images, z_dim).to(device)
new_noise = noise.clone().requires_grad_()
starting_classifications = classifier(gen(new_noise)).cpu().detach()
# Additive direction (more of a feature)
for i in range(grad_steps):
opt.zero_grad()
fake = gen(new_noise)
fake_image_history += [fake]
classifications = classifier(fake)
classification_history += [classifications.cpu().detach()]
fake_classes = classifications[:, target_indices].mean()
fake_classes.backward()
new_noise.data += new_noise.grad / grad_steps
# Subtractive direction (less of a feature)
new_noise = noise.clone().requires_grad_()
for i in range(grad_steps):
opt.zero_grad()
fake = gen(new_noise)
fake_image_history += [fake]
classifications = classifier(fake)
classification_history += [classifications.cpu().detach()]
fake_classes = classifications[:, target_indices].mean()
fake_classes.backward()
new_noise.data -= new_noise.grad / grad_steps
classification_history = torch.stack(classification_history)
print(classification_history.shape)
print(starting_classifications[None, :, :].shape) | _____no_output_____ | Apache-2.0 | 12-Bias.ipynb | pedro-abundio-wang/GANs |
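Before plotting the correlations, it can help to eyeball a few of the images produced along the optimization path. This optional cell (not part of the original notebook) shows the first image of the batch every `skip` gradient steps in the additive direction:
step_images = torch.stack([fake_image_history[i][0] for i in range(0, grad_steps, skip)])
show_tensor_images(step_images, num_images=len(step_images), nrow=5)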
You've now generated image samples, which have increasing or decreasing amounts of the target feature. You can visualize the way in which that affects other classified features. The x-axis will show you the amount of change in your target feature and the y-axis shows how much the other features change, as detected in those images by the classifier. Together, you will be able to see the covariance of "male-ness" and other features. You start off with a set of features that have interesting associations with "male-ness", but you are welcome to replace the features in `other_features` with others from `feature_names`. | import seaborn as sns
# Set the other features
other_features = ["Smiling", "Bald", "Young", "HeavyMakeup", "Attractive"]
classification_changes = (classification_history - starting_classifications[None, :, :]).numpy()
for other_feature in other_features:
other_indices = feature_names.index(other_feature)
with sns.axes_style("darkgrid"):
sns.regplot(
classification_changes[:, :, target_indices].reshape(-1),
classification_changes[:, :, other_indices].reshape(-1),
fit_reg=True,
truncate=True,
ci=99,
x_ci=99,
x_bins=len(classification_history),
label=other_feature
)
plt.xlabel(target_feature)
plt.ylabel("Other Feature")
plt.title(f"Generator Biases: Features vs {target_feature}-ness")
plt.legend(loc=1)
plt.show() | _____no_output_____ | Apache-2.0 | 12-Bias.ipynb | pedro-abundio-wang/GANs |
This correlation detection can be used to reduce bias by penalizing this type of correlation in the loss during the training of the generator. However, there is currently no rigorous and accepted solution for debiasing GANs. A first step that you can take in the right direction comes before training the model: make sure that your dataset is inclusive and representative, and consider how you can mitigate the biases resulting from whatever data collection method you used, for example by recruiting representative labelers for your task. It is important to note that, as highlighted in the lecture and by many researchers including [Timnit Gebru and Emily Denton](https://sites.google.com/view/fatecv-tutorial/schedule), a diverse dataset alone is not enough to eliminate bias. Even diverse datasets can reinforce existing structural biases by simply capturing common social biases. Mitigating these biases is an important and active area of research.

Note on CelebA

You may have noticed that there are obvious correlations between the feature you are using, "male", and other seemingly unrelated features, "smiling" and "young" for example. This is because the CelebA dataset labels were created with no serious consideration for diversity. The data reflects the biases of its labelers and dataset creators, as well as the social biases that come from using a dataset of American celebrities, among many others. Equipped with knowledge about bias, we trust that you will do better with the datasets you create in the future.

Quantification

Finally, you can also quantitatively evaluate the degree to which these factors covary. Given a target index, for example corresponding to "male," you'll want to return the other features that covary with that target feature the most. You'll want to account for both large negative and positive covariances, and you'll want to avoid returning the target feature in your list of covarying features (since a feature will often have a high covariance with itself).

Optional hints for get_top_covariances (one possible sketch of this computation appears right after the graded cell below):
1. You will likely find the following function useful: [np.cov](https://numpy.org/doc/stable/reference/generated/numpy.cov.html).
2. You will probably find it useful to [reshape](https://numpy.org/doc/stable/reference/generated/numpy.reshape.html) the input.
3. The target feature should not be included in the outputs.
4. Feel free to use any reasonable method to get the top-n elements.
5. It may be easiest to solve this if you find the `relevant_indices` first.
6. You want to sort by absolute value but return the actual values. | # UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED CELL: get_top_covariances
def get_top_covariances(classification_changes, target_index, top_n=10):
'''
Function for getting the top n covariances: Given a list of classification changes
and the index of the target feature, returns (1) a list or tensor (numpy or torch) of the indices
corresponding to the n features that covary most with the target in terms of absolute covariance
and (2) a list or tensor (numpy or torch) of the degrees to which they covary.
Parameters:
classification_changes: relative changes in classifications of each generated image
resulting from optimizing the target feature (see above for a visualization)
target_index: the index of the target feature, a scalar
top_n: the top most number of elements to return, default is 10
'''
# Hint: Don't forget you also care about negative covariances!
# Note that classification_changes has a shape of (2 * grad_steps, n_images, n_features)
# where n_features is the number of features measured by the classifier, and you are looking
# for the covariance of the features based on the (2 * grad_steps * n_images) samples
#### START CODE HERE ####
relevant_indices = None
highest_covariances = None
#### END CODE HERE ####
return relevant_indices, highest_covariances
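# For reference, here is one possible sketch of the computation described in the hints above.
# It is an illustrative assumption, not necessarily the intended graded solution, and it is
# given a different name so it does not overwrite the graded function.
def get_top_covariances_sketch(classification_changes, target_index, top_n=10):
    changes = np.array(classification_changes)
    flattened_changes = changes.reshape(-1, changes.shape[-1])  # (2 * grad_steps * n_images, n_features)
    covariances = np.cov(flattened_changes, rowvar=False)       # (n_features, n_features)
    target_covariances = covariances[target_index]
    # Sort by absolute covariance (descending), drop the target itself, keep the top n
    sorted_indices = np.argsort(np.abs(target_covariances))[::-1]
    relevant_indices = sorted_indices[sorted_indices != target_index][:top_n]
    highest_covariances = target_covariances[relevant_indices]
    return relevant_indices, highest_covariances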
# UNIT TEST
from torch.distributions import MultivariateNormal
mean = torch.Tensor([0, 0, 0, 0])
covariance = torch.Tensor(
[[10, 2, -0.5, -5],
[2, 11, 5, 4],
[-0.5, 5, 10, 2],
[-5, 4, 2, 11]]
)
independent_dist = MultivariateNormal(mean, covariance)
samples = independent_dist.sample((60 * 128,))
foo = samples.reshape(60, 128, samples.shape[-1])
relevant_indices, highest_covariances = get_top_covariances(foo, 1, top_n=3)
assert (tuple(relevant_indices) == (2, 3, 0)), "Make sure you're getting the greatest, not the least covariances"
assert np.all(np.abs(highest_covariances - [5, 4, 2]) < 0.5 )
relevant_indices, highest_covariances = get_top_covariances(foo, 0, top_n=3)
assert (tuple(relevant_indices) == (3, 1, 2)), "Make sure to consider the magnitude of negative covariances"
assert np.all(np.abs(highest_covariances - [-5, 2, -0.5]) < 0.5 )
relevant_indices, highest_covariances = get_top_covariances(foo, 2, top_n=2)
assert (tuple(relevant_indices) == (1, 3))
assert np.all(np.abs(highest_covariances - [5, 2]) < 0.5 )
relevant_indices, highest_covariances = get_top_covariances(foo, 3, top_n=2)
assert (tuple(relevant_indices) == (0, 1))
assert np.all(np.abs(highest_covariances - [-5, 4]) < 0.5 )
print("All tests passed")
relevant_indices, highest_covariances = get_top_covariances(classification_changes, target_indices, top_n=10)
print(relevant_indices)
assert relevant_indices[9] == 34
assert len(relevant_indices) == 10
assert highest_covariances[8] - (-1.2418) < 1e-3
for index, covariance in zip(relevant_indices, highest_covariances):
print(f"{feature_names[index]} {covariance:f}") | _____no_output_____ | Apache-2.0 | 12-Bias.ipynb | pedro-abundio-wang/GANs |
Neural network hybrid recommendation system on Google Analytics data: model and training

This notebook demonstrates how to implement a hybrid recommendation system using a neural network to combine content-based and collaborative filtering recommendation models using Google Analytics data. We are going to use the learned user embeddings from [wals.ipynb](../wals.ipynb) and combine them with our previous content-based features from [content_based_using_neural_networks.ipynb](../content_based_using_neural_networks.ipynb).

Now that we have our data preprocessed with BigQuery and Cloud Dataflow, we can fit our neural network hybrid recommendation model to the preprocessed data. We will first train locally to make sure everything works, and then use the power of Google Cloud ML Engine to scale it out. We're going to use TensorFlow Hub for pre-trained text embeddings, so let's first pip install that and reset our session. | !pip install tensorflow_hub
Now reset the notebook's session kernel! Since we're no longer using Cloud Dataflow, we'll be using the python3 kernel from here on out so don't forget to change the kernel if it's still python2. | # Import helpful libraries and setup our project, bucket, and region
import os
import tensorflow as tf
import tensorflow_hub as hub
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# do not change these
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '1.8'
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/hybrid_recommendation/preproc; then
gsutil mb -l ${REGION} gs://${BUCKET}
# copy canonical set of preprocessed files if you didn't do preprocessing notebook
gsutil -m cp -R gs://cloud-training-demos/courses/machine_learning/deepdive/10_recommendation/hybrid_recommendation gs://${BUCKET}
fi | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations.ipynb | smartestrobotdai/training-data-analyst |
Create hybrid recommendation system model using TensorFlow

Now that we've created our training and evaluation input files as well as our categorical feature vocabulary files, we can create our TensorFlow hybrid recommendation system model. Let's first get some of the aggregate information that we will use in the model from the preprocessed files we saved in Google Cloud Storage. | from tensorflow.python.lib.io import file_io
# Get number of content ids from text file in Google Cloud Storage
with file_io.FileIO(tf.gfile.Glob(filename = "gs://{}/hybrid_recommendation/preproc/vocab_counts/content_id_vocab_count.txt*".format(BUCKET))[0], mode = 'r') as ifp:
number_of_content_ids = int([x for x in ifp][0])
print("number_of_content_ids = {}".format(number_of_content_ids))
# Get number of categories from text file in Google Cloud Storage
with file_io.FileIO(tf.gfile.Glob(filename = "gs://{}/hybrid_recommendation/preproc/vocab_counts/category_vocab_count.txt*".format(BUCKET))[0], mode = 'r') as ifp:
number_of_categories = int([x for x in ifp][0])
print("number_of_categories = {}".format(number_of_categories))
# Get number of authors from text file in Google Cloud Storage
with file_io.FileIO(tf.gfile.Glob(filename = "gs://{}/hybrid_recommendation/preproc/vocab_counts/author_vocab_count.txt*".format(BUCKET))[0], mode = 'r') as ifp:
number_of_authors = int([x for x in ifp][0])
print("number_of_authors = {}".format(number_of_authors))
# Get mean months since epoch from text file in Google Cloud Storage
with file_io.FileIO(tf.gfile.Glob(filename = "gs://{}/hybrid_recommendation/preproc/vocab_counts/months_since_epoch_mean.txt*".format(BUCKET))[0], mode = 'r') as ifp:
mean_months_since_epoch = float([x for x in ifp][0])
print("mean_months_since_epoch = {}".format(mean_months_since_epoch))
# Determine CSV and label columns
NON_FACTOR_COLUMNS = 'next_content_id,visitor_id,content_id,category,title,author,months_since_epoch'.split(',')
FACTOR_COLUMNS = ["user_factor_{}".format(i) for i in range(10)] + ["item_factor_{}".format(i) for i in range(10)]
CSV_COLUMNS = NON_FACTOR_COLUMNS + FACTOR_COLUMNS
LABEL_COLUMN = 'next_content_id'
# Set default values for each CSV column
NON_FACTOR_DEFAULTS = [["Unknown"],["Unknown"],["Unknown"],["Unknown"],["Unknown"],["Unknown"],[mean_months_since_epoch]]
FACTOR_DEFAULTS = [[0.0] for i in range(10)] + [[0.0] for i in range(10)] # user and item
DEFAULTS = NON_FACTOR_DEFAULTS + FACTOR_DEFAULTS | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations.ipynb | smartestrobotdai/training-data-analyst |
Create input function for training and evaluation to read from our preprocessed CSV files. | # Create input function for train and eval
def read_dataset(filename, mode, batch_size = 512):
def _input_fn():
def decode_csv(value_column):
columns = tf.decode_csv(records = value_column, record_defaults = DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
return features, label
# Create list of files that match pattern
file_list = tf.gfile.Glob(filename = filename)
# Create dataset from file list
dataset = tf.data.TextLineDataset(filenames = file_list).map(map_func = decode_csv)
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(count = num_epochs).batch(batch_size = batch_size)
return dataset.make_one_shot_iterator().get_next()
return _input_fn | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations.ipynb | smartestrobotdai/training-data-analyst |
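Before wiring this into an estimator, you can pull one small batch and eyeball it. This optional smoke test (not part of the original lab) assumes the preprocessed CSVs from the earlier notebook already exist in your bucket.
test_input_fn = read_dataset(
    filename = "gs://{}/hybrid_recommendation/preproc/features/eval.csv*".format(BUCKET),
    mode = tf.estimator.ModeKeys.EVAL,
    batch_size = 4)
test_features_op, test_labels_op = test_input_fn()
with tf.Session() as sess:
    test_features, test_labels = sess.run([test_features_op, test_labels_op])
print(test_labels)  # a small batch of next_content_id labels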
Next, we will create our feature columns using the values we just read in. | # Create feature columns to be used in model
def create_feature_columns(args):
# Create content_id feature column
content_id_column = tf.feature_column.categorical_column_with_hash_bucket(
key = "content_id",
hash_bucket_size = number_of_content_ids)
# Embed content id into a lower dimensional representation
embedded_content_column = tf.feature_column.embedding_column(
categorical_column = content_id_column,
dimension = args['content_id_embedding_dimensions'])
# Create category feature column
categorical_category_column = tf.feature_column.categorical_column_with_vocabulary_file(
key = "category",
vocabulary_file = tf.gfile.Glob(filename = "gs://{}/hybrid_recommendation/preproc/vocabs/category_vocab.txt*".format(args['bucket']))[0],
num_oov_buckets = 1)
# Convert categorical category column into indicator column so that it can be used in a DNN
indicator_category_column = tf.feature_column.indicator_column(categorical_column = categorical_category_column)
# Create title feature column using TF Hub
embedded_title_column = hub.text_embedding_column(
key = "title",
module_spec = "https://tfhub.dev/google/nnlm-de-dim50-with-normalization/1",
trainable = False)
# Create author feature column
author_column = tf.feature_column.categorical_column_with_hash_bucket(
key = "author",
hash_bucket_size = number_of_authors + 1)
# Embed author into a lower dimensional representation
embedded_author_column = tf.feature_column.embedding_column(
categorical_column = author_column,
dimension = args['author_embedding_dimensions'])
# Create months since epoch boundaries list for our binning
months_since_epoch_boundaries = list(range(400, 700, 20))
# Create months_since_epoch feature column using raw data
months_since_epoch_column = tf.feature_column.numeric_column(
key = "months_since_epoch")
# Create bucketized months_since_epoch feature column using our boundaries
months_since_epoch_bucketized = tf.feature_column.bucketized_column(
source_column = months_since_epoch_column,
boundaries = months_since_epoch_boundaries)
# Cross our categorical category column and bucketized months since epoch column
crossed_months_since_category_column = tf.feature_column.crossed_column(
keys = [categorical_category_column, months_since_epoch_bucketized],
hash_bucket_size = len(months_since_epoch_boundaries) * (number_of_categories + 1))
# Convert crossed categorical category and bucketized months since epoch column into indicator column so that it can be used in a DNN
indicator_crossed_months_since_category_column = tf.feature_column.indicator_column(categorical_column = crossed_months_since_category_column)
# Create user and item factor feature columns from our trained WALS model
user_factors = [tf.feature_column.numeric_column(key = "user_factor_" + str(i)) for i in range(10)]
item_factors = [tf.feature_column.numeric_column(key = "item_factor_" + str(i)) for i in range(10)]
# Create list of feature columns
feature_columns = [embedded_content_column,
embedded_author_column,
indicator_category_column,
embedded_title_column,
indicator_crossed_months_since_category_column] + user_factors + item_factors
return feature_columns | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations.ipynb | smartestrobotdai/training-data-analyst |
Now we'll create our model function | # Create custom model function for our custom estimator
def model_fn(features, labels, mode, params):
    # Create neural network input layer using our feature columns defined above
    net = tf.feature_column.input_layer(features = features, feature_columns = params['feature_columns'])
    # Create hidden layers by looping through hidden unit list
    for units in params['hidden_units']:
        net = tf.layers.dense(inputs = net, units = units, activation = tf.nn.relu)
    # Compute logits (1 per class) using the output of our last hidden layer
    logits = tf.layers.dense(inputs = net, units = params['n_classes'], activation = None)
    # Find the predicted class indices based on the highest logit (which will result in the highest probability)
    predicted_classes = tf.argmax(input = logits, axis = 1)
# Read in the content id vocabulary so we can tie the predicted class indices to their respective content ids
with file_io.FileIO(tf.gfile.Glob(filename = "gs://{}/hybrid_recommendation/preproc/vocabs/content_id_vocab.txt*".format(BUCKET))[0], mode = 'r') as ifp:
content_id_names = tf.constant(value = [x.rstrip() for x in ifp])
# Gather predicted class names based predicted class indices
predicted_class_names = tf.gather(params = content_id_names, indices = predicted_classes)
# If the mode is prediction
if mode == tf.estimator.ModeKeys.PREDICT:
# Create predictions dict
predictions_dict = {
'class_ids': tf.expand_dims(input = predicted_classes, axis = -1),
'class_names' : tf.expand_dims(input = predicted_class_names, axis = -1),
'probabilities': tf.nn.softmax(logits = logits),
'logits': logits
}
# Create export outputs
export_outputs = {"predict_export_outputs": tf.estimator.export.PredictOutput(outputs = predictions_dict)}
return tf.estimator.EstimatorSpec( # return early since we're done with what we need for prediction mode
mode = mode,
predictions = predictions_dict,
loss = None,
train_op = None,
eval_metric_ops = None,
export_outputs = export_outputs)
# Continue on with training and evaluation modes
# Create lookup table using our content id vocabulary
table = tf.contrib.lookup.index_table_from_file(
vocabulary_file = tf.gfile.Glob(filename = "gs://{}/hybrid_recommendation/preproc/vocabs/content_id_vocab.txt*".format(BUCKET))[0])
# Look up labels from vocabulary table
labels = table.lookup(keys = labels)
    # Compute loss using sparse softmax cross entropy since this is classification and our labels (content id indices) are mutually exclusive
    loss = tf.losses.sparse_softmax_cross_entropy(labels = labels, logits = logits)
# Compute evaluation metrics of total accuracy and the accuracy of the top k classes
accuracy = tf.metrics.accuracy(labels = labels, predictions = predicted_classes, name = 'acc_op')
top_k_accuracy = tf.metrics.mean(values = tf.nn.in_top_k(predictions = logits, targets = labels, k = params['top_k']))
    map_at_k = tf.metrics.average_precision_at_k(labels = labels, predictions = logits, k = params['top_k'])
# Put eval metrics into a dictionary
eval_metrics = {
'accuracy': accuracy,
'top_k_accuracy': top_k_accuracy,
'map_at_k': map_at_k}
# Create scalar summaries to see in TensorBoard
tf.summary.scalar(name = 'accuracy', tensor = accuracy[1])
tf.summary.scalar(name = 'top_k_accuracy', tensor = top_k_accuracy[1])
tf.summary.scalar(name = 'map_at_k', tensor = map_at_k[1])
# If the mode is evaluation
if mode == tf.estimator.ModeKeys.EVAL:
return tf.estimator.EstimatorSpec( # return early since we're done with what we need for evaluation mode
mode = mode,
predictions = None,
loss = loss,
train_op = None,
eval_metric_ops = eval_metrics,
export_outputs = None)
# Continue on with training mode
# If the mode is training
assert mode == tf.estimator.ModeKeys.TRAIN
# Create a custom optimizer
optimizer = tf.train.AdagradOptimizer(learning_rate = params['learning_rate'])
# Create train op
train_op = optimizer.minimize(loss = loss, global_step = tf.train.get_global_step())
return tf.estimator.EstimatorSpec( # final return since we're done with what we need for training mode
mode = mode,
predictions = None,
loss = loss,
train_op = train_op,
eval_metric_ops = None,
export_outputs = None) | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations.ipynb | smartestrobotdai/training-data-analyst |
Now create a serving input function | # Create serving input function
def serving_input_fn():
feature_placeholders = {
colname : tf.placeholder(dtype = tf.string, shape = [None]) \
for colname in NON_FACTOR_COLUMNS[1:-1]
}
feature_placeholders['months_since_epoch'] = tf.placeholder(dtype = tf.float32, shape = [None])
for colname in FACTOR_COLUMNS:
feature_placeholders[colname] = tf.placeholder(dtype = tf.float32, shape = [None])
features = {
key: tf.expand_dims(tensor, -1) \
for key, tensor in feature_placeholders.items()
}
return tf.estimator.export.ServingInputReceiver(features, feature_placeholders) | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations.ipynb | smartestrobotdai/training-data-analyst |
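To see what the exported model will expect at prediction time, here is a hypothetical example instance whose keys mirror the placeholders defined in serving_input_fn; all of the values below are made up for illustration.
sample_instance = {
    'visitor_id': 'some_visitor_id',
    'content_id': 'some_content_id',
    'category': 'News',
    'title': 'Some article title',
    'author': 'Some Author',
    'months_since_epoch': mean_months_since_epoch
}
# Each instance also needs the WALS user and item factors
for colname in FACTOR_COLUMNS:
    sample_instance[colname] = 0.0
print(sorted(sample_instance.keys()))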
Now that all of the pieces are assembled, let's create and run our train and evaluate loop. | # Create train and evaluate loop to combine all of the pieces together.
tf.logging.set_verbosity(tf.logging.INFO)
def train_and_evaluate(args):
estimator = tf.estimator.Estimator(
model_fn = model_fn,
model_dir = args['output_dir'],
params={
'feature_columns': create_feature_columns(args),
'hidden_units': args['hidden_units'],
'n_classes': number_of_content_ids,
'learning_rate': args['learning_rate'],
'top_k': args['top_k'],
'bucket': args['bucket']
})
train_spec = tf.estimator.TrainSpec(
input_fn = read_dataset(filename = args['train_data_paths'], mode = tf.estimator.ModeKeys.TRAIN, batch_size = args['batch_size']),
max_steps = args['train_steps'])
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
eval_spec = tf.estimator.EvalSpec(
input_fn = read_dataset(filename = args['eval_data_paths'], mode = tf.estimator.ModeKeys.EVAL, batch_size = args['batch_size']),
steps = None,
start_delay_secs = args['start_delay_secs'],
throttle_secs = args['throttle_secs'],
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec) | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations.ipynb | smartestrobotdai/training-data-analyst |
Run train_and_evaluate! | # Call train and evaluate loop
import shutil
outdir = 'hybrid_recommendation_trained'
shutil.rmtree(outdir, ignore_errors = True) # start fresh each time
arguments = {
'bucket': BUCKET,
'train_data_paths': "gs://{}/hybrid_recommendation/preproc/features/train.csv*".format(BUCKET),
'eval_data_paths': "gs://{}/hybrid_recommendation/preproc/features/eval.csv*".format(BUCKET),
'output_dir': outdir,
'batch_size': 128,
'learning_rate': 0.1,
'hidden_units': [256, 128, 64],
'content_id_embedding_dimensions': 10,
'author_embedding_dimensions': 10,
'top_k': 10,
'train_steps': 1000,
'start_delay_secs': 30,
'throttle_secs': 30
}
train_and_evaluate(arguments) | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations.ipynb | smartestrobotdai/training-data-analyst |
Run the module locally

Now let's place our code into a Python module with model.py and task.py files so that we can train using Google Cloud's ML Engine! First, let's test the module locally. | %%writefile requirements.txt
tensorflow_hub
%%bash
echo "bucket=${BUCKET}"
rm -rf hybrid_recommendation_trained
export OUTDIR=hybrid_recommendation_trained
export PYTHONPATH=${PYTHONPATH}:${PWD}/hybrid_recommendations_module
python -m trainer.task \
--bucket=${BUCKET} \
--train_data_paths=gs://${BUCKET}/hybrid_recommendation/preproc/features/train.csv* \
--eval_data_paths=gs://${BUCKET}/hybrid_recommendation/preproc/features/eval.csv* \
--output_dir=${OUTDIR} \
--batch_size=128 \
--learning_rate=0.1 \
--hidden_units="256 128 64" \
--content_id_embedding_dimensions=10 \
--author_embedding_dimensions=10 \
--top_k=10 \
--train_steps=1000 \
--start_delay_secs=30 \
--throttle_secs=60 | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations.ipynb | smartestrobotdai/training-data-analyst |
Run on Google Cloud ML Engine

If our module trained fine locally, let's now use the power of ML Engine to scale it out on Google Cloud. | %%bash
OUTDIR=gs://${BUCKET}/hybrid_recommendation/small_trained_model
JOBNAME=hybrid_recommendation_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=$(pwd)/hybrid_recommendations_module/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=STANDARD_1 \
--runtime-version=$TFVERSION \
-- \
--bucket=${BUCKET} \
--train_data_paths=gs://${BUCKET}/hybrid_recommendation/preproc/features/train.csv* \
--eval_data_paths=gs://${BUCKET}/hybrid_recommendation/preproc/features/eval.csv* \
--output_dir=${OUTDIR} \
--batch_size=128 \
--learning_rate=0.1 \
--hidden_units="256 128 64" \
--content_id_embedding_dimensions=10 \
--author_embedding_dimensions=10 \
--top_k=10 \
--train_steps=1000 \
--start_delay_secs=30 \
--throttle_secs=30 | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations.ipynb | smartestrobotdai/training-data-analyst |
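While the job runs, you can check on it from the notebook. This optional snippet (not part of the original lab) simply lists your most recent ML Engine jobs:
%%bash
gcloud ml-engine jobs list --limit=5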
Let's add some hyperparameter tuning! | %%writefile hyperparam.yaml
trainingInput:
hyperparameters:
goal: MAXIMIZE
maxTrials: 5
maxParallelTrials: 1
hyperparameterMetricTag: accuracy
params:
- parameterName: batch_size
type: INTEGER
minValue: 8
maxValue: 64
scaleType: UNIT_LINEAR_SCALE
- parameterName: learning_rate
type: DOUBLE
minValue: 0.01
maxValue: 0.1
scaleType: UNIT_LINEAR_SCALE
- parameterName: hidden_units
type: CATEGORICAL
categoricalValues: ['1024 512 256', '1024 512 128', '1024 256 128', '512 256 128', '1024 512 64', '1024 256 64', '512 256 64', '1024 128 64', '512 128 64', '256 128 64', '1024 512 32', '1024 256 32', '512 256 32', '1024 128 32', '512 128 32', '256 128 32', '1024 64 32', '512 64 32', '256 64 32', '128 64 32']
- parameterName: content_id_embedding_dimensions
type: INTEGER
minValue: 5
maxValue: 250
scaleType: UNIT_LOG_SCALE
- parameterName: author_embedding_dimensions
type: INTEGER
minValue: 5
maxValue: 30
scaleType: UNIT_LINEAR_SCALE
%%bash
OUTDIR=gs://${BUCKET}/hybrid_recommendation/hypertuning
JOBNAME=hybrid_recommendation_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=$(pwd)/hybrid_recommendations_module/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=STANDARD_1 \
--runtime-version=$TFVERSION \
--config=hyperparam.yaml \
-- \
--bucket=${BUCKET} \
--train_data_paths=gs://${BUCKET}/hybrid_recommendation/preproc/features/train.csv* \
--eval_data_paths=gs://${BUCKET}/hybrid_recommendation/preproc/features/eval.csv* \
--output_dir=${OUTDIR} \
--batch_size=128 \
--learning_rate=0.1 \
--hidden_units="256 128 64" \
--content_id_embedding_dimensions=10 \
--author_embedding_dimensions=10 \
--top_k=10 \
--train_steps=1000 \
--start_delay_secs=30 \
--throttle_secs=30 | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations.ipynb | smartestrobotdai/training-data-analyst |
Now that we know the best hyperparameters, run a big training job! | %%bash
OUTDIR=gs://${BUCKET}/hybrid_recommendation/big_trained_model
JOBNAME=hybrid_recommendation_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=$(pwd)/hybrid_recommendations_module/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=STANDARD_1 \
--runtime-version=$TFVERSION \
-- \
--bucket=${BUCKET} \
--train_data_paths=gs://${BUCKET}/hybrid_recommendation/preproc/features/train.csv* \
--eval_data_paths=gs://${BUCKET}/hybrid_recommendation/preproc/features/eval.csv* \
--output_dir=${OUTDIR} \
--batch_size=128 \
--learning_rate=0.1 \
--hidden_units="256 128 64" \
--content_id_embedding_dimensions=10 \
--author_embedding_dimensions=10 \
--top_k=10 \
--train_steps=10000 \
--start_delay_secs=30 \
--throttle_secs=30 | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive/10_recommend/labs/hybrid_recommendations/hybrid_recommendations.ipynb | smartestrobotdai/training-data-analyst |
Setting the path | path = Path("C:/Users/shahi/.fastai/data/lgg-mri-segmentation/kaggle_3m")
path
getMask = lambda x: x.parents[0] / (x.stem + '_mask' + x.suffix)
tempImgFile = path/"TCGA_CS_4941_19960909/TCGA_CS_4941_19960909_1.tif"
tempMaskFile = getMask(tempImgFile)
image = open_image(tempImgFile)
image
image.shape
mask = open_mask(getMask(tempImgFile),div=True)
mask
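The masks in this dataset are stored with pixel values 0 and 255, which is why they are opened with div=True. A quick optional check (not in the original notebook) confirms the mask tensor is now binary:
print(mask.data.unique())  # expected: tensor([0, 1])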
class SegmentationLabelListWithDiv(SegmentationLabelList):
def open(self, fn): return open_mask(fn, div=True)
class SegmentationItemListWithDiv(SegmentationItemList):
_label_cls = SegmentationLabelListWithDiv
codes = [0,1]
np.random.seed(42)
src = (SegmentationItemListWithDiv.from_folder(path)
.filter_by_func(lambda x:not x.name.endswith('_mask.tif'))
.split_by_rand_pct(0.2)
.label_from_func(getMask, classes=codes))
data = (src.transform(get_transforms(),size=256,tfm_y=True)
.databunch(bs=8,num_workers=0)
.normalize(imagenet_stats))
data.train_ds
data.valid_ds
data.show_batch(2) | _____no_output_____ | Apache-2.0 | Brain MRI Segmentation.ipynb | shahidhj/Deep-Learning-notebooks |
Building the model

A pretrained ResNet-34 is used as the encoder (downsampling path) of the U-Net. | learn = unet_learner(data,models.resnet34,metrics=[dice])
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(4,1e-4)
learn.unfreeze()
learn.fit_one_cycle(4,1e-4,wd=1e-2) | _____no_output_____ | Apache-2.0 | Brain MRI Segmentation.ipynb | shahidhj/Deep-Learning-notebooks |
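To visually inspect the trained U-Net before moving on, you can plot predicted masks next to the ground truth (an optional step, not in the original notebook):
learn.show_results(rows=2, figsize=(8, 8))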
ResNet-34 without pretrained weights, with dilation | learn.save('ResnetWithPrettrained')
learn.summary()
def conv(ni,nf):
return nn.Conv2d(ni, nf, kernel_size=3, stride=2, padding=1,dilation=2)
def conv2(ni,nf): return conv_layer(ni,nf)
models.resnet34
model = nn.Sequential(
conv2(3,8),
res_block(8),
conv2(8,16),
res_block(16),
conv2(16,32),
res_block(32),
)
act_fn = nn.ReLU(inplace=True)
def presnet(block, n_layers, name, pre=False, **kwargs):
model = PResNet(block, n_layers, **kwargs)
#if pre: model.load_state_dict(model_zoo.load_url(model_urls[name]))
if pre: model.load_state_dict(torch.load(model_urls[name]))
return model
def presnet18(pretrained=False, **kwargs):
return presnet(BasicBlock, [2, 2, 2, 2], 'presnet18', pre=pretrained, **kwargs)
class ResBlock(nn.Module):
def __init__(self, nf):
super().__init__()
self.conv1 = conv_layer(nf,nf)
self.conv2 = conv_layer(nf,nf)
def forward(self, x): return x + self.conv2(self.conv1(x))
model3 = nn.Sequential(
conv2(3, 8),
res_block(8),
conv2(8, 16),
res_block(16),
conv2(16, 32),
res_block(32),
conv2(32, 16),
res_block(16),
conv2(16, 10),
)
model3
learn3 = unet_learner(data,drn_d_38,metrics=[dice],pretrained=True)
learn3.lr_find()
learn3.recorder.plot()
learn3.fit_one_cycle(4,1e-4,wd=1e-1)
learn3.unfreeze()
learn3.fit_one_cycle(4,1e-4,wd=1e-2)
learn3.save('ResnetWithPretrainedDilation')
learn3.summary()
import pdb
import torch.nn as nn
import math
import torch.utils.model_zoo as model_zoo
BatchNorm = nn.BatchNorm2d
# __all__ = ['DRN', 'drn26', 'drn42', 'drn58']
webroot = 'http://dl.yf.io/drn/'
model_urls = {
'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',
'drn-c-26': webroot + 'drn_c_26-ddedf421.pth',
'drn-c-42': webroot + 'drn_c_42-9d336e8c.pth',
'drn-c-58': webroot + 'drn_c_58-0a53a92c.pth',
'drn-d-22': webroot + 'drn_d_22-4bd2f8ea.pth',
'drn-d-38': webroot + 'drn_d_38-eebb45f0.pth',
'drn-d-54': webroot + 'drn_d_54-0e0534ff.pth',
'drn-d-105': webroot + 'drn_d_105-12b40979.pth'
}
def conv3x3(in_planes, out_planes, stride=1, padding=1, dilation=1):
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
padding=padding, bias=False, dilation=dilation)
class BasicBlock(nn.Module):
expansion = 1
def __init__(self, inplanes, planes, stride=1, downsample=None,
dilation=(1, 1), residual=True):
super(BasicBlock, self).__init__()
self.conv1 = conv3x3(inplanes, planes, stride,
padding=dilation[0], dilation=dilation[0])
self.bn1 = BatchNorm(planes)
self.relu = nn.ReLU(inplace=True)
self.conv2 = conv3x3(planes, planes,
padding=dilation[1], dilation=dilation[1])
self.bn2 = BatchNorm(planes)
self.downsample = downsample
self.stride = stride
self.residual = residual
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
if self.downsample is not None:
residual = self.downsample(x)
if self.residual:
out += residual
out = self.relu(out)
return out
class Bottleneck(nn.Module):
expansion = 4
def __init__(self, inplanes, planes, stride=1, downsample=None,
dilation=(1, 1), residual=True):
super(Bottleneck, self).__init__()
self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
self.bn1 = BatchNorm(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
padding=dilation[1], bias=False,
dilation=dilation[1])
self.bn2 = BatchNorm(planes)
self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
self.bn3 = BatchNorm(planes * 4)
self.relu = nn.ReLU(inplace=True)
self.downsample = downsample
self.stride = stride
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu(out)
out = self.conv3(out)
out = self.bn3(out)
if self.downsample is not None:
residual = self.downsample(x)
out += residual
out = self.relu(out)
return out
class DRN(nn.Module):
def __init__(self, block, layers, num_classes=1000,
channels=(16, 32, 64, 128, 256, 512, 512, 512),
out_map=False, out_middle=False, pool_size=28, arch='D'):
super(DRN, self).__init__()
self.inplanes = channels[0]
self.out_map = out_map
self.out_dim = channels[-1]
self.out_middle = out_middle
self.arch = arch
if arch == 'C':
self.conv1 = nn.Conv2d(3, channels[0], kernel_size=7, stride=1,
padding=3, bias=False)
self.bn1 = BatchNorm(channels[0])
self.relu = nn.ReLU(inplace=True)
self.layer1 = self._make_layer(
BasicBlock, channels[0], layers[0], stride=1)
self.layer2 = self._make_layer(
BasicBlock, channels[1], layers[1], stride=2)
elif arch == 'D':
self.layer0 = nn.Sequential(
nn.Conv2d(3, channels[0], kernel_size=7, stride=1, padding=3,
bias=False),
BatchNorm(channels[0]),
nn.ReLU(inplace=True)
)
self.layer1 = self._make_conv_layers(
channels[0], layers[0], stride=1)
self.layer2 = self._make_conv_layers(
channels[1], layers[1], stride=2)
self.layer3 = self._make_layer(block, channels[2], layers[2], stride=2)
self.layer4 = self._make_layer(block, channels[3], layers[3], stride=2)
self.layer5 = self._make_layer(block, channels[4], layers[4],
dilation=2, new_level=False)
self.layer6 = None if layers[5] == 0 else \
self._make_layer(block, channels[5], layers[5], dilation=4,
new_level=False)
if arch == 'C':
self.layer7 = None if layers[6] == 0 else \
self._make_layer(BasicBlock, channels[6], layers[6], dilation=2,
new_level=False, residual=False)
self.layer8 = None if layers[7] == 0 else \
self._make_layer(BasicBlock, channels[7], layers[7], dilation=1,
new_level=False, residual=False)
elif arch == 'D':
self.layer7 = None if layers[6] == 0 else \
self._make_conv_layers(channels[6], layers[6], dilation=2)
self.layer8 = None if layers[7] == 0 else \
self._make_conv_layers(channels[7], layers[7], dilation=1)
if num_classes > 0:
self.avgpool = nn.AvgPool2d(pool_size)
self.fc = nn.Conv2d(self.out_dim, num_classes, kernel_size=1,
stride=1, padding=0, bias=True)
for m in self.modules():
if isinstance(m, nn.Conv2d):
n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
m.weight.data.normal_(0, math.sqrt(2. / n))
elif isinstance(m, BatchNorm):
m.weight.data.fill_(1)
m.bias.data.zero_()
def _make_layer(self, block, planes, blocks, stride=1, dilation=1,
new_level=True, residual=True):
assert dilation == 1 or dilation % 2 == 0
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
nn.Conv2d(self.inplanes, planes * block.expansion,
kernel_size=1, stride=stride, bias=False),
BatchNorm(planes * block.expansion),
)
layers = list()
layers.append(block(
self.inplanes, planes, stride, downsample,
dilation=(1, 1) if dilation == 1 else (
dilation // 2 if new_level else dilation, dilation),
residual=residual))
self.inplanes = planes * block.expansion
for i in range(1, blocks):
layers.append(block(self.inplanes, planes, residual=residual,
dilation=(dilation, dilation)))
return nn.Sequential(*layers)
def _make_conv_layers(self, channels, convs, stride=1, dilation=1):
modules = []
for i in range(convs):
modules.extend([
nn.Conv2d(self.inplanes, channels, kernel_size=3,
stride=stride if i == 0 else 1,
padding=dilation, bias=False, dilation=dilation),
BatchNorm(channels),
nn.ReLU(inplace=True)])
self.inplanes = channels
return nn.Sequential(*modules)
def forward(self, x):
y = list()
if self.arch == 'C':
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
elif self.arch == 'D':
x = self.layer0(x)
x = self.layer1(x)
y.append(x)
x = self.layer2(x)
y.append(x)
x = self.layer3(x)
y.append(x)
x = self.layer4(x)
y.append(x)
x = self.layer5(x)
y.append(x)
if self.layer6 is not None:
x = self.layer6(x)
y.append(x)
if self.layer7 is not None:
x = self.layer7(x)
y.append(x)
if self.layer8 is not None:
x = self.layer8(x)
y.append(x)
if self.out_map:
x = self.fc(x)
else:
x = self.avgpool(x)
x = self.fc(x)
x = x.view(x.size(0), -1)
if self.out_middle:
return x, y
else:
return x
class DRN_A(nn.Module):
def __init__(self, block, layers, num_classes=1000):
self.inplanes = 64
super(DRN_A, self).__init__()
self.out_dim = 512 * block.expansion
self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,
bias=False)
self.bn1 = nn.BatchNorm2d(64)
self.relu = nn.ReLU(inplace=True)
self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
self.layer1 = self._make_layer(block, 64, layers[0])
self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
self.layer3 = self._make_layer(block, 256, layers[2], stride=1,
dilation=2)
self.layer4 = self._make_layer(block, 512, layers[3], stride=1,
dilation=4)
self.avgpool = nn.AvgPool2d(28, stride=1)
self.fc = nn.Linear(512 * block.expansion, num_classes)
for m in self.modules():
if isinstance(m, nn.Conv2d):
n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
m.weight.data.normal_(0, math.sqrt(2. / n))
elif isinstance(m, BatchNorm):
m.weight.data.fill_(1)
m.bias.data.zero_()
# for m in self.modules():
# if isinstance(m, nn.Conv2d):
# nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
# elif isinstance(m, nn.BatchNorm2d):
# nn.init.constant_(m.weight, 1)
# nn.init.constant_(m.bias, 0)
def _make_layer(self, block, planes, blocks, stride=1, dilation=1):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
nn.Conv2d(self.inplanes, planes * block.expansion,
kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(planes * block.expansion),
)
layers = []
layers.append(block(self.inplanes, planes, stride, downsample))
self.inplanes = planes * block.expansion
for i in range(1, blocks):
layers.append(block(self.inplanes, planes,
dilation=(dilation, dilation)))
return nn.Sequential(*layers)
def forward(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.maxpool(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
x = self.avgpool(x)
x = x.view(x.size(0), -1)
x = self.fc(x)
return x
def drn_a_50(pretrained=False, **kwargs):
model = DRN_A(Bottleneck, [3, 4, 6, 3], **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['resnet50']))
return model
def drn_c_26(pretrained=False, **kwargs):
model = DRN(BasicBlock, [1, 1, 2, 2, 2, 2, 1, 1], arch='C', **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['drn-c-26']))
return model
def drn_c_42(pretrained=False, **kwargs):
model = DRN(BasicBlock, [1, 1, 3, 4, 6, 3, 1, 1], arch='C', **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['drn-c-42']))
return model
def drn_c_58(pretrained=False, **kwargs):
model = DRN(Bottleneck, [1, 1, 3, 4, 6, 3, 1, 1], arch='C', **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['drn-c-58']))
return model
def drn_d_22(pretrained=False, **kwargs):
model = DRN(BasicBlock, [1, 1, 2, 2, 2, 2, 1, 1], arch='D', **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['drn-d-22']))
return model
def drn_d_24(pretrained=False, **kwargs):
model = DRN(BasicBlock, [1, 1, 2, 2, 2, 2, 2, 2], arch='D', **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['drn-d-24']))
return model
def drn_d_38(pretrained=False, **kwargs):
model = DRN(BasicBlock, [1, 1, 3, 4, 6, 3, 1, 1], arch='D', **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['drn-d-38']))
return model
def drn_d_40(pretrained=False, **kwargs):
model = DRN(BasicBlock, [1, 1, 3, 4, 6, 3, 2, 2], arch='D', **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['drn-d-40']))
return model
def drn_d_54(pretrained=False, **kwargs):
model = DRN(Bottleneck, [1, 1, 3, 4, 6, 3, 1, 1], arch='D', **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['drn-d-54']))
return model
def drn_d_56(pretrained=False, **kwargs):
model = DRN(Bottleneck, [1, 1, 3, 4, 6, 3, 2, 2], arch='D', **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['drn-d-56']))
return model
def drn_d_105(pretrained=False, **kwargs):
model = DRN(Bottleneck, [1, 1, 3, 4, 23, 3, 1, 1], arch='D', **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['drn-d-105']))
return model
def drn_d_107(pretrained=False, **kwargs):
model = DRN(Bottleneck, [1, 1, 3, 4, 23, 3, 2, 2], arch='D', **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['drn-d-107']))
return model
import torch.nn as nn
import torch.utils.model_zoo as model_zoo
__all__ = ['ResNet', 'resnet18', 'resnet34', 'resnet50', 'resnet101',
'resnet152']
model_urls = {
'resnet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth',
'resnet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth',
'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',
'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth',
'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth',
}
def conv3x3(in_planes, out_planes, stride=1,padding=1,dilation=1):
"""3x3 convolution with padding"""
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
                     padding=padding, bias=False, dilation=dilation)
def conv1x1(in_planes, out_planes, stride=1):
"""1x1 convolution"""
return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)
class BasicBlock(nn.Module):
expansion = 1
def __init__(self, inplanes, planes, stride=1, downsample=None,dilation=(1,1)):
super(BasicBlock, self).__init__()
self.conv1 = conv3x3(inplanes, planes, stride,padding=dilation[0],dilation=dilation[1])
self.bn1 = nn.BatchNorm2d(planes)
self.relu = nn.ReLU(inplace=True)
self.conv2 = conv3x3(planes, planes,padding=dilation[1],dilation=dilation[1])
self.bn2 = nn.BatchNorm2d(planes)
self.downsample = downsample
self.stride = stride
def forward(self, x):
identity = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
if self.downsample is not None:
identity = self.downsample(x)
out += identity
out = self.relu(out)
return out
class Bottleneck(nn.Module):
expansion = 4
def __init__(self, inplanes, planes, stride=1, downsample=None,dilation=(1,1)):
super(Bottleneck, self).__init__()
self.conv1 = conv1x1(inplanes, planes)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = conv3x3(planes, planes, stride)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = conv1x1(planes, planes * self.expansion)
self.bn3 = nn.BatchNorm2d(planes * self.expansion)
self.relu = nn.ReLU(inplace=True)
self.downsample = downsample
self.stride = stride
def forward(self, x):
identity = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu(out)
out = self.conv3(out)
out = self.bn3(out)
if self.downsample is not None:
identity = self.downsample(x)
out += identity
out = self.relu(out)
return out
class ResNet(nn.Module):
def __init__(self, block, layers, num_classes=1000, zero_init_residual=False):
super(ResNet, self).__init__()
self.inplanes = 64
self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,
bias=False)
self.bn1 = nn.BatchNorm2d(64)
self.relu = nn.ReLU(inplace=True)
self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
self.layer1 = self._make_layer(block, 64, layers[0])
self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
self.fc = nn.Linear(512 * block.expansion, num_classes)
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
elif isinstance(m, nn.BatchNorm2d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
# Zero-initialize the last BN in each residual branch,
# so that the residual branch starts with zeros, and each residual block behaves like an identity.
# This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677
if zero_init_residual:
for m in self.modules():
if isinstance(m, Bottleneck):
nn.init.constant_(m.bn3.weight, 0)
elif isinstance(m, BasicBlock):
nn.init.constant_(m.bn2.weight, 0)
def _make_layer(self, block, planes, blocks, stride=1):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
conv1x1(self.inplanes, planes * block.expansion, stride),
nn.BatchNorm2d(planes * block.expansion),
)
layers = []
layers.append(block(self.inplanes, planes, stride, downsample))
self.inplanes = planes * block.expansion
for _ in range(1, blocks):
layers.append(block(self.inplanes, planes))
return nn.Sequential(*layers)
def forward(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.maxpool(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
x = self.avgpool(x)
x = x.view(x.size(0), -1)
x = self.fc(x)
return x
def resnet18(pretrained=False, **kwargs):
"""Constructs a ResNet-18 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['resnet18']))
return model
def resnet34(pretrained=False, **kwargs):
"""Constructs a ResNet-34 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(BasicBlock, [3, 4, 6, 3], **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['resnet34']))
return model
def resnet50(pretrained=False, **kwargs):
"""Constructs a ResNet-50 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['resnet50']))
return model
def resnet101(pretrained=False, **kwargs):
"""Constructs a ResNet-101 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['resnet101']))
return model
def resnet152(pretrained=False, **kwargs):
"""Constructs a ResNet-152 model.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = ResNet(Bottleneck, [3, 8, 36, 3], **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['resnet152']))
return model
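# --- Usage sketch (added for illustration; not part of the original file) ---
# Builds a ResNet-18 from the definitions above and swaps the final fully
# connected layer for a new task. The 2-class head and 224x224 input are
# illustrative assumptions, not values taken from the source notebook.
if __name__ == '__main__':
    import torch
    backbone = resnet18(pretrained=False)                # pretrained=True fetches ImageNet weights via model_zoo
    backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # re-head for a binary task
    backbone.eval()
    with torch.no_grad():
        logits = backbone(torch.randn(1, 3, 224, 224))
    print('logits shape:', tuple(logits.shape))          # expected: (1, 2)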
| _____no_output_____ | Apache-2.0 | Brain MRI Segmentation.ipynb | shahidhj/Deep-Learning-notebooks |
# Supervised Learning: Finding Donors for *CharityML*

> Udacity Machine Learning Engineer Nanodegree: _Project 2_
>
> Author: _Ke Zhang_
>
> Submission Date: _2017-04-30_ (Revision 3)

## Content

- [Getting Started](#Getting-Started)
- [Exploring the Data](#Exploring-the-Data)
- [Preparing the Data](#Preparing-the-Data)
- [Evaluating Model Performance](#Evaluating-Model-Performance)
- [Improving Results](#Improving-Results)
- [Feature Importance](#Feature-Importance)
- [References](#References)
- [Reproduction Environment](#Reproduction-Environment)

## Getting Started

In this project, you will employ several supervised algorithms of your choice to accurately model individuals' income using data collected from the 1994 U.S. Census. You will then choose the best candidate algorithm from preliminary results and further optimize it to best model the data. Your goal with this implementation is to construct a model that accurately predicts whether an individual makes more than $50,000. This sort of task can arise in a non-profit setting, where organizations survive on donations. Understanding an individual's income can help a non-profit better judge how large a donation to request, or whether it should reach out at all. While it can be difficult to determine an individual's general income bracket directly from public sources, we can (as we will see) infer this value from other publicly available features.

The dataset for this project originates from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Census+Income). It was donated by Ron Kohavi and Barry Becker after being published in the article _"Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid"_; you can find the article by Ron Kohavi [online](https://www.aaai.org/Papers/KDD/1996/KDD96-033.pdf). The data we investigate here contains small changes to the original dataset, such as removing the `'fnlwgt'` feature and records with missing or ill-formatted entries.

----

## Exploring the Data

Run the code cell below to load the necessary Python libraries and the census data. Note that the last column of this dataset, `'income'`, will be our target label (whether an individual makes more than, or at most, $50,000 annually). All other columns are features about each individual in the census database.
import numpy as np
import pandas as pd
from time import time
from IPython.display import display # Allows the use of display() for DataFrames
import matplotlib.pyplot as plt
import seaborn as sns
# Import supplementary visualization code visuals.py
import visuals as vs
#sklearn makes lots of deprecation warnings...
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
# Pretty display for notebooks
%matplotlib inline
sns.set(style='white', palette='muted', color_codes=True)
sns.set_context('notebook', font_scale=1.2, rc={'lines.linewidth': 1.2})
# Load the Census dataset
data = pd.read_csv("census.csv")
# Success - Display the first record
display(data.head(n=1)) | _____no_output_____ | Apache-2.0 | p2_sl_finding_donors/p2_sl_finding_donors.ipynb | superkley/udacity-mlnd |
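As a next step, a minimal sketch of inspecting the class balance of the target column is shown below. It assumes the `'income'` column uses the `'<=50K'`/`'>50K'` labels common to this census dataset; the variable names are illustrative and not taken from the original notebook.

# Class balance of the target label (sketch; assumes '<=50K'/'>50K' encoding)
n_records = data.shape[0]
n_greater_50k = (data['income'] == '>50K').sum()
n_at_most_50k = (data['income'] == '<=50K').sum()
greater_percent = 100.0 * n_greater_50k / n_records
print("Total number of records: {}".format(n_records))
print("Individuals making more than $50,000: {}".format(n_greater_50k))
print("Individuals making at most $50,000: {}".format(n_at_most_50k))
print("Percentage of individuals making more than $50,000: {:.2f}%".format(greater_percent))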