markdown stringlengths 0-1.02M | code stringlengths 0-832k | output stringlengths 0-1.02M | license stringlengths 3-36 | path stringlengths 6-265 | repo_name stringlengths 6-127 |
---|---|---|---|---|---|
Connect to trino. The following cell creates a trino api connection. It assumes that your `credentials.env` file has been edited so that `TRINO_PASSWD` has a JWT token obtained from https://das-odh-trino.apps.odh-cl1.apps.os-climate.org/ . Your `TRINO_USER` value should be your github username. | import os
import trino
conn = trino.dbapi.connect(
host=os.environ['TRINO_HOST'],
port=int(os.environ['TRINO_PORT']),
user=os.environ['TRINO_USER'],
http_scheme='https',
auth=trino.auth.JWTAuthentication(os.environ['TRINO_PASSWD']),
verify=True,
)
cur = conn.cursor() | _____no_output_____ | FTL | notebooks/test-trino-access.ipynb | os-climate/data-platform-demo |
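The cell above reads the connection settings from environment variables; a minimal sketch of how `credentials.env` might be loaded first. This is an assumption, not code from the notebook: it uses the third-party `python-dotenv` package.

```python
# Sketch only: load credentials.env into the environment before creating the connection.
# Assumes python-dotenv is installed and credentials.env sits next to this notebook.
import os
from dotenv import load_dotenv

load_dotenv('credentials.env')  # populates TRINO_HOST, TRINO_PORT, TRINO_USER, TRINO_PASSWD
assert os.environ.get('TRINO_PASSWD'), 'TRINO_PASSWD (JWT token) is missing'
```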
Test your trino connection. This cell shows all the catalogs visible to you. If your trino api connection initialized correctly above, this `show catalogs` command should always succeed. | cur.execute('show catalogs')
cur.fetchall() | _____no_output_____ | FTL | notebooks/test-trino-access.ipynb | os-climate/data-platform-demo |
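If `show catalogs` succeeds, a natural next check is to list the schemas of one of the returned catalogs. A small sketch; the catalog picked here is simply whatever comes back first, so adjust as needed:

```python
# Sketch: reuse the cursor to drill one level deeper into the first visible catalog.
cur.execute('show catalogs')
catalogs = [row[0] for row in cur.fetchall()]
cur.execute(f'show schemas from {catalogs[0]}')
print(catalogs[0], '->', cur.fetchall())
```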
Gaussian discriminant analysis with the same covariance matrix for the two class distributions, and hence a linear separator. Implemented in scikit-learn. Evaluated with cross-validation. | import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
import pandas as pd
import numpy as np
import scipy.stats as st
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
import sklearn.metrics as mt
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
plt.style.use('fivethirtyeight')
plt.rcParams['font.family'] = 'sans-serif'
plt.rcParams['font.serif'] = 'Ubuntu'
plt.rcParams['font.monospace'] = 'Ubuntu Mono'
plt.rcParams['font.size'] = 10
plt.rcParams['axes.labelsize'] = 10
plt.rcParams['axes.labelweight'] = 'bold'
plt.rcParams['axes.titlesize'] = 10
plt.rcParams['xtick.labelsize'] = 8
plt.rcParams['ytick.labelsize'] = 8
plt.rcParams['legend.fontsize'] = 10
plt.rcParams['figure.titlesize'] = 12
plt.rcParams['image.cmap'] = 'jet'
plt.rcParams['image.interpolation'] = 'none'
plt.rcParams['figure.figsize'] = (16, 8)
plt.rcParams['lines.linewidth'] = 2
plt.rcParams['lines.markersize'] = 8
colors = ['#008fd5', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b', '#810f7c',
'#137e6d', '#be0119', '#3b638c', '#af6f09', '#008fd5', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b',
'#810f7c', '#137e6d', '#be0119', '#3b638c', '#af6f09']
cmap = mcolors.LinearSegmentedColormap.from_list("", ["#82cafc", "#069af3", "#0485d1", colors[0], colors[8]]) | _____no_output_____ | MIT | codici/.ipynb_checkpoints/gda-lin-sk-cv-checkpoint.ipynb | tvml/fo2021 |
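To make the claim above concrete (shared covariance matrix, hence a linear separator), the log-ratio of the class posteriors under Gaussian class conditionals with a common covariance $\Sigma$ reduces to a function that is linear in $x$. This is standard GDA algebra, stated here only for reference:

$$\log\frac{p(C_1\mid x)}{p(C_0\mid x)} = (\mu_1-\mu_0)^\top\Sigma^{-1}x \;-\; \tfrac{1}{2}(\mu_1+\mu_0)^\top\Sigma^{-1}(\mu_1-\mu_0) \;+\; \log\frac{p(C_1)}{p(C_0)}$$

The quadratic terms $x^\top\Sigma^{-1}x$ of the two Gaussians cancel because $\Sigma$ is shared, so the decision boundary (log-ratio equal to zero) is a line.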
We read the data from a CSV file into a pandas dataframe. Each record has 3 values: the first two are the features and are assigned to the dataframe columns x1 and x2; the third is the target value, assigned to column t. A feature matrix X and a target vector t are then built. | # read the data into a pandas dataframe
data = pd.read_csv("../../data/ex2data1.txt", header= None,delimiter=',', names=['x1','x2','t'])
# compute the dataset size
n = len(data)
n0 = len(data[data.t==0])
# compute the number of features
features = data.columns
nfeatures = len(features)-1
X = np.array(data[features[:-1]])
t = np.array(data['t'])
| _____no_output_____ | MIT | codici/.ipynb_checkpoints/gda-lin-sk-cv-checkpoint.ipynb | tvml/fo2021 |
Visualize the dataset. | fig = plt.figure(figsize=(16,8))
ax = fig.gca()
ax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, color=colors[0], alpha=.7)
ax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)
plt.xlabel('$x_1$', fontsize=12)
plt.ylabel('$x_2$', fontsize=12)
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
plt.title('Dataset', fontsize=12)
plt.show() | _____no_output_____ | MIT | codici/.ipynb_checkpoints/gda-lin-sk-cv-checkpoint.ipynb | tvml/fo2021 |
Define a classifier based on linear GDA (shared covariance) and train it on the dataset. | clf = LinearDiscriminantAnalysis(store_covariance=True)
clf.fit(X, t) | _____no_output_____ | MIT | codici/.ipynb_checkpoints/gda-lin-sk-cv-checkpoint.ipynb | tvml/fo2021 |
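As a quick illustration (a sketch, not part of the original notebook): with the model fitted above, the linear decision boundary $w^\top x + b = 0$ can be read directly from the scikit-learn attributes `coef_` and `intercept_`.

```python
# Sketch: print the coefficients of the fitted linear separator w^T x + b = 0.
w = np.ravel(clf.coef_)
b = float(np.ravel(clf.intercept_)[0])
print('w =', w, ' b =', b)
print('boundary: {:.4f}*x1 + {:.4f}*x2 + {:.4f} = 0'.format(w[0], w[1], b))
```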
Define the 100x100 grid used to visualize the various distributions. | # x-coordinates of the grid points
u = np.linspace(min(X[:,0]), max(X[:,0]), 100)
# y-coordinates of the grid points
v = np.linspace(min(X[:,1]), max(X[:,1]), 100)
# build the grid points: the point at position i,j in the grid has x-coordinate U(i,j) and y-coordinate V(i,j)
U, V = np.meshgrid(u, v) | _____no_output_____ | MIT | codici/.ipynb_checkpoints/gda-lin-sk-cv-checkpoint.ipynb | tvml/fo2021 |
Compute, on the grid points, the class-conditional probabilities $p(x|C_0), p(x|C_1)$ and the class posterior probabilities $p(C_0|x), p(C_1|x)$. | # posterior probabilities of the two classes on the grid
Z = clf.predict_proba(np.c_[U.ravel(), V.ravel()])
pp0 = Z[:, 0].reshape(U.shape)
pp1 = Z[:, 1].reshape(V.shape)
# ratio of the class posterior probabilities at every grid point
z=pp0/pp1
# class-conditional probabilities for the two classes on the grid
mu0 = clf.means_[0]
mu1 = clf.means_[1]
sigma = clf.covariance_
vf0=np.vectorize(lambda x,y:st.multivariate_normal.pdf([x,y],mu0,sigma))
vf1=np.vectorize(lambda x,y:st.multivariate_normal.pdf([x,y],mu1,sigma))
p0=vf0(U,V)
p1=vf1(U,V) | _____no_output_____ | MIT | codici/.ipynb_checkpoints/gda-lin-sk-cv-checkpoint.ipynb | tvml/fo2021 |
Visualization of the distribution $p(x|C_0)$ | fig = plt.figure(figsize=(16,8))
ax = fig.gca()
# show the class C0 probability as a heatmap
imshow_handle = plt.imshow(p0, origin='lower', extent=(min(X[:,0]), max(X[:,0]), min(X[:,1]), max(X[:,1])), alpha=.7)
plt.contour(U, V, p0, linewidths=[.7], colors=[colors[6]])
# plot the dataset points
ax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, c=colors[0], alpha=.7)
ax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)
# plot the mean of the distribution
ax.scatter(mu0[0], mu0[1], s=150,c=colors[3], marker='*', alpha=1)
# add titles, labels, etc.
plt.xlabel('$x_1$', fontsize=12)
plt.ylabel('$x_2$', fontsize=12)
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
plt.xlim(u.min(), u.max())
plt.ylim(v.min(), v.max())
plt.title('Distribuzione di $p(x|C_0)$', fontsize=12)
plt.show() | _____no_output_____ | MIT | codici/.ipynb_checkpoints/gda-lin-sk-cv-checkpoint.ipynb | tvml/fo2021 |
Visualization of the distribution $p(x|C_1)$ | fig = plt.figure(figsize=(16,8))
ax = fig.gca()
# show the class C1 probability as a heatmap
imshow_handle = plt.imshow(p1, origin='lower', extent=(min(X[:,0]), max(X[:,0]), min(X[:,1]), max(X[:,1])), alpha=.7)
plt.contour(U, V, p1, linewidths=[.7], colors=[colors[6]])
# plot the dataset points
ax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, c=colors[0], alpha=.7)
ax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)
# plot the mean of the distribution
ax.scatter(mu1[0], mu1[1], s=150,c=colors[3], marker='*', alpha=1)
# add titles, labels, etc.
plt.xlabel('$x_1$', fontsize=12)
plt.ylabel('$x_2$', fontsize=12)
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
plt.xlim(u.min(), u.max())
plt.ylim(v.min(), v.max())
plt.title('Distribuzione di $p(x|C_1)$', fontsize=12)
plt.show() | _____no_output_____ | MIT | codici/.ipynb_checkpoints/gda-lin-sk-cv-checkpoint.ipynb | tvml/fo2021 |
Visualization of $p(C_0|x)$ | fig = plt.figure(figsize=(8,8))
ax = fig.gca()
imshow_handle = plt.imshow(pp0, origin='lower', extent=(min(X[:,0]), max(X[:,0]), min(X[:,1]), max(X[:,1])), alpha=.7)
ax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, c=colors[0], alpha=.7)
ax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)
plt.contour(U, V, z, [1.0], colors=[colors[7]],linewidths=[1])
plt.xlabel('$x_1$', fontsize=12)
plt.ylabel('$x_2$', fontsize=12)
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
plt.xlim(u.min(), u.max())
plt.ylim(v.min(), v.max())
plt.title("Distribuzione di $p(C_0|x)$", fontsize=12)
plt.show() | _____no_output_____ | MIT | codici/.ipynb_checkpoints/gda-lin-sk-cv-checkpoint.ipynb | tvml/fo2021 |
Visualization of $p(C_1|x)$ | fig = plt.figure(figsize=(8,8))
ax = fig.gca()
imshow_handle = plt.imshow(pp1, origin='lower', extent=(min(X[:,0]), max(X[:,0]), min(X[:,1]), max(X[:,1])), alpha=.7)
ax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, c=colors[0], alpha=.7)
ax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)
plt.contour(U, V, z, [1.0], colors=[colors[7]],linewidths=[1])
plt.xlabel('$x_1$', fontsize=12)
plt.ylabel('$x_2$', fontsize=12)
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
plt.xlim(u.min(), u.max())
plt.ylim(v.min(), v.max())
plt.title("Distribuzione di $p(C_1|x)$", fontsize=12)
plt.show() | _____no_output_____ | MIT | codici/.ipynb_checkpoints/gda-lin-sk-cv-checkpoint.ipynb | tvml/fo2021 |
Apply 5-fold cross-validation to compute the accuracy, averaging the 5 returned scores. | print("Accuracy: {0:5.3f}".format(cross_val_score(clf, X, t, cv=5, scoring='accuracy').mean())) | Accuracy: 0.870
| MIT | codici/.ipynb_checkpoints/gda-lin-sk-cv-checkpoint.ipynb | tvml/fo2021 |
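A small optional variation (a sketch, not from the original notebook): inspect the five individual fold scores before averaging, to see how stable the estimate is across folds.

```python
# Sketch: show the per-fold accuracies as well as their mean.
scores = cross_val_score(clf, X, t, cv=5, scoring='accuracy')
print('fold accuracies:', np.round(scores, 3))
print('mean accuracy: {0:5.3f}'.format(scores.mean()))
```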
Part 04. In this part, the models created earlier will be used to make predictions. For that, they must be registered in TFX. To perform the predictions, the data used to train these models will be inserted into SAVIME, which will be in charge of sending and receiving the data to/from TFX. | import os
import sys
# The working directory must be moved one level up
if not 'notebooks' in os.listdir('.'):
current_dir = os.path.abspath(os.getcwd())
parent_dir = os.path.dirname(current_dir)
os.chdir(parent_dir)
# Set here the path to the data file: a JSON containing information about
# the x/y split used in part 01.
data_fp = 'saved_models_arima/data.json'
# Host and port on which SAVIME is listening
savime_host = '127.0.0.1'
savime_port = 65000
# TFX configuration
tfx_host = 'localhost'
tfx_port = 8501
# Data directory
data_dir = 'data'
# Location of the temperature array
dataset_path = os.path.join(data_dir, 'tiny-dataset.hdf5')
%load_ext autoreload
%autoreload 2
%matplotlib agg
from IPython.display import HTML
import json
import h5py
import numpy as np
import seaborn as sns
import tensorflow as tf
from src.animation import animate_heat_map
from src.predictor_consumer import PredictionConsumer
# Savime imports
import pysavime
from pysavime.util.converter import DataVariableBlockConverter
sns.set_context('notebook')
sns.set_style('whitegrid')
sns.set_palette(sns.color_palette("Paired"))
tf.get_logger().setLevel('ERROR')
with open(data_fp, 'r') as _in:
data = json.load(_in) | _____no_output_____ | MIT | notebooks-pt/Tutorial - Parte 04.ipynb | dnasc/savime-notebooks |
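For reference, the TFX/TensorFlow Serving endpoint configured above can also be queried directly over REST. A minimal sketch, assuming the model is already served under the standard `/v1/models/<name>:predict` route; this illustrates what the `PredictionConsumer` helper used later presumably wraps, and is not code from the original notebook.

```python
# Sketch: call TensorFlow Serving's REST predict endpoint directly.
import json
import numpy as np
import requests

def predict_via_rest(x, model_name, host=tfx_host, port=tfx_port):
    url = f'http://{host}:{port}/v1/models/{model_name}:predict'
    payload = {'instances': x.tolist()}  # x: numpy array shaped like the model input
    response = requests.post(url, data=json.dumps(payload))
    response.raise_for_status()
    return np.array(response.json()['predictions'])
```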
The first step is to convert the data into a format that SAVIME can process. | with h5py.File(dataset_path, 'r') as in_:
array = in_['real'][...]
# Specify the dimensions
time_series = ('time_series', range(array.shape[0]))
time_step = ('time_step', range(array.shape[1]))
pos_x = ('pos_x', range(array.shape[2]))
pos_y = ('pos_y', range(array.shape[3]))
# Remove the spurious last dimension
squeezed_array = np.squeeze(array, axis=-1)
# Save the array
temperatura_data_fp = os.path.join(data_dir, 'temperatura.data')
squeezed_array.ravel().astype('float64').tofile(temperatura_data_fp) | _____no_output_____ | MIT | notebooks-pt/Tutorial - Parte 04.ipynb | dnasc/savime-notebooks |
It is also necessary to split the input dataset into x and y. As mentioned in the previous part, each time series has 10 time steps, and the models were trained to predict the tenth time step from the previous nine. As an example, we select below one group of time series for which to predict temperatures. | # Only one group is selected for prediction.
chosen_model_name = data['model']
chosen_group_ix = 0
x = squeezed_array[[chosen_group_ix], :-1]
y = squeezed_array[[chosen_group_ix], 1:]
pc = PredictionConsumer(host=tfx_host, port=tfx_port, model_name=chosen_model_name)
y_hat = pc.predict(x)
anim_y = animate_heat_map(np.squeeze(y,axis=0))
anim_y_html = anim_y.to_html5_video()
anim_yhat = animate_heat_map(np.squeeze(y_hat,axis=0))
anim_yhat_html = anim_yhat.to_html5_video()
HTML(f'<div style="float: left;"> {anim_y_html} </div><div style="float: left;"> {anim_yhat_html} </div>')
num_models = 25
num_groups, num_time_steps, num_pos_x, num_pos_y = squeezed_array.shape
# Define the temperature dataset to be registered in SAVIME.
dataset = pysavime.define.file_dataset('temperature_data', temperatura_data_fp, 'double')
print('- Dataset CREATE query:', dataset.create_query_str())
# Define the TAR schema
group_dim = pysavime.define.implicit_tar_dimension('group', 'int32', 0, num_groups - 1)
time_step_dim = pysavime.define.implicit_tar_dimension('time_step', 'int32', 0, num_time_steps - 1)
pos_x_dim = pysavime.define.implicit_tar_dimension('pos_x', 'int32', 0, num_pos_x - 1)
pos_y_dim = pysavime.define.implicit_tar_dimension('pos_y', 'int32', 0, num_pos_y - 1)
temperature = pysavime.define.tar_attribute('temperature', 'double')
dims = [group_dim, time_step_dim, pos_x_dim, pos_y_dim]
attributes = [temperature]
tar = pysavime.define.tar('temperatures_tar', dims, attributes)
print('- Tar CREATE query:', tar.create_query_str())
# Define the single SubTAR that loads the dataset into the TAR created above.
group_dim_sub = pysavime.define.ordered_subtar_dimension(group_dim, 0, num_groups - 1, True)
time_step_dim_sub = pysavime.define.ordered_subtar_dimension(time_step_dim, 0, num_time_steps - 1, True)
pos_x_dim_sub = pysavime.define.ordered_subtar_dimension(pos_x_dim, 0, num_pos_x - 1, True)
pos_y_dim_sub = pysavime.define.ordered_subtar_dimension(pos_y_dim, 0, num_pos_y - 1, True)
temperature_sub = pysavime.define.subtar_attribute(temperature, dataset)
subtar_dims = [group_dim_sub, time_step_dim_sub, pos_x_dim_sub, pos_y_dim_sub]
subtar_attrs = [temperature_sub]
subtar = pysavime.define.subtar(tar, subtar_dims, subtar_attrs)
print('- SubTar LOAD query', subtar.load_query_str())
with pysavime.Client(host='127.0.0.1', port=65000, raise_silent_error=True) as client:
client.execute(pysavime.operator.create(dataset))
client.execute(pysavime.operator.create(tar))
client.execute(pysavime.operator.load(subtar)) | _____no_output_____ | MIT | notebooks-pt/Tutorial - Parte 04.ipynb | dnasc/savime-notebooks |
Below we check whether the data was correctly registered in SAVIME. | with pysavime.Client(host=savime_host, port=savime_port, raise_silent_error=True) as client:
response = client.execute(pysavime.operator.select(tar))[0]
is_the_same = np.isclose(response.attrs['temperature'].reshape(squeezed_array.shape),squeezed_array).all()
print('Checagem:', is_the_same) | Checagem: True
| MIT | notebooks-pt/Tutorial - Parte 04.ipynb | dnasc/savime-notebooks |
The next step is to run the PREDICT command. | # Select only the first 9 time steps
cmd = pysavime.operator.subset(tar, time_step_dim.name, 0, 8)
# Define the input and output dimensions of our model
input_dims_spec = [(group_dim.name, num_groups),
(time_step_dim.name, num_time_steps - 1),
(pos_x_dim.name, num_pos_x),
(pos_y_dim.name, num_pos_y)]
output_dims_spec = [("time", 9)]
register_cmd = pysavime.operator.register_model(model_identifier=chosen_model_name,
input_dim_specification=input_dims_spec,
output_dim_specification=output_dims_spec,
attribute_specification=[temperature.name])
predict_cmd = pysavime.operator.predict(tar=cmd, model_identifier=chosen_model_name)
print(register_cmd)
print(predict_cmd)
with pysavime.Client(host=savime_host, port=savime_port, raise_silent_error=True) as client:
client.execute(register_cmd)
response = client.execute(predict_cmd)[0]
pandas_converter = DataVariableBlockConverter('pandas')
pandas_converter(response) | _____no_output_____ | MIT | notebooks-pt/Tutorial - Parte 04.ipynb | dnasc/savime-notebooks |
Convolutional Neural Networks: Application

Welcome to Course 4's second assignment! In this notebook, you will:
- Implement helper functions that you will use when implementing a TensorFlow model
- Implement a fully functioning ConvNet using TensorFlow

**After this assignment you will be able to:**
- Build and train a ConvNet in TensorFlow for a classification problem

We assume here that you are already familiar with TensorFlow. If you are not, please refer to the *TensorFlow Tutorial* of the third week of Course 2 ("*Improving deep neural networks*").

Updates to Assignment

If you were working on a previous version
* The current notebook filename is version "1a".
* You can find your work in the file directory as version "1".
* To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory.

List of Updates
* `initialize_parameters`: added details about tf.get_variable, `eval`. Clarified test case.
* Added explanations for the kernel (filter) stride values, max pooling, and flatten functions.
* Added details about softmax cross entropy with logits.
* Added instructions for creating the Adam Optimizer.
* Added explanation of how to evaluate tensors (optimizer and cost).
* `forward_propagation`: clarified instructions, use "F" to store "flatten" layer.
* Updated print statements and 'expected output' for easier visual comparisons.
* Many thanks to Kevin P. Brown (mentor for the deep learning specialization) for his suggestions on the assignments in this course!

1.0 - TensorFlow model

In the previous assignment, you built helper functions using numpy to understand the mechanics behind convolutional neural networks. Most practical applications of deep learning today are built using programming frameworks, which have many built-in functions you can simply call. As usual, we will start by loading in the packages. | import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
import tensorflow as tf
from tensorflow.python.framework import ops
from cnn_utils import *
%matplotlib inline
np.random.seed(1) | _____no_output_____ | MIT | Convolution_model_Application_v1a.ipynb | Yfyangd/Deep_Learning |
Run the next cell to load the "SIGNS" dataset you are going to use. | # Loading the data (signs)
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset() | _____no_output_____ | MIT | Convolution_model_Application_v1a.ipynb | Yfyangd/Deep_Learning |
As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5. The next cell will show you an example of a labelled image in the dataset. Feel free to change the value of `index` below and re-run to see different examples. | # Example of a picture
index = 6
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index]))) | y = 2
| MIT | Convolution_model_Application_v1a.ipynb | Yfyangd/Deep_Learning |
In Course 2, you had built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it. To get started, let's examine the shapes of your data. | X_train = X_train_orig/255.
X_test = X_test_orig/255.
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
conv_layers = {} | number of training examples = 1080
number of test examples = 120
X_train shape: (1080, 64, 64, 3)
Y_train shape: (1080, 6)
X_test shape: (120, 64, 64, 3)
Y_test shape: (120, 6)
| MIT | Convolution_model_Application_v1a.ipynb | Yfyangd/Deep_Learning |
1.1 - Create placeholders

TensorFlow requires that you create placeholders for the input data that will be fed into the model when running the session.

**Exercise**: Implement the function below to create placeholders for the input image X and the output Y. You should not define the number of training examples for the moment. To do so, you could use "None" as the batch size; it will give you the flexibility to choose it later. Hence X should be of dimension **[None, n_H0, n_W0, n_C0]** and Y should be of dimension **[None, n_y]**. [Hint: search for the tf.placeholder documentation](https://www.tensorflow.org/api_docs/python/tf/placeholder). | # GRADED FUNCTION: create_placeholders
def create_placeholders(n_H0, n_W0, n_C0, n_y):
"""
Creates the placeholders for the tensorflow session.
Arguments:
n_H0 -- scalar, height of an input image
n_W0 -- scalar, width of an input image
n_C0 -- scalar, number of channels of the input
n_y -- scalar, number of classes
Returns:
X -- placeholder for the data input, of shape [None, n_H0, n_W0, n_C0] and dtype "float"
Y -- placeholder for the input labels, of shape [None, n_y] and dtype "float"
"""
### START CODE HERE ### (≈2 lines)
X = tf.placeholder(tf.float32, [None, n_H0, n_W0, n_C0])
Y = tf.placeholder(tf.float32, [None, n_y])
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(64, 64, 3, 6)
print ("X = " + str(X))
print ("Y = " + str(Y)) | X = Tensor("Placeholder:0", shape=(?, 64, 64, 3), dtype=float32)
Y = Tensor("Placeholder_1:0", shape=(?, 6), dtype=float32)
| MIT | Convolution_model_Application_v1a.ipynb | Yfyangd/Deep_Learning |
**Expected Output**

X = Tensor("Placeholder:0", shape=(?, 64, 64, 3), dtype=float32)
Y = Tensor("Placeholder_1:0", shape=(?, 6), dtype=float32)

1.2 - Initialize parameters

You will initialize weights/filters $W1$ and $W2$ using `tf.contrib.layers.xavier_initializer(seed = 0)`. You don't need to worry about bias variables as you will soon see that TensorFlow functions take care of the bias. Note also that you will only initialize the weights/filters for the conv2d functions. TensorFlow initializes the layers for the fully connected part automatically. We will talk more about that later in this assignment.

**Exercise:** Implement initialize_parameters(). The dimensions for each group of filters are provided below. Reminder - to initialize a parameter $W$ of shape [1,2,3,4] in Tensorflow, use:
```python
W = tf.get_variable("W", [1,2,3,4], initializer = ...)
```

tf.get_variable()

[Search for the tf.get_variable documentation](https://www.tensorflow.org/api_docs/python/tf/get_variable). Notice that the documentation says:
```
Gets an existing variable with these parameters or create a new one.
```
So we can use this function to create a tensorflow variable with the specified name, but if the variables already exist, it will get the existing variable with that same name. | # GRADED FUNCTION: initialize_parameters
def initialize_parameters():
"""
Initializes weight parameters to build a neural network with tensorflow. The shapes are:
W1 : [4, 4, 3, 8]
W2 : [2, 2, 8, 16]
Note that we will hard code the shape values in the function to make the grading simpler.
Normally, functions should take values as inputs rather than hard coding.
Returns:
parameters -- a dictionary of tensors containing W1, W2
"""
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 2 lines of code)
W1 = tf.get_variable("W1", [4, 4, 3, 8], initializer=tf.contrib.layers.xavier_initializer(seed=0))
W2 = tf.get_variable("W2", [2, 2, 8, 16], initializer=tf.contrib.layers.xavier_initializer(seed=0))
### END CODE HERE ###
parameters = {"W1": W1,
"W2": W2}
return parameters
tf.reset_default_graph()
with tf.Session() as sess_test:
parameters = initialize_parameters()
init = tf.global_variables_initializer()
sess_test.run(init)
print("W1[1,1,1] = \n" + str(parameters["W1"].eval()[1,1,1]))
print("W1.shape: " + str(parameters["W1"].shape))
print("\n")
print("W2[1,1,1] = \n" + str(parameters["W2"].eval()[1,1,1]))
print("W2.shape: " + str(parameters["W2"].shape)) | W1[1,1,1] =
[ 0.00131723 0.14176141 -0.04434952 0.09197326 0.14984085 -0.03514394
-0.06847463 0.05245192]
W1.shape: (4, 4, 3, 8)
W2[1,1,1] =
[-0.08566415 0.17750949 0.11974221 0.16773748 -0.0830943 -0.08058
-0.00577033 -0.14643836 0.24162132 -0.05857408 -0.19055021 0.1345228
-0.22779644 -0.1601823 -0.16117483 -0.10286498]
W2.shape: (2, 2, 8, 16)
| MIT | Convolution_model_Application_v1a.ipynb | Yfyangd/Deep_Learning |
** Expected Output:**```W1[1,1,1] = [ 0.00131723 0.14176141 -0.04434952 0.09197326 0.14984085 -0.03514394 -0.06847463 0.05245192]W1.shape: (4, 4, 3, 8)W2[1,1,1] = [-0.08566415 0.17750949 0.11974221 0.16773748 -0.0830943 -0.08058 -0.00577033 -0.14643836 0.24162132 -0.05857408 -0.19055021 0.1345228 -0.22779644 -0.1601823 -0.16117483 -0.10286498]W2.shape: (2, 2, 8, 16)``` 1.3 - Forward propagationIn TensorFlow, there are built-in functions that implement the convolution steps for you.- **tf.nn.conv2d(X,W, strides = [1,s,s,1], padding = 'SAME'):** given an input $X$ and a group of filters $W$, this function convolves $W$'s filters on X. The third parameter ([1,s,s,1]) represents the strides for each dimension of the input (m, n_H_prev, n_W_prev, n_C_prev). Normally, you'll choose a stride of 1 for the number of examples (the first value) and for the channels (the fourth value), which is why we wrote the value as `[1,s,s,1]`. You can read the full documentation on [conv2d](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d).- **tf.nn.max_pool(A, ksize = [1,f,f,1], strides = [1,s,s,1], padding = 'SAME'):** given an input A, this function uses a window of size (f, f) and strides of size (s, s) to carry out max pooling over each window. For max pooling, we usually operate on a single example at a time and a single channel at a time. So the first and fourth value in `[1,f,f,1]` are both 1. You can read the full documentation on [max_pool](https://www.tensorflow.org/api_docs/python/tf/nn/max_pool).- **tf.nn.relu(Z):** computes the elementwise ReLU of Z (which can be any shape). You can read the full documentation on [relu](https://www.tensorflow.org/api_docs/python/tf/nn/relu).- **tf.contrib.layers.flatten(P)**: given a tensor "P", this function takes each training (or test) example in the batch and flattens it into a 1D vector. * If a tensor P has the shape (m,h,w,c), where m is the number of examples (the batch size), it returns a flattened tensor with shape (batch_size, k), where $k=h \times w \times c$. "k" equals the product of all the dimension sizes other than the first dimension. * For example, given a tensor with dimensions [100,2,3,4], it flattens the tensor to be of shape [100, 24], where 24 = 2 * 3 * 4. You can read the full documentation on [flatten](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/flatten).- **tf.contrib.layers.fully_connected(F, num_outputs):** given the flattened input F, it returns the output computed using a fully connected layer. You can read the full documentation on [full_connected](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/fully_connected).In the last function above (`tf.contrib.layers.fully_connected`), the fully connected layer automatically initializes weights in the graph and keeps on training them as you train the model. Hence, you did not need to initialize those weights when initializing the parameters. Window, kernel, filterThe words "window", "kernel", and "filter" are used to refer to the same thing. This is why the parameter `ksize` refers to "kernel size", and we use `(f,f)` to refer to the filter size. Both "kernel" and "filter" refer to the "window." **Exercise**Implement the `forward_propagation` function below to build the following model: `CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED`. You should use the functions above. 
In detail, we will use the following parameters for all the steps: - Conv2D: stride 1, padding is "SAME" - ReLU - Max pool: Use an 8 by 8 filter size and an 8 by 8 stride, padding is "SAME" - Conv2D: stride 1, padding is "SAME" - ReLU - Max pool: Use a 4 by 4 filter size and a 4 by 4 stride, padding is "SAME" - Flatten the previous output. - FULLYCONNECTED (FC) layer: Apply a fully connected layer without an non-linear activation function. Do not call the softmax here. This will result in 6 neurons in the output layer, which then get passed later to a softmax. In TensorFlow, the softmax and cost function are lumped together into a single function, which you'll call in a different function when computing the cost. | # GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
"""
Implements the forward propagation for the model:
CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
Note that for simplicity and grading purposes, we'll hard-code some values
such as the stride and kernel (filter) sizes.
Normally, functions should take these values as function parameters.
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "W2"
the shapes are given in initialize_parameters
Returns:
Z3 -- the output of the last LINEAR unit
"""
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
W2 = parameters['W2']
### START CODE HERE ###
# CONV2D: stride of 1, padding 'SAME'
Z1 = tf.nn.conv2d(X, W1, strides=[1, 1, 1, 1], padding='SAME')
# RELU
A1 = tf.nn.relu(Z1)
# MAXPOOL: window 8x8, stride 8, padding 'SAME'
P1 = tf.nn.max_pool(A1, ksize=[1, 8, 8, 1], strides=[1, 8, 8, 1], padding='SAME')
# CONV2D: filters W2, stride 1, padding 'SAME'
Z2 = tf.nn.conv2d(P1, W2, strides=[1, 1, 1, 1], padding='SAME')
# RELU
A2 = tf.nn.relu(Z2)
# MAXPOOL: window 4x4, stride 4, padding 'SAME'
P2 = tf.nn.max_pool(A2, ksize=[1, 4, 4, 1], strides=[1, 4, 4, 1], padding='SAME')
# FLATTEN
F = tf.contrib.layers.flatten(P2)
    # FULLY-CONNECTED without a non-linear activation function (do not call softmax here).
# 6 neurons in output layer. Hint: one of the arguments should be "activation_fn=None"
Z3 = tf.contrib.layers.fully_connected(F, 6, activation_fn=None)
### END CODE HERE ###
return Z3
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(Z3, {X: np.random.randn(2,64,64,3), Y: np.random.randn(2,6)})
print("Z3 = \n" + str(a)) | Z3 =
[[-0.44670227 -1.57208765 -1.53049231 -2.31013036 -1.29104376 0.46852064]
[-0.17601591 -1.57972014 -1.4737016 -2.61672091 -1.00810647 0.5747785 ]]
| MIT | Convolution_model_Application_v1a.ipynb | Yfyangd/Deep_Learning |
**Expected Output**:
```
Z3 = 
[[-0.44670227 -1.57208765 -1.53049231 -2.31013036 -1.29104376  0.46852064]
 [-0.17601591 -1.57972014 -1.4737016  -2.61672091 -1.00810647  0.5747785 ]]
```

1.4 - Compute cost

Implement the compute cost function below. Remember that the cost function helps the neural network see how much the model's predictions differ from the correct labels. By adjusting the weights of the network to reduce the cost, the neural network can improve its predictions.

You might find these two functions helpful:
- **tf.nn.softmax_cross_entropy_with_logits(logits = Z, labels = Y):** computes the softmax entropy loss. This function both computes the softmax activation function as well as the resulting loss. You can check the full documentation [softmax_cross_entropy_with_logits](https://www.tensorflow.org/api_docs/python/tf/nn/softmax_cross_entropy_with_logits).
- **tf.reduce_mean:** computes the mean of elements across dimensions of a tensor. Use this to calculate the sum of the losses over all the examples to get the overall cost. You can check the full documentation [reduce_mean](https://www.tensorflow.org/api_docs/python/tf/reduce_mean).

Details on softmax_cross_entropy_with_logits (optional reading)
* Softmax is used to format outputs so that they can be used for classification. It assigns a value between 0 and 1 for each category, where the sum of all prediction values (across all possible categories) equals 1.
* Cross Entropy compares the model's predicted classifications with the actual labels and results in a numerical value representing the "loss" of the model's predictions.
* "Logits" are the result of multiplying the weights and adding the biases. Logits are passed through an activation function (such as a relu), and the result is called the "activation."
* The function named `softmax_cross_entropy_with_logits` takes logits as input (and not activations); it then uses the model to predict using softmax, and then compares the predictions with the true labels using cross entropy. These are done with a single function to optimize the calculations.

**Exercise**: Compute the cost below using the function above. | # GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
"""
Computes the cost
Arguments:
Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (number of examples, 6)
Y -- "true" labels vector placeholder, same shape as Z3
Returns:
cost - Tensor of the cost function
"""
### START CODE HERE ### (1 line of code)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=Z3, labels=Y))
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(cost, {X: np.random.randn(4,64,64,3), Y: np.random.randn(4,6)})
print("cost = " + str(a)) | cost = 2.91034
| MIT | Convolution_model_Application_v1a.ipynb | Yfyangd/Deep_Learning |
**Expected Output**:
```
cost = 2.91034
```

1.5 Model

Finally you will merge the helper functions you implemented above to build a model. You will train it on the SIGNS dataset.

**Exercise**: Complete the function below. The model below should:
- create placeholders
- initialize parameters
- forward propagate
- compute the cost
- create an optimizer

Finally you will create a session and run a for loop for num_epochs, get the mini-batches, and then for each mini-batch you will optimize the function. [Hint for initializing the variables](https://www.tensorflow.org/api_docs/python/tf/global_variables_initializer)

Adam Optimizer

You can use `tf.train.AdamOptimizer(learning_rate = ...)` to create the optimizer. The optimizer has a `minimize(loss=...)` function that you'll call to set the cost function that the optimizer will minimize. For details, check out the documentation for [Adam Optimizer](https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer)

Random mini batches

If you took course 2 of the deep learning specialization, you implemented `random_mini_batches()` in the "Optimization" programming assignment. This function returns a list of mini-batches. It is already implemented in the `cnn_utils.py` file and imported here, so you can call it like this:
```Python
minibatches = random_mini_batches(X, Y, mini_batch_size = 64, seed = 0)
```
(You will want to choose the correct variable names when you use it in your code).

Evaluating the optimizer and cost

Within a loop, for each mini-batch, you'll use the `tf.Session` object (named `sess`) to feed a mini-batch of inputs and labels into the neural network and evaluate the tensors for the optimizer as well as the cost. Remember that we built a graph data structure and need to feed it inputs and labels and use `sess.run()` in order to get values for the optimizer and cost. You'll use this kind of syntax:
```
output_for_var1, output_for_var2 = sess.run(
    fetches=[var1, var2],
    feed_dict={var_inputs: the_batch_of_inputs, var_labels: the_batch_of_labels}
)
```
* Notice that `sess.run` takes its first argument `fetches` as a list of objects that you want it to evaluate (in this case, we want to evaluate the optimizer and the cost).
* It also takes a dictionary for the `feed_dict` parameter.
* The keys are the `tf.placeholder` variables that we created in the `create_placeholders` function above.
* The values are the variables holding the actual numpy arrays for each mini-batch.
* The sess.run outputs a tuple of the evaluated tensors, in the same order as the list given to `fetches`.

For more information on how to use sess.run, see the [tf.Session.run](https://www.tensorflow.org/api_docs/python/tf/Sessionrun) documentation. | # GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.009,
num_epochs = 100, minibatch_size = 64, print_cost = True):
"""
Implements a three-layer ConvNet in Tensorflow:
CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
Arguments:
X_train -- training set, of shape (None, 64, 64, 3)
Y_train -- test set, of shape (None, n_y = 6)
X_test -- training set, of shape (None, 64, 64, 3)
Y_test -- test set, of shape (None, n_y = 6)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 100 epochs
Returns:
train_accuracy -- real number, accuracy on the train set (X_train)
test_accuracy -- real number, testing accuracy on the test set (X_test)
parameters -- parameters learnt by the model. They can then be used to predict.
"""
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep results consistent (tensorflow seed)
seed = 3 # to keep results consistent (numpy seed)
(m, n_H0, n_W0, n_C0) = X_train.shape
n_y = Y_train.shape[1]
costs = [] # To keep track of the cost
# Create Placeholders of the correct shape
### START CODE HERE ### (1 line)
X, Y = create_placeholders(n_H0, n_W0, n_C0, n_y)
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = initialize_parameters()
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = forward_propagation(X, parameters)
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = compute_cost(Z3, Y)
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer that minimizes the cost.
### START CODE HERE ### (1 line)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
### END CODE HERE ###
# Initialize all the variables globally
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
minibatch_cost = 0.
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
"""
# IMPORTANT: The line that runs the graph on a minibatch.
# Run the session to execute the optimizer and the cost.
# The feedict should contain a minibatch for (X,Y).
"""
### START CODE HERE ### (1 line)
_ , temp_cost = sess.run([optimizer, cost], feed_dict={X:minibatch_X, Y:minibatch_Y})
### END CODE HERE ###
minibatch_cost += temp_cost / num_minibatches
# Print the cost every epoch
if print_cost == True and epoch % 5 == 0:
print ("Cost after epoch %i: %f" % (epoch, minibatch_cost))
if print_cost == True and epoch % 1 == 0:
costs.append(minibatch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# Calculate the correct predictions
predict_op = tf.argmax(Z3, 1)
correct_prediction = tf.equal(predict_op, tf.argmax(Y, 1))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print(accuracy)
train_accuracy = accuracy.eval({X: X_train, Y: Y_train})
test_accuracy = accuracy.eval({X: X_test, Y: Y_test})
print("Train Accuracy:", train_accuracy)
print("Test Accuracy:", test_accuracy)
return train_accuracy, test_accuracy, parameters | _____no_output_____ | MIT | Convolution_model_Application_v1a.ipynb | Yfyangd/Deep_Learning |
Run the following cell to train your model for 100 epochs. Check if your cost after epoch 0 and 5 matches our output. If not, stop the cell and go back to your code! | _, _, parameters = model(X_train, Y_train, X_test, Y_test) | Cost after epoch 0: 1.917929
Cost after epoch 5: 1.506757
Cost after epoch 10: 0.955359
Cost after epoch 15: 0.845802
Cost after epoch 20: 0.701174
Cost after epoch 25: 0.571977
Cost after epoch 30: 0.518435
Cost after epoch 35: 0.495806
Cost after epoch 40: 0.429827
Cost after epoch 45: 0.407291
Cost after epoch 50: 0.366394
Cost after epoch 55: 0.376922
Cost after epoch 60: 0.299491
Cost after epoch 65: 0.338870
Cost after epoch 70: 0.316400
Cost after epoch 75: 0.310413
Cost after epoch 80: 0.249549
Cost after epoch 85: 0.243457
Cost after epoch 90: 0.200031
Cost after epoch 95: 0.175452
| MIT | Convolution_model_Application_v1a.ipynb | Yfyangd/Deep_Learning |
**Expected output**: although it may not match perfectly, your expected output should be close to ours and your cost value should decrease.

**Cost after epoch 0 =** 1.917929
**Cost after epoch 5 =** 1.506757
**Train Accuracy =** 0.940741
**Test Accuracy =** 0.783333

Congratulations! You have finished the assignment and built a model that recognizes SIGN language with almost 80% accuracy on the test set. If you wish, feel free to play around with this dataset further. You can actually improve its accuracy by spending more time tuning the hyperparameters, or using regularization (as this model clearly has a high variance). Once again, here's a thumbs up for your work! | fname = "images/thumbs_up.jpg"
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64,64))
plt.imshow(my_image) | _____no_output_____ | MIT | Convolution_model_Application_v1a.ipynb | Yfyangd/Deep_Learning |
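If you want to follow the tuning suggestion above, one possible (hypothetical) starting point is simply re-running `model` with different hyperparameters; the values below are arbitrary examples, not recommended settings.

```python
# Sketch: re-train with a smaller learning rate and more epochs (arbitrary example values).
_, _, parameters = model(X_train, Y_train, X_test, Y_test,
                         learning_rate=0.005, num_epochs=150, minibatch_size=32)
```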
Imports | import requests
import getpass
import pickle
import io
import time
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import itertools | _____no_output_____ | MIT | data/GenesisGrabber.ipynb | yoki31/visualize |
Login
https://www.statistikdaten.bayern.de/genesis/online?Menu=Anmeldungabreadcrumb | username = input()
password = getpass.getpass() | _____no_output_____ | MIT | data/GenesisGrabber.ipynb | yoki31/visualize |
Test login | class GenesisApi:
def __init__(self, username, password, polling_rate=5):
self.username = username
self.password = password
self.polling_rate = polling_rate
self.__base_url = 'https://www.statistikdaten.bayern.de/genesisWS/rest/2020/'
self.__base_params = {
'username': username,
'password': password,
'language': 'de'
}
self.__default_table_params = self.__base_params.copy()
self.__default_table_params.update({
'name': '',
'area': 'all',
'compress': 'false',
'transpose': 'false',
'startyear': '',
'endyear': '',
'timeslices': '',
'regionalvariable': '',
'regionalkey': '',
'classifyingkey1': '',
'classifyingvariable2': '',
'classifyingkey2': '',
'classifyingvariable3': '',
'classifyingkey3': '',
'job': 'true'
})
self.__default_jobs_params = self.__base_params.copy()
self.__default_jobs_params.update({
'selection': '',
'searchcriterion': 'code',
'sortcriterion': 'code',
'type': 'all',
'area': 'all',
'pagelength': '100'
})
self.__default_result_params = self.__base_params.copy()
self.__default_result_params.update({
'name': '',
'area': 'all',
'compress': 'false'
})
def check_login(self):
response = requests.get(self.__base_url + 'helloworld/logincheck', params=self.__base_params)
        # example response: b'{"Status":"Sie wurden erfolgreich an- und abgemeldet!","Username":"GB3U65P838"}'
try:
return response.json()['Status'] == 'Sie wurden erfolgreich an- und abgemeldet!'
except Exception as e:
return False
def get_table(self, name, startyear=''):
startyear = str(startyear)
params = self.__default_table_params.copy()
params['name'] = name
params['startyear'] = startyear
response = requests.get(self.__base_url + 'data/table', params=params)
data = response.json()
code = data['Status']['Code']
if (code == 0): # Success
return data
elif (code == 99): # Table is too big a job has been created
print('Table is too big, created a job.')
result_name = data['Status']['Content'].split(':', 1)[1][1:]
return self.get_job_result(result_name)
else:
params['password'] = '***'
print('Error requesting ' + name + ' with params:', params, 'response:', data)
return data
def is_job_ready(self, name):
params = self.__default_jobs_params.copy()
params['selection'] = 'Werteabruf ' + name
response = requests.get(self.__base_url + 'catalogue/jobs', params=params)
try:
return response.json()['List'][0]['State'] == 'Fertig'
except Exception as e:
return False
def delete_job_result(self, name):
params = self.__default_result_params.copy()
params['name'] = name
response = requests.get(self.__base_url + 'profile/removeResult', params=params)
return response
def get_job_result(self, name):
params = self.__default_result_params.copy()
params['name'] = name
while(not self.is_job_ready(name)):
print('Data is not ready waiting ' + str(self.polling_rate) + ' seconds longer.')
time.sleep(self.polling_rate)
response = requests.get(self.__base_url + 'data/result', params=params)
self.delete_job_result(name)
return response.json()
genesis = GenesisApi(username, password)
genesis.check_login() | _____no_output_____ | MIT | data/GenesisGrabber.ipynb | yoki31/visualize |
Download dataNote: This takes a long time | responses_demographic = {}
for year in range(1980, 2020 + 1):
print('Requesting table for the year ' + str(year))
response = genesis.get_table('12411-003r', year)
print('Got data')
responses_demographic[str(year)] = response
responses_area = {}
# 33111-201r 1980, 1984, 1988, 1992, 1996, 2000, 2004, 2008, 2009, 2010, 2011, 2012, 2013
# 33111-101r 2011 - 2015
# 33111-001r 2014 - 2020
for year in [1980, 1984, 1988, 1992, 1996, 2000, 2004, 2008, 2009, 2010, 2011, 2012, 2013]:
print('Requesting table for the year ' + str(year))
response = genesis.get_table('33111-201r', year)
print('Got data')
responses_area[str(year)] = response
for year in range(2014, 2020 + 1):
print('Requesting table for the year ' + str(year))
response = genesis.get_table('33111-001r', year)
print('Got data')
responses_area[str(year)] = response | _____no_output_____ | MIT | data/GenesisGrabber.ipynb | yoki31/visualize |
Convert to DataFrame | def convert_to_dataframe(response, start_at_line, date_line, header_line):
raw_content = response['Object']['Content']
content = raw_content.split('\n', start_at_line)
date = content[date_line].split(';',1)[0]
csv = io.StringIO(content[header_line] + '\n' + content[start_at_line].split('\n__________', 1)[0])
df = pd.read_csv(csv, ';')
df['date'] = pd.to_datetime(date, format='%d.%m.%Y')
return df | _____no_output_____ | MIT | data/GenesisGrabber.ipynb | yoki31/visualize |
Demographic | dfs = list()
for year, response in responses_demographic.items():
df = convert_to_dataframe(response, start_at_line=6, date_line=4, header_line=5)
dfs.append(df)
df_demographic = pd.concat(dfs, axis=0, ignore_index=True)
column_names = df_demographic.columns.values
column_names[0] = 'AGS'
column_names[1] = 'Gemeinde'
df_demographic.columns = column_names
df_demographic['Gemeinde'] = df_demographic['Gemeinde'].str.strip()
df_demographic['Insgesamt'] = pd.to_numeric(df_demographic['Insgesamt'], errors='coerce')
df_demographic['männlich'] = pd.to_numeric(df_demographic['männlich'], errors='coerce')
df_demographic['weiblich'] = pd.to_numeric(df_demographic['weiblich'], errors='coerce')
df_demographic
# TODO Filter regierungsbezirke
# TODO Filter male and female | _____no_output_____ | MIT | data/GenesisGrabber.ipynb | yoki31/visualize |
Area | dfs = list()
for year, response in responses_area.items():
df = convert_to_dataframe(response, start_at_line=10, date_line=5, header_line=8)
column_names = df.columns.values
column_names[0] = 'AGS'
column_names[1] = 'Gemeinde'
df.columns = column_names
for column_name in column_names[2: len(column_names) - 1]:
df[column_name] = pd.to_numeric(df[column_name].str.replace(',', '.'), errors='coerce')
df['Gemeinde'] = df['Gemeinde'].str.strip()
dfs.append(df)
df_area = pd.concat(dfs, axis=0, ignore_index=True)
df_area
# TODO Map old area codes to new ones
# TODO Map area codes to sealed and non-sealed
# TODO Filter regierungsbezirke | _____no_output_____ | MIT | data/GenesisGrabber.ipynb | yoki31/visualize |
Combined | df_all = pd.merge(df_area, df_demographic, how='left', on=['AGS', 'Gemeinde', 'date'])
df_all.rename(columns={'Insgesamt_x':'Insgesamt Fläche', 'Insgesamt_y':'Insgesamt Bewohner'}, inplace=True)
df_all | _____no_output_____ | MIT | data/GenesisGrabber.ipynb | yoki31/visualize |
Save and load data | df_demographic.to_pickle('df_demographic.pickle')
df_area.to_pickle('df_area.pickle')
with open('responses_demographic.pickle', 'wb') as f:
pickle.dump(responses_demographic, f, pickle.HIGHEST_PROTOCOL)
with open('responses_area.pickle', 'wb') as f:
pickle.dump(responses_area, f, pickle.HIGHEST_PROTOCOL)
df_demographic = pd.read_pickle('df_demographic.pickle')
df_area = pd.read_pickle('df_area.pickle')
with open('responses_demographic.pickle', 'rb') as f:
responses_demographic = pickle.load(f)
with open('responses_area.pickle', 'rb') as f:
responses_area = pickle.load(f) | _____no_output_____ | MIT | data/GenesisGrabber.ipynb | yoki31/visualize |
Categorize | categories = {
"living": [
"Wohnen",
"11000 Wohnbaufläche",
],
"industry": [
"Gewerbe, Industrie",
"Betriebsfläche (ohne Abbauland)",
"Abbauland",
"12100 Industrie und Gewerbe",
"12200 Handel und Dienstleistung",
"12300 Versorgungsanlage",
"12400 Entsorgung",
"13000 Halde",
"14000 Bergbaubetrieb",
"15000 Tagebau, Grube, Steinbruch",
],
"transport_infrastructure": [
"Straße, Weg, Platz",
"sonstige Verkehrsfläche",
"21000 Straßenverkehr",
"22000 Weg",
"23000 Platz",
"24000 Bahnverkehr",
"25000 Flugverkehr",
"26000 Schiffsverkehr",
"42000 Hafenbecken",
],
"nature_and_water": [
"Moor",
"Landwirtschaftsfläche (ohne Moor, Heide)",
"Grünanlage",
"Heide",
"Waldfläche",
"Wasserfläche",
"Unland",
"18400 Grünanlage",
"31100 Ackerland",
"31200 Grünland",
"31300 Gartenland",
"31400 Weingarten",
"31500 Obstplantage",
"32000 Wald",
"33000 Gehölz",
"34000 Heide",
"35000 Moor",
"36000 Sumpf",
"37000 Unland, Vegetationslose Fläche",
"41000 Fließgewässer",
"43000 Stehendes Gewässer",
],
"miscellaneous": [
"Flächen anderer Nutzung (ohne Unland, Friedhof)",
"sonstige Erholungsfläche",
"sonstige Gebäude- und Freifläche",
"Friedhof",
"16000 Fläche gemischter Nutzung",
"17000 Fläche besonderer funktionaler Prägung",
"18100 Sportanlage",
"18200 Freizeitanlage",
"19000 Friedhof",
"18300 Erholungsfläche",
]
}
# Check if we classified all columns and used each only once
all_columns = set(df_area.columns)
for l in categories.values():
all_columns = all_columns - set(l)
all_columns = all_columns - set(['AGS', 'Gemeinde', 'Insgesamt', 'date'])
if (len(all_columns) != 0):
print ("The categories", all_columns, "have not yet been categorized.")
for ((name1, l1), (name2, l2)) in itertools.combinations(categories.items(), 2):
if (not set(l1).isdisjoint(l2)):
print(name1, "and", name2, "contain the same category.")
for (name, category) in categories.items():
df_area[name] = df_area.loc[:,category].sum(axis=1)
df_area.drop(category, axis=1, inplace=True)
df_area[name + '_percent'] = df_area[name] / df_area['Insgesamt']
used_areas = [
"living",
"industry",
"transport_infrastructure"
]
# total "used" (built-up) area: sum of the three used-area categories
df_area['used_area'] = df_area.loc[:, used_areas].sum(axis=1)
df_area['used_area_percent'] = df_area['used_area'] / df_area['Insgesamt'] | _____no_output_____ | MIT | data/GenesisGrabber.ipynb | yoki31/visualize |
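A small sanity check one could add at this point (a sketch, not part of the original notebook; it assumes the five categories above jointly cover every source column, which the earlier disjointness checks already verify): the per-category shares should sum to roughly 1 wherever the underlying columns are complete.

```python
# Sketch: the category shares should add up to ~1 of 'Insgesamt' per row.
share_cols = [name + '_percent' for name in categories]
print(df_area[share_cols].sum(axis=1).describe())
```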
Rename columns | df_area.rename(columns={"Insgesamt": "total", "Gemeinde": "municipality"}, inplace=True) | _____no_output_____ | MIT | data/GenesisGrabber.ipynb | yoki31/visualize |
Filter unused municipalities | df_area = df_area[df_area["AGS"] <= 9999] | _____no_output_____ | MIT | data/GenesisGrabber.ipynb | yoki31/visualize |
Merge demographic data | df_area
df_demographic.drop(["männlich", "weiblich"], axis=1, inplace=True)
df_demographic.rename(columns={"Gemeinde": "municipality", "Insgesamt": "demographic"}, inplace=True)
df_area = pd.merge(df_area, df_demographic, how='left', on=['AGS', 'municipality', 'date']) | _____no_output_____ | MIT | data/GenesisGrabber.ipynb | yoki31/visualize |
Export to JSON | df_area
df_export = df_area.copy()
df_export['date'] = df_export['date'].dt.strftime('%d.%m.%Y')
with open("data.json", "w", encoding="utf-8") as f:
df_export.to_json(f, orient="records", force_ascii=False) | _____no_output_____ | MIT | data/GenesisGrabber.ipynb | yoki31/visualize |
Basic graphs | f, ax = plt.subplots(figsize=(7, 7))
ax.set(yscale="log")
g = sns.lineplot(data=df_demographic[(df_demographic['Gemeinde']=='Friedberg, St') | (df_demographic['Gemeinde']=='Augsburg (Krfr.St)') | (df_demographic['Gemeinde']=='Garmisch-Partenkirchen, M')], style='Gemeinde', x='date', y='Insgesamt', ax=ax)
g.set_title('Einwohner')
g.set(ylim=(1, None))
g
#sns.lineplot(data=df_demographic, style='Gemeinde', x='date', y='Insgesamt', ax=ax)#, ylim=(0,300000))
f, ax = plt.subplots(figsize=(7, 7))
#ax.set(yscale="log")
g = sns.lineplot(data=df_area[(df_area['municipality']=='Friedberg, St') | (df_area['municipality']=='Augsburg (Krfr.St)') | (df_area['municipality']=='Garmisch-Partenkirchen, M')], style='municipality', x='date', y='nature_and_water_percent', ax=ax)
g.set_title('Natur und Wasserflächen')
#g.set(ylim=(0, None))
g
gem = ['Bayern']#, 'Oberbayern', 'Schwaben']
size = 10
f, axs = plt.subplots(len(gem), 1, figsize=(size*3, len(gem)*size*3))
#df_area_2 = df_area_2[df_area_2['date'] > pd.to_datetime("1.1.2010", format='%d.%m.%Y')]
for i in range(0, len(gem)):
g = df_area[(df_area['municipality']==gem[i])].plot.area(
x='date',
y=['living_percent', 'industry_percent', 'transport_infrastructure_percent', 'nature_and_water_percent', 'miscellaneous_percent'],
stacked=True,
ax=(axs if len(gem) == 1 else axs[i]))
g.set_title('Flächen in ' + gem[i])
g.set(ylim=(0, None))
plt.savefig('flächen.jpg') | _____no_output_____ | MIT | data/GenesisGrabber.ipynb | yoki31/visualize |
Import data | pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 38)
df = pd.read_csv('zar_dataset.csv')
df.info()
df.head()
df.describe() | _____no_output_____ | MIT | BNN Model.ipynb | mdtycho/Zar-Currency-Prediction-Model |
Create Labels For Data, Classification and Regression Labels | from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
scaler = StandardScaler()
df['target_clf'] = df['target_return'].apply(lambda x: float(x/abs(x)) if x!=0 else -1)
df['target_clf'] = df['target_clf'].apply(lambda x: x if x==1 else 0)
df.rename(columns = {'target_return':'target_reg'}, inplace = True)
df.head()
df.describe()
df.columns
df.head()
X = df[['RSI', 'TSI', 'ATR', 'BHBI', 'BBL', 'BBH', 'BLBI', 'BBMAVG', 'DCH',
'DCHI', 'DCL', 'DCLI', 'KCC', 'KCH', 'KCL', 'ADX', 'ADXI', 'ADXN',
'ADXP', 'CCI', 'DPO', 'SEMA', 'LEMA', 'Ichimoku', 'Ichimoku_b', 'KST',
'KST_SIG', 'MACD', 'MACD_DIFF', 'MACD_SIG', 'MI', 'TRIX', 'VIN', 'VIP',
'CR', 'DR']].as_matrix()
y_cl = df['target_clf'].as_matrix()
X_train, X_test, y_train, y_test = train_test_split(X, y_cl, test_size=0.20) | _____no_output_____ | MIT | BNN Model.ipynb | mdtycho/Zar-Currency-Prediction-Model |
Function for Feeding Data In Batches | def next_batch(num, data, labels):
'''
Return a total of `num` random samples and labels.
'''
idx = np.arange(0 , len(data))
np.random.shuffle(idx)
idx = idx[:num]
data_shuffle = [data[ i] for i in idx]
labels_shuffle = [labels[ i] for i in idx]
return np.asarray(data_shuffle), np.asarray(labels_shuffle)
X.shape | _____no_output_____ | MIT | BNN Model.ipynb | mdtycho/Zar-Currency-Prediction-Model |
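A quick usage check of the helper above (a sketch; the expected shapes assume the train split created earlier, with 36 feature columns):

```python
# Sketch: draw one random minibatch and confirm the shapes.
xb, yb = next_batch(100, X_train, y_train)
print(xb.shape, yb.shape)  # expected roughly: (100, 36) (100,)
```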
Build Bayesian Model | N = 100 # number of rows in a minibatch.
D = 36 # number of features.
K = 2 # number of classes.
# Create a placeholder to hold the data (in minibatches) in a TensorFlow graph.
x = tf.placeholder(tf.float32, [None, D])
# Normal(0,1) priors for the variables. Note that the syntax assumes TensorFlow 1.1.
w = Normal(loc=tf.zeros([D, K]), scale=tf.ones([D, K]))
b = Normal(loc=tf.zeros(K), scale=tf.ones(K))
# Categorical likelihood for classification.
y = Categorical(tf.matmul(x,w)+b)
# Construct the q(w) and q(b). In this case we assume Normal distributions.
qw = Normal(loc=tf.Variable(tf.random_normal([D, K])),
scale=tf.nn.softplus(tf.Variable(tf.random_normal([D, K]))))
qb = Normal(loc=tf.Variable(tf.random_normal([K])),
scale=tf.nn.softplus(tf.Variable(tf.random_normal([K]))))
# We use a placeholder for the labels in anticipation of the training data.
y_ph = tf.placeholder(tf.int32, [N])
# Define the VI inference technique, i.e. minimise the KL divergence between q and p.
inference = ed.KLqp({w: qw, b: qb}, data={y:y_ph})
# Initialise the inference variables
inference.initialize(n_iter=5000, n_print=100, scale={y: float(X_train.shape[0]) / N})
# We will use an interactive session.
sess = tf.InteractiveSession()
# Initialise all the variables in the session.
tf.global_variables_initializer().run()
# Let the training begin. We load the data in minibatches and update the VI inference using each new batch.
for _ in range(inference.n_iter):
    X_batch, Y_batch = next_batch(N, X_train, y_train)
    # TensorFlow's method gives the label data in a one-hot vector format. We convert that into a single label.
#Y_batch = np.argmax(Y_batch,axis=1)
info_dict = inference.update(feed_dict={x: X_batch, y_ph: Y_batch})
inference.print_progress(info_dict)
X_test = X_test.astype(np.float32)
X_test.dtype
# Generate samples from the posterior and store them.
n_samples = 200
prob_lst = []
samples = []
w_samples = []
b_samples = []
for _ in range(n_samples):
w_samp = qw.sample()
b_samp = qb.sample()
w_samples.append(w_samp)
b_samples.append(b_samp)
    # Also compute the probability of each class for each (w,b) sample.
prob = tf.nn.softmax(tf.matmul( X_test,w_samp ) + b_samp)
prob_lst.append(prob.eval())
sample = tf.concat([tf.reshape(w_samp,[-1]),b_samp],0)
samples.append(sample.eval()) | _____no_output_____ | MIT | BNN Model.ipynb | mdtycho/Zar-Currency-Prediction-Model |
Compute The Accuracy Distribution For The Bayesian Neural Net | # Compute the accuracy of the model.
# For each sample we compute the predicted class and compare with the test labels.
# Predicted class is defined as the one which has maximum probability.
# We perform this test for each (w,b) in the posterior giving us a set of accuracies.
# Finally we make a histogram of accuracies for the test data.
fig, axes = plt.subplots(figsize = (15, 8))
accy_test = []
for prob in prob_lst:
    y_trn_prd = np.argmax(prob,axis=1).astype(np.float32)
    acc = (y_trn_prd == y_test).mean()*100
    accy_test.append(acc)
axes.hist(accy_test)
axes.set_title("Histogram of prediction accuracies on the test data")
axes.set_xlabel("Accuracy")
axes.set_ylabel("Frequency")
fig.savefig('accuracy_plot.png')
# Here we compute the mean of the probabilities for each class over all the (w,b) samples.
# We then use the class with the maximum mean probability as the prediction.
# In other words, we have used the (w,b) samples to construct a set of models and
# used their combined outputs to make the predictions.
Y_pred = np.argmax(np.mean(prob_lst,axis=0),axis=1)
print("accuracy in predicting the test data = ", (Y_pred == y_test).mean()*100)
# Load the first row from the test data and its label.
test_row = X_test[2]
test_label = y_test[2]
print('truth = ',test_label)
# Now check what the model predicts for each (w,b) sample from the posterior. This may take a few seconds...
sing_img_probs = []
for w_samp,b_samp in zip(w_samples,b_samples):
prob = tf.nn.softmax(tf.matmul( X_test[2:3],w_samp ) + b_samp)
sing_img_probs.append(prob.eval())
# Create a histogram of these predictions.
fig, axes = plt.subplots(figsize = (15, 8))
axes.hist(np.argmax(sing_img_probs,axis=2),bins = range(3))
axes.set_xticks(np.arange(0,2))
axes.set_xlim(0,2)
axes.set_xlabel("Accuracy of the prediction of the test row")
axes.set_ylabel("Frequency")
y_test.mean() | _____no_output_____ | MIT | BNN Model.ipynb | mdtycho/Zar-Currency-Prediction-Model |
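Because prob_lst holds the class probabilities for every posterior (w,b) sample, we can also summarise per-row predictive uncertainty; the snippet below is an added sketch that reports the mean and spread of P(class=1) across the sampled models.
```
# prob_lst is a list of (n_test, 2) softmax outputs, one array per posterior sample.
probs = np.asarray(prob_lst)            # shape (n_samples, n_test, 2)
mean_p1 = probs[:, :, 1].mean(axis=0)   # average probability of class 1 per test row
std_p1 = probs[:, :, 1].std(axis=0)     # spread across samples = model uncertainty
print("first 5 rows, mean P(class=1):", mean_p1[:5])
print("first 5 rows, std  P(class=1):", std_p1[:5])
```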
Test The Model On Brazil Data, We Should Get A Wider Distribution Of Accuracies Brazil | brazil = pd.read_csv('brazil_clean.csv')
brazil.head()
brazil.info()
brazil['Date'] = pd.to_datetime(brazil['Date'])
brazil.head()
brazil.info()
brazil['target_cl'] = brazil['target_cl'].apply(lambda x: x if x==1 else 0)
brazil.rename(columns = {'target_return':'target_reg'}, inplace = True)
brazil.head()
brazil.describe()
brazil.set_index('Date', inplace = True)
brazil.head()
X_br = brazil[['RSI', 'TSI', 'ATR', 'BHBI', 'BBL', 'BBH', 'BLBI', 'BBMAVG', 'DCH',
'DCHI', 'DCL', 'DCLI', 'KCC', 'KCH', 'KCL', 'ADX', 'ADXI', 'ADXN',
'ADXP', 'CCI', 'DPO', 'SEMA', 'LEMA', 'Ichimoku', 'Ichimoku_b', 'KST',
'KST_SIG', 'MACD', 'MACD_DIFF', 'MACD_SIG', 'MI', 'TRIX', 'VIN', 'VIP',
'CR', 'DR']].as_matrix()
y_cl_br = brazil['target_cl'].as_matrix()
X_br = X_br.astype(np.float32)
prob_lst_bra = []
for w_samp, b_samp in zip(w_samples, b_samples):
    # Also compute the probability of each class for each (w,b) sample.
prob = tf.nn.softmax(tf.matmul( X_br,w_samp ) + b_samp)
prob_lst_bra.append(prob.eval())
# Compute the accuracy of the model on the Brazil data.
# For each sample we compute the predicted class and compare with the Brazil labels.
# Predicted class is defined as the one which has maximum probability.
# We perform this test for each (w,b) in the posterior giving us a set of accuracies.
# Finally we make a histogram of accuracies for the Brazil data.
fig, axes = plt.subplots(figsize = (15, 8))
accy_test = []
for prob in prob_lst_bra:
    y_trn_prd = np.argmax(prob,axis=1).astype(np.float32)
    acc = (y_trn_prd == y_cl_br).mean()*100
    accy_test.append(acc)
axes.hist(accy_test)
axes.set_title("Histogram of prediction accuracies on the brazil data")
axes.set_xlabel("Accuracy")
axes.set_ylabel("Frequency")
fig.savefig('accuracy_plot_bra.png')
# For comparison, re-plot the accuracy histogram on the original test data.
# For each sample we compute the predicted class and compare with the test labels.
# Predicted class is defined as the one which has maximum probability.
# We perform this test for each (w,b) in the posterior giving us a set of accuracies.
# Finally we make a histogram of accuracies for the test data.
fig, axes = plt.subplots(figsize = (15, 8))
accy_test = []
for prob in prob_lst:
    y_trn_prd = np.argmax(prob,axis=1).astype(np.float32)
    acc = (y_trn_prd == y_test).mean()*100
    accy_test.append(acc)
axes.hist(accy_test)
axes.set_title("Histogram of prediction accuracies on the test data")
axes.set_xlabel("Accuracy")
axes.set_ylabel("Frequency")
| _____no_output_____ | MIT | BNN Model.ipynb | mdtycho/Zar-Currency-Prediction-Model |
Train your first modelThis is the second of our [beginner tutorial series](https://github.com/deepjavalibrary/djl/tree/master/jupyter/tutorial) that will take you through creating, training, and running inference on a neural network. In this tutorial, you will learn how to train an image classification model that can recognize handwritten digits. PreparationThis tutorial requires the installation of the Java Jupyter Kernel. To install the kernel, see the [Jupyter README](https://github.com/deepjavalibrary/djl/blob/master/jupyter/README.md). | // Add the snapshot repository to get the DJL snapshot artifacts
// %mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/
// Add the maven dependencies
%maven ai.djl:api:0.17.0
%maven ai.djl:basicdataset:0.17.0
%maven ai.djl:model-zoo:0.17.0
%maven ai.djl.mxnet:mxnet-engine:0.17.0
%maven org.slf4j:slf4j-simple:1.7.32
import java.nio.file.*;
import ai.djl.*;
import ai.djl.basicdataset.cv.classification.Mnist;
import ai.djl.ndarray.types.*;
import ai.djl.training.*;
import ai.djl.training.dataset.*;
import ai.djl.training.initializer.*;
import ai.djl.training.loss.*;
import ai.djl.training.listener.*;
import ai.djl.training.evaluator.*;
import ai.djl.training.optimizer.*;
import ai.djl.training.util.*;
import ai.djl.basicmodelzoo.cv.classification.*;
import ai.djl.basicmodelzoo.basic.*; | _____no_output_____ | Apache-2.0 | jupyter/tutorial/02_train_your_first_model.ipynb | dandansamax/djl |
Step 1: Prepare MNIST dataset for trainingIn order to train, you must create a [Dataset class](https://javadoc.io/static/ai.djl/api/0.17.0/index.html?ai/djl/training/dataset/Dataset.html) to contain your training data. A dataset is a collection of sample input/output pairs for the function represented by your neural network. Each single input/output is represented by a [Record](https://javadoc.io/static/ai.djl/api/0.17.0/index.html?ai/djl/training/dataset/Record.html). Each record could have multiple arrays of inputs or outputs such as an image question and answer dataset where the input is both an image and a question about the image while the output is the answer to the question.Because data learning is highly parallelizable, training is often done not with a single record at a time, but a [Batch](https://javadoc.io/static/ai.djl/api/0.17.0/index.html?ai/djl/training/dataset/Batch.html). This can lead to significant performance gains, especially when working with images SamplerThen, we must decide the parameters for loading data from the dataset. The only parameter we need for MNIST is the choice of [Sampler](https://javadoc.io/static/ai.djl/api/0.17.0/index.html?ai/djl/training/dataset/Sampler.html). The sampler decides which and how many elements from the dataset are part of each batch when iterating through it. We will have it randomly shuffle the elements for the batch and use a batchSize of 32. The batchSize is usually the largest power of 2 that fits within memory. | int batchSize = 32;
Mnist mnist = Mnist.builder().setSampling(batchSize, true).build();
mnist.prepare(new ProgressBar()); | _____no_output_____ | Apache-2.0 | jupyter/tutorial/02_train_your_first_model.ipynb | dandansamax/djl |
Step 2: Create your ModelNext we will build a model. A [Model](https://javadoc.io/static/ai.djl/api/0.17.0/index.html?ai/djl/Model.html) contains a neural network [Block](https://javadoc.io/static/ai.djl/api/0.17.0/index.html?ai/djl/nn/Block.html) along with additional artifacts used for the training process. It possesses additional information about the inputs, outputs, shapes, and data types you will use. Generally, you will use the Model once you have fully completed your Block.In this part of the tutorial, we will use the built-in Multilayer Perceptron Block from the Model Zoo. To learn how to build it from scratch, see the previous tutorial: [Create Your First Network](01_create_your_first_network.ipynb).Because images in the MNIST dataset are 28x28 grayscale images, we will create an MLP block with 28 x 28 input. The output will be 10 because there are 10 possible classes (0 to 9) each image could be. For the hidden layers, we have chosen `new int[] {128, 64}` by experimenting with different values. | Model model = Model.newInstance("mlp");
model.setBlock(new Mlp(28 * 28, 10, new int[] {128, 64})); | _____no_output_____ | Apache-2.0 | jupyter/tutorial/02_train_your_first_model.ipynb | dandansamax/djl |
Step 3: Create a TrainerNow, you can create a [`Trainer`](https://javadoc.io/static/ai.djl/api/0.17.0/index.html?ai/djl/training/Trainer.html) to train your model. The trainer is the main class to orchestrate the training process. Usually, they will be opened using a try-with-resources and closed after training is over.The trainer takes an existing model and attempts to optimize the parameters inside the model's Block to best match the dataset. Most optimization is based upon [Stochastic Gradient Descent](https://en.wikipedia.org/wiki/Stochastic_gradient_descent) (SGD). Step 3.1: Setup your training configurationsBefore you create your trainer, we will need a [training configuration](https://javadoc.io/static/ai.djl/api/0.17.0/index.html?ai/djl/training/DefaultTrainingConfig.html) that describes how to train your model.The following are a few common items you may need to configure your training:* **REQUIRED** [`Loss`](https://javadoc.io/static/ai.djl/api/0.17.0/index.html?ai/djl/training/loss/Loss.html) function: A loss function is used to measure how well our model matches the dataset. Because the lower value of the function is better, it's called the "loss" function. The Loss is the only required argument to the model* [`Evaluator`](https://javadoc.io/static/ai.djl/api/0.17.0/index.html?ai/djl/training/evaluator/Evaluator.html) function: An evaluator function is also used to measure how well our model matches the dataset. Unlike the loss, they are only there for people to look at and are not used for optimizing the model. Since many losses are not as intuitive, adding other evaluators such as Accuracy can help to understand how your model is doing. If you know of any useful evaluators, we recommend adding them.* [`Training Listeners`](https://javadoc.io/static/ai.djl/api/0.17.0/index.html?ai/djl/repository/zoo/ZooModel.html): The training listener adds additional functionality to the training process through a listener interface. This can include showing training progress, stopping early if training becomes undefined, or recording performance metrics. We offer several easy sets of [default listeners](https://javadoc.io/static/ai.djl/api/0.17.0/index.html?ai/djl/repository/zoo/ZooModel.html).You can also configure other options such as the Device, Initializer, and Optimizer. See [more details](https://javadoc.io/static/ai.djl/api/0.17.0/index.html?ai/djl/training/TrainingConfig.html). | DefaultTrainingConfig config = new DefaultTrainingConfig(Loss.softmaxCrossEntropyLoss())
//softmaxCrossEntropyLoss is a standard loss for classification problems
.addEvaluator(new Accuracy()) // Use accuracy so we humans can understand how accurate the model is
.addTrainingListeners(TrainingListener.Defaults.logging());
// Now that we have our training configuration, we should create a new trainer for our model
Trainer trainer = model.newTrainer(config); | _____no_output_____ | Apache-2.0 | jupyter/tutorial/02_train_your_first_model.ipynb | dandansamax/djl |
Step 5: Initialize TrainingBefore training your model, you have to initialize all of the parameters with starting values. You can use the trainer for this initialization by passing in the input shape.* The first axis of the input shape is the batch size. This won't impact the parameter initialization, so you can use 1 here.* The second axis of the input shape of the MLP - the number of pixels in the input image. | trainer.initialize(new Shape(1, 28 * 28)); | _____no_output_____ | Apache-2.0 | jupyter/tutorial/02_train_your_first_model.ipynb | dandansamax/djl |
Step 6: Train your modelNow, we can train the model.When training, it is usually organized into epochs where each epoch trains the model on each item in the dataset once. It is slightly faster than training randomly.Then, we will use the EasyTrain to, as the name promises, make the training easy. If you want to see more details about how the training loop works, see [the EasyTrain class](https://github.com/deepjavalibrary/djl/blob/0.9/api/src/main/java/ai/djl/training/EasyTrain.java) or [read our Dive into Deep Learning book](https://d2l.djl.ai). | // Deep learning is typically trained in epochs where each epoch trains the model on each item in the dataset once.
int epoch = 2;
EasyTrain.fit(trainer, epoch, mnist, null); | _____no_output_____ | Apache-2.0 | jupyter/tutorial/02_train_your_first_model.ipynb | dandansamax/djl |
Step 7: Save your modelOnce your model is trained, you should save it so that it can be reloaded later. You can also add metadata to it such as training accuracy, number of epochs trained, etc that can be used when loading the model or when examining it. | Path modelDir = Paths.get("build/mlp");
Files.createDirectories(modelDir);
model.setProperty("Epoch", String.valueOf(epoch));
model.save(modelDir, "mlp");
model | _____no_output_____ | Apache-2.0 | jupyter/tutorial/02_train_your_first_model.ipynb | dandansamax/djl |
HSV feature | train_x = []
train_y = []
train_dir = os.path.join(dest, "random_forest_train")
for img in os.listdir(train_dir):
img_id = int(img.split(".")[0])
train_y.append(int(img_id2class[img_id]))
train_x.append(extract_hist_feature(cv2.imread(os.path.join(train_dir, img)), rgb=False))
classifier = RandomForestClassifier()
classifier.fit(train_x, train_y)
test_x = []
test_y = []
test_dir = os.path.join(dest, "random_forest_test")
for img in os.listdir(test_dir):
img_id = int(img.split(".")[0])
test_y.append(int(img_id2class[img_id]))
test_x.append(extract_hist_feature(cv2.imread(os.path.join(test_dir, img)), rgb=False))
classifier.score(test_x, test_y) | _____no_output_____ | MIT | random_forest/color.ipynb | den8972/228 |
RGB feature | train_x = []
train_y = []
train_dir = os.path.join(dest, "random_forest_train")
for img in os.listdir(train_dir):
img_id = int(img.split(".")[0])
train_y.append(int(img_id2class[img_id]))
train_x.append(extract_hist_feature(cv2.imread(os.path.join(train_dir, img)), rgb=True))
classifier = RandomForestClassifier()
classifier.fit(train_x, train_y)
test_x = []
test_y = []
test_dir = os.path.join(dest, "random_forest_test")
for img in os.listdir(test_dir):
img_id = int(img.split(".")[0])
test_y.append(int(img_id2class[img_id]))
test_x.append(extract_hist_feature(cv2.imread(os.path.join(test_dir, img)), rgb=True))
classifier.score(test_x, test_y) | _____no_output_____ | MIT | random_forest/color.ipynb | den8972/228 |
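The extract_hist_feature helper used in both cells above is defined elsewhere in the repository; purely to illustrate the idea, a minimal color-histogram extractor might look like the sketch below. This is an assumption about its behaviour, not the actual implementation, and the rgb flag mirrors how the cells switch between RGB and HSV features.
```
import cv2
import numpy as np

def hist_feature_sketch(img_bgr, rgb=True, bins=32):
    # Illustrative only: concatenate per-channel histograms into one feature vector.
    if rgb:
        img, ranges = img_bgr, [[0, 256]] * 3
    else:
        img = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
        ranges = [[0, 180], [0, 256], [0, 256]]  # OpenCV hue runs 0-179
    feats = []
    for ch in range(3):
        h = cv2.calcHist([img], [ch], None, [bins], ranges[ch])
        feats.append(cv2.normalize(h, h).flatten())
    return np.concatenate(feats)
```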
Transfer LearningMost of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using [VGGNet](https://arxiv.org/pdf/1409.1556.pdf) trained on the [ImageNet dataset](http://www.image-net.org/) as a feature extractor. Below is a diagram of the VGGNet architecture.VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.You can read more about transfer learning from [the CS231n course notes](http://cs231n.github.io/transfer-learning/tf). Pretrained VGGNetWe'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. This is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. | from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
print(device_lib)
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!") | Parameter file already exists!
| MIT | transfer-learning/Transfer_Learning_Solution.ipynb | freedomkwok/deep-learning |
Flower powerHere we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the [TensorFlow inception tutorial](https://www.tensorflow.org/tutorials/image_retraining). | import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close() | _____no_output_____ | MIT | transfer-learning/Transfer_Learning_Solution.ipynb | freedomkwok/deep-learning |
ConvNet CodesBelow, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.Here we're using the `vgg16` module from `tensorflow_vgg`. The network takes images of size $244 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from [the source code](https://github.com/machrisaa/tensorflow-vgg/blob/master/vgg16.py):```self.conv1_1 = self.conv_layer(bgr, "conv1_1")self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2")self.pool1 = self.max_pool(self.conv1_2, 'pool1')self.conv2_1 = self.conv_layer(self.pool1, "conv2_1")self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2")self.pool2 = self.max_pool(self.conv2_2, 'pool2')self.conv3_1 = self.conv_layer(self.pool2, "conv3_1")self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2")self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3")self.pool3 = self.max_pool(self.conv3_3, 'pool3')self.conv4_1 = self.conv_layer(self.pool3, "conv4_1")self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2")self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3")self.pool4 = self.max_pool(self.conv4_3, 'pool4')self.conv5_1 = self.conv_layer(self.pool4, "conv5_1")self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2")self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3")self.pool5 = self.max_pool(self.conv5_3, 'pool5')self.fc6 = self.fc_layer(self.pool5, "fc6")self.relu6 = tf.nn.relu(self.fc6)```So what we want are the values of the first fully connected layer, after being ReLUd (`self.relu6`). To build the network, we use```with tf.Session() as sess: vgg = vgg16.Vgg16() input_ = tf.placeholder(tf.float32, [None, 224, 224, 3]) with tf.name_scope("content_vgg"): vgg.build(input_)```This creates the `vgg` object, then builds the graph with `vgg.build(input_)`. Then to get the values from the layer,```feed_dict = {input_: images}codes = sess.run(vgg.relu6, feed_dict=feed_dict)``` | import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)] | _____no_output_____ | MIT | transfer-learning/Transfer_Learning_Solution.ipynb | freedomkwok/deep-learning |
Below I'm running images through the VGG network in batches. | # Set the batch size higher if you can fit in in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
print(img.shape)
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
images = np.concatenate(batch)
feed_dict = {input_: images}
print(images.shape)
codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
print(codes.shape)
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
codes.shape
# write codes to file
with open('codes', 'w') as f:
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels) | _____no_output_____ | MIT | transfer-learning/Transfer_Learning_Solution.ipynb | freedomkwok/deep-learning |
Building the ClassifierNow that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work. | # read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes') as f:
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1)) | _____no_output_____ | MIT | transfer-learning/Transfer_Learning_Solution.ipynb | freedomkwok/deep-learning |
Data prepAs usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!> **Exercise:** From scikit-learn, use [LabelBinarizer](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelBinarizer.html) to create one-hot encoded vectors from the labels. | codes
from sklearn.preprocessing import LabelBinarizer
lb = LabelBinarizer()
lb.fit(labels)
labels_vecs = lb.transform(labels)
labels_vecs | _____no_output_____ | MIT | transfer-learning/Transfer_Learning_Solution.ipynb | freedomkwok/deep-learning |
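It is worth checking how the binarizer orders the classes, since lb.classes_ is what labels the prediction bars later on; the small check below is an added illustration.
```
# The one-hot column order follows lb.classes_.
print(lb.classes_)
# inverse_transform maps one-hot rows back to the original string labels.
print(lb.inverse_transform(labels_vecs[:5]))
print(labels[:5])
```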
Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use [`StratifiedShuffleSplit`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedShuffleSplit.html) from scikit-learn.You can create the splitter like so:```ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)```Then split the data with ```splitter = ss.split(x, y)````ss.split` returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use `next(splitter)` to get the indices. Be sure to read the [documentation](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedShuffleSplit.html) and the [user guide](http://scikit-learn.org/stable/modules/cross_validation.html#random-permutations-cross-validation-a-k-a-shuffle-split).> **Exercise:** Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets. | from sklearn.model_selection import StratifiedShuffleSplit
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
train_idx, val_idx = next(ss.split(codes, labels_vecs))
half_val_len = int(len(val_idx)/2)
val_idx, test_idx = val_idx[:half_val_len], val_idx[half_val_len:]
train_x, train_y = codes[train_idx], labels_vecs[train_idx]
val_x, val_y = codes[val_idx], labels_vecs[val_idx]
test_x, test_y = codes[test_idx], labels_vecs[test_idx]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape) | Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)
| MIT | transfer-learning/Transfer_Learning_Solution.ipynb | freedomkwok/deep-learning |
If you did it right, you should see these sizes for the training sets:```Train shapes (x, y): (2936, 4096) (2936, 5)Validation shapes (x, y): (367, 4096) (367, 5)Test shapes (x, y): (367, 4096) (367, 5)``` Classifier layersOnce you have the convolutional codes, you just need to build a classfier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.> **Exercise:** With the codes and labels loaded, build the classifier. Consider the codes as your inputs, each of them are 4096D vectors. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost. | inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
fc = tf.contrib.layers.fully_connected(inputs_, 256)
logits = tf.contrib.layers.fully_connected(fc, labels_vecs.shape[1], activation_fn=None)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=labels_, logits=logits)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer().minimize(cost)
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) | _____no_output_____ | MIT | transfer-learning/Transfer_Learning_Solution.ipynb | freedomkwok/deep-learning |
Batches!Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data. | def get_batches(x, y, n_batches=10):
""" Return a generator that yields batches from arrays x and y. """
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y | _____no_output_____ | MIT | transfer-learning/Transfer_Learning_Solution.ipynb | freedomkwok/deep-learning |
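To confirm what the generator yields, we can run one pass over the training codes and print the batch shapes; this check is an added illustration.
```
# Each yielded pair is (codes, one-hot labels); the last batch absorbs any remainder.
for ii, (x, y) in enumerate(get_batches(train_x, train_y), 1):
    print("batch {}: x {}, y {}".format(ii, x.shape, y.shape))
```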
TrainingHere, we'll train the network.> **Exercise:** So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. | epochs = 10
iteration = 0
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in get_batches(train_x, train_y):
feed = {inputs_: x,
labels_: y}
loss, _ = sess.run([cost, optimizer], feed_dict=feed)
print("Epoch: {}/{}".format(e+1, epochs),
"Iteration: {}".format(iteration),
"Training loss: {:.5f}".format(loss))
iteration += 1
if iteration % 5 == 0:
feed = {inputs_: val_x,
labels_: val_y}
val_acc = sess.run(accuracy, feed_dict=feed)
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Validation Acc: {:.4f}".format(val_acc))
saver.save(sess, "checkpoints/flowers.ckpt") | _____no_output_____ | MIT | transfer-learning/Transfer_Learning_Solution.ipynb | freedomkwok/deep-learning |
TestingBelow you see the test accuracy. You can also see the predictions returned for images. | with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread | _____no_output_____ | MIT | transfer-learning/Transfer_Learning_Solution.ipynb | freedomkwok/deep-learning |
Below, feel free to choose images and see how the trained classifier predicts the flowers in them. | test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_) | _____no_output_____ | MIT | transfer-learning/Transfer_Learning_Solution.ipynb | freedomkwok/deep-learning |
Set up AWS Panorama development environment on SageMaker NotebookThis notebook installs dependencies required for AWS Panorama application development. Run following cells just once, before starting labs. | %pip install panoramacli
%pip install mxnet
%pip install gluoncv
!./scripts/install-docker.sh
# for CPU build
!./scripts/install-dlr.sh
# for p2/p3/g4 instance, we could use pre-built package to skip long building time
#%pip install https://neo-ai-dlr-release.s3-us-west-2.amazonaws.com/v1.10.0/gpu/dlr-1.10.0-py3-none-any.whl
!./scripts/install-glibc-sm.sh
!./scripts/create-opt-aws-panorama.sh
!./scripts/install-videos.sh | _____no_output_____ | MIT-0 | setup-sm.ipynb | aws-samples/aws-panorama-immersion-day |
Transfer learning with Tensorflow In this notebook we show how to use the TensorFlow framework for transfer learning. Simply put, transfer learning reuses the features learned by an already trained model, adds extra network layers as needed, and quickly trains a model for a new, specific dataset. Because most of the resulting model's parameters are already trained and a number of hidden features have already been learned, training again on a new dataset can effectively reuse that knowledge for prediction. The pre-trained model used in this notebook is MobileNet V2; its details are left for the teammate responsible for that part to introduce later. For now we only need to know its rough network structure (shown in the figure below). MobileNet paper: https://arxiv.org/abs/1801.04381 We use the "cats and dogs" dataset; samples are shown in the data section below. Outline: 1. Dataset & building the model input pipeline 2. Assembling the model - load the pre-trained model and its parameters - add extra classification layers on top of the pre-trained model 3. Training the model 4. Testing the model References: https://tensorflow.google.cn/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory | import matplotlib.pyplot as plt
import numpy as np
import os
import tensorflow as tf
from tensorflow.keras.preprocessing import image_dataset_from_directory | _____no_output_____ | MIT | TF-transfer.ipynb | HAXRD/TF2vsPTH |
1. Data preprocessing Dataset split Here the dataset is already split by directory into a Train set and a Validation set. | PATH = os.path.join('./data', 'cats_and_dogs_filtered')
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
BATCH_SIZE = 32
IMG_SIZE = (160, 160)
train_dataset = image_dataset_from_directory(train_dir,
shuffle=True,
batch_size=BATCH_SIZE,
image_size=IMG_SIZE)
validation_dataset = image_dataset_from_directory(validation_dir,
shuffle=True,
batch_size=BATCH_SIZE,
image_size=IMG_SIZE)
| Found 2000 files belonging to 2 classes.
Found 1000 files belonging to 2 classes.
| MIT | TF-transfer.ipynb | HAXRD/TF2vsPTH |
Look at samples from the dataset | class_names = train_dataset.class_names
plt.figure(figsize=(10, 10))
for images, labels in train_dataset.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i+1)
plt.imshow(images[i].numpy().astype("uint8"))
plt.title(class_names[labels[i]])
plt.axis("off") | _____no_output_____ | MIT | TF-transfer.ipynb | HAXRD/TF2vsPTH |
Normalize the image pixel values | rescale_input = tf.keras.layers.experimental.preprocessing.Rescaling(1./255, offset=0) | _____no_output_____ | MIT | TF-transfer.ipynb | HAXRD/TF2vsPTH |
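A quick way to confirm the rescaling layer does what we expect is to apply it to one batch and compare value ranges; the check below is an added illustration.
```
# Raw pixels arrive in [0, 255]; after Rescaling(1./255) they should land in [0, 1].
image_batch, _ = next(iter(train_dataset))
rescaled = rescale_input(image_batch)
print("before:", float(tf.reduce_min(image_batch)), float(tf.reduce_max(image_batch)))
print("after: ", float(tf.reduce_min(rescaled)), float(tf.reduce_max(rescaled)))
```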
2. Assembling the model Load the pre-trained model (MobileNet-v2) | IMG_SHAPE = IMG_SIZE + (3,)
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
include_top=False,
weights='imagenet')
image_batch, label_batch = next(iter(train_dataset))
feature_batch = base_model(image_batch)
print(feature_batch.shape) | (32, 5, 5, 1280)
| MIT | TF-transfer.ipynb | HAXRD/TF2vsPTH |
Feature Extraction | # Freeze the MobileNet
base_model.trainable = False
base_model.summary()
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
feature_batch_average = global_average_layer(feature_batch)
print(feature_batch_average.shape)
prediction_layer = tf.keras.layers.Dense(1)
prediction_batch = prediction_layer(feature_batch_average)
print(prediction_batch.shape) | (32, 1)
| MIT | TF-transfer.ipynb | HAXRD/TF2vsPTH |
Combine the layers above into a new Model | inputs = tf.keras.Input(shape=(160, 160, 3))
x = rescale_input(inputs)
x = base_model(x, training=False)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = prediction_layer(x)
model = tf.keras.Model(inputs, outputs) | _____no_output_____ | MIT | TF-transfer.ipynb | HAXRD/TF2vsPTH |
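Since the MobileNet base is frozen, only the new classification head should remain trainable; the added check below lists the trainable variables, which should be just the Dense layer's kernel and bias.
```
# With base_model.trainable = False, training only updates the classification head.
print("trainable variables:", len(model.trainable_variables))
for v in model.trainable_variables:
    print(v.name, v.shape)
```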
Compile the model | lr = 0.0001
model.compile(optimizer=tf.keras.optimizers.Adam(lr=lr),
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics='accuracy')
model.summary() | Model: "model_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_3 (InputLayer) [(None, 160, 160, 3)] 0
_________________________________________________________________
rescaling (Rescaling) (None, 160, 160, 3) 0
_________________________________________________________________
mobilenetv2_1.00_160 (Functi (None, 5, 5, 1280) 2257984
_________________________________________________________________
dropout_1 (Dropout) (None, 5, 5, 1280) 0
_________________________________________________________________
dense_3 (Dense) (None, 5, 5, 1) 1281
=================================================================
Total params: 2,259,265
Trainable params: 1,281
Non-trainable params: 2,257,984
_________________________________________________________________
| MIT | TF-transfer.ipynb | HAXRD/TF2vsPTH |
3. Training the model Validation accuracy after transfer learning | initial_epochs = 10
history = model.fit(train_dataset,
epochs=initial_epochs,
validation_data=validation_dataset) | Epoch 1/10
63/63 [==============================] - 33s 501ms/step - loss: 0.6727 - accuracy: 0.5925 - val_loss: 0.5018 - val_accuracy: 0.7060
Epoch 2/10
63/63 [==============================] - 23s 360ms/step - loss: 0.4419 - accuracy: 0.7635 - val_loss: 0.3507 - val_accuracy: 0.8390
Epoch 3/10
63/63 [==============================] - 19s 294ms/step - loss: 0.3292 - accuracy: 0.8435 - val_loss: 0.2682 - val_accuracy: 0.9110
Epoch 4/10
63/63 [==============================] - 19s 295ms/step - loss: 0.2573 - accuracy: 0.9000 - val_loss: 0.2208 - val_accuracy: 0.9310
Epoch 5/10
63/63 [==============================] - 19s 296ms/step - loss: 0.2187 - accuracy: 0.9180 - val_loss: 0.1895 - val_accuracy: 0.9430
Epoch 6/10
63/63 [==============================] - 20s 315ms/step - loss: 0.1942 - accuracy: 0.9280 - val_loss: 0.1679 - val_accuracy: 0.9510
Epoch 7/10
63/63 [==============================] - 19s 307ms/step - loss: 0.1683 - accuracy: 0.9360 - val_loss: 0.1523 - val_accuracy: 0.9540
Epoch 8/10
63/63 [==============================] - 19s 292ms/step - loss: 0.1510 - accuracy: 0.9485 - val_loss: 0.1406 - val_accuracy: 0.9550
Epoch 9/10
63/63 [==============================] - 19s 300ms/step - loss: 0.1393 - accuracy: 0.9495 - val_loss: 0.1314 - val_accuracy: 0.9590
Epoch 10/10
63/63 [==============================] - 18s 290ms/step - loss: 0.1318 - accuracy: 0.9570 - val_loss: 0.1239 - val_accuracy: 0.9600
| MIT | TF-transfer.ipynb | HAXRD/TF2vsPTH |
Learning curves | acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.ylabel('Accuracy')
plt.ylim([min(plt.ylim()),1])
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.ylabel('Cross Entropy')
plt.ylim([0,1.0])
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show() | _____no_output_____ | MIT | TF-transfer.ipynb | HAXRD/TF2vsPTH |
Basic programming - Type all code with an English input method. Write a simple program - circle area formula: area = radius \* radius \* 3.1415. In Python you do not need to declare data types. Reading console input - input reads a string - eval - in Jupyter, Shift + Tab pops up the documentation. Variable naming rules - made of letters, digits, and underscores - must not start with a digit \*- identifiers must not be keywords (this can in fact be forced, but it is very bad practice for code style) - can be any length - camelCase naming. Variables, assignment statements, and assignment expressions - variable: informally, a quantity that can change - x = 2 \* x + 1 is an equation in mathematics, but an expression in the language - test = test + 1 \* a variable must have a value before it is assigned from. Simultaneous assignment: var1, var2, var3... = exp1, exp2, exp3... Defining constants - constant: an identifier for a fixed value, useful when the same value is used many times, e.g. PI - note: in other lower-level languages a defined constant cannot be changed, but in Python everything is an object, so a constant can still be changed. Numeric data types and operators - Python has two numeric types (int and float) supporting addition, subtraction, multiplication, division, modulo, and exponentiation. Operators /, //, ** Operator % EP: - what is 25/4, and how would you rewrite it to get an integer - read a number and decide whether it is odd or even - Advanced: read a number of seconds and write a program that converts it to minutes and seconds, e.g. 500 seconds equals 8 minutes 20 seconds - Advanced: if today is Saturday, what day of the week is it 10 days from now? Hint: day 0 of each week is Sunday. Scientific notation - 1.234e+2 - 1.234e-2. Evaluating expressions and operator precedence. Augmented assignment operators. Type conversion - float -> int - rounding with round. EP: - if the annual business tax rate is 0.06%, how much tax is due on an annual income of 197.55e+2? (keep 2 decimal places) - you must use scientific notation. Project - write a loan calculator in Python: the input is the monthly payment (monthlyPayment) and the output is the total payment (totalpayment). Homework - 1 | C=input()
F=float((9/5))*float(C)+32
print(F) | 43
109.4
| Apache-2.0 | 7.16.ipynb | liangfhaott3/python |
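For the seconds-to-minutes exercise mentioned in the notes above, divmod returns the quotient and remainder in one step; this short example is added for illustration.
```
# 500 seconds -> 8 minutes 20 seconds
seconds = 500
minutes, remaining = divmod(seconds, 60)
print(minutes, "minutes", remaining, "seconds")
```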
- 2 | r=input()
h=input()
S=float(r)**2*3.14
V=float(S)*float(h)
print('底面积为:%.2f'%S)
print('体积为:%.2f'%V) | 5.5
12
底面积为:94.98
体积为:1139.82
| Apache-2.0 | 7.16.ipynb | liangfhaott3/python |
- 3 | feet=input()
meters=float(feet)*0.305
print(meters) | 16.5
5.0325
| Apache-2.0 | 7.16.ipynb | liangfhaott3/python |
- 4 | k=input()
t1=input()
t2=input()
q=float(k)*(float(t2)-float(t1))*4184
print(q) | 55.5
3.5
10.5
1625484.0
| Apache-2.0 | 7.16.ipynb | liangfhaott3/python |
- 5 | ce=input()
nll=input()
lx=float(ce)*(float(nll)/1200)
print('%.5f'%lx) | 1000
3.5
2.91667
| Apache-2.0 | 7.16.ipynb | liangfhaott3/python |
- 6 | v0=input()
v1=input()
t=input()
pj=(float(v1)-float(v0))/float(t)
print('%.4f'%pj) | 5.5
50.9
4.5
10.0889
| Apache-2.0 | 7.16.ipynb | liangfhaott3/python |
- 7 Advanced | amout=input()
money=0
for i in range(6):
money=(float(amout)+float(money))*(1+0.00417)
print('%.2f'%money)
| 100
608.82
| Apache-2.0 | 7.16.ipynb | liangfhaott3/python |
- 8 Advanced | number=int(input())
if (number>1000)or (number<=0):
print('false')
else:
gewei=number%10
shiwei=(number//10)%10
baiwei=number//100
sum=gewei+shiwei+baiwei
print(sum)
| 999
27
| Apache-2.0 | 7.16.ipynb | liangfhaott3/python |
PTN TemplateThis notebook serves as a template for single-dataset PTN experiments. It can be run on its own by setting STANDALONE to True (do a find for "STANDALONE" to see where), but it is intended to be executed as part of a *papermill.py script. See any of the experiments with a papermill script to get started with that workflow. | %load_ext autoreload
%autoreload 2
%matplotlib inline
import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt
from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment
from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory
from steves_utils.ptn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.transforms import get_chained_transform | _____no_output_____ | MIT | experiments/tuned_1v2/oracle.run1/trials/2/trial.ipynb | stevester94/csc500-notebooks |
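As noted above, this template is normally driven by a papermill script rather than run by hand; the snippet below is only a hedged sketch of such a driver, with made-up paths and an abbreviated parameter dictionary (a real call has to supply every required parameter; see the required_parameters set defined below).
```
import papermill as pm

# Hypothetical driver: execute the template once with an injected parameter set.
pm.execute_notebook(
    "trial.ipynb",             # this template (path is an assumption)
    "trial_output.ipynb",      # executed copy that keeps the results
    parameters={"experiment_name": "example", "lr": 0.0001, "seed": 1337},
)
```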
Required ParametersThese are allowed parameters, not defaults. Each of these values needs to be present in the injected parameters (the notebook will raise an exception if they are not present). Papermill uses the cell tag "parameters" to inject the real parameters below this cell. Enable tags to see what I mean. | required_parameters = {
"experiment_name",
"lr",
"device",
"seed",
"dataset_seed",
"labels_source",
"labels_target",
"domains_source",
"domains_target",
"num_examples_per_domain_per_label_source",
"num_examples_per_domain_per_label_target",
"n_shot",
"n_way",
"n_query",
"train_k_factor",
"val_k_factor",
"test_k_factor",
"n_epoch",
"patience",
"criteria_for_best",
"x_transforms_source",
"x_transforms_target",
"episode_transforms_source",
"episode_transforms_target",
"pickle_name",
"x_net",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
"torch_default_dtype"
}
standalone_parameters = {}
standalone_parameters["experiment_name"] = "STANDALONE PTN"
standalone_parameters["lr"] = 0.0001
standalone_parameters["device"] = "cuda"
standalone_parameters["seed"] = 1337
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["num_examples_per_domain_per_label_source"]=100
standalone_parameters["num_examples_per_domain_per_label_target"]=100
standalone_parameters["n_shot"] = 3
standalone_parameters["n_query"] = 2
standalone_parameters["train_k_factor"] = 1
standalone_parameters["val_k_factor"] = 2
standalone_parameters["test_k_factor"] = 2
standalone_parameters["n_epoch"] = 100
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "target_accuracy"
standalone_parameters["x_transforms_source"] = ["unit_power"]
standalone_parameters["x_transforms_target"] = ["unit_power"]
standalone_parameters["episode_transforms_source"] = []
standalone_parameters["episode_transforms_target"] = []
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
]
# Parameters relevant to results
# These parameters will basically never need to change
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# uncomment for CORES dataset
from steves_utils.CORES.utils import (
ALL_NODES,
ALL_NODES_MINIMUM_1000_EXAMPLES,
ALL_DAYS
)
standalone_parameters["labels_source"] = ALL_NODES
standalone_parameters["labels_target"] = ALL_NODES
standalone_parameters["domains_source"] = [1]
standalone_parameters["domains_target"] = [2,3,4,5]
standalone_parameters["pickle_name"] = "cores.stratified_ds.2022A.pkl"
# Uncomment these for ORACLE dataset
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# standalone_parameters["labels_source"] = ALL_SERIAL_NUMBERS
# standalone_parameters["labels_target"] = ALL_SERIAL_NUMBERS
# standalone_parameters["domains_source"] = [8,20, 38,50]
# standalone_parameters["domains_target"] = [14, 26, 32, 44, 56]
# standalone_parameters["pickle_name"] = "oracle.frame_indexed.stratified_ds.2022A.pkl"
# standalone_parameters["num_examples_per_domain_per_label_source"]=1000
# standalone_parameters["num_examples_per_domain_per_label_target"]=1000
# Uncomment these for Metahan dataset
# standalone_parameters["labels_source"] = list(range(19))
# standalone_parameters["labels_target"] = list(range(19))
# standalone_parameters["domains_source"] = [0]
# standalone_parameters["domains_target"] = [1]
# standalone_parameters["pickle_name"] = "metehan.stratified_ds.2022A.pkl"
# standalone_parameters["n_way"] = len(standalone_parameters["labels_source"])
# standalone_parameters["num_examples_per_domain_per_label_source"]=200
# standalone_parameters["num_examples_per_domain_per_label_target"]=100
standalone_parameters["n_way"] = len(standalone_parameters["labels_source"])
# Parameters
parameters = {
"experiment_name": "tuned_1v2:oracle.run1",
"device": "cuda",
"lr": 0.0001,
"labels_source": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"labels_target": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"episode_transforms_source": [],
"episode_transforms_target": [],
"domains_source": [8, 32, 50],
"domains_target": [14, 20, 26, 38, 44],
"num_examples_per_domain_per_label_source": -1,
"num_examples_per_domain_per_label_target": -1,
"n_shot": 3,
"n_way": 16,
"n_query": 2,
"train_k_factor": 3,
"val_k_factor": 2,
"test_k_factor": 2,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_accuracy",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"pickle_name": "oracle.Run1_10kExamples_stratified_ds.2022A.pkl",
"x_transforms_source": ["unit_mag"],
"x_transforms_target": ["unit_mag"],
"dataset_seed": 1337,
"seed": 1337,
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if not 'parameters' in locals() and not 'parameters' in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
###########################################
# The stratified datasets honor this
###########################################
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
# (This is due to the randomized initial weights)
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
###################################
# Build the dataset
###################################
if p.x_transforms_source == []: x_transform_source = None
else: x_transform_source = get_chained_transform(p.x_transforms_source)
if p.x_transforms_target == []: x_transform_target = None
else: x_transform_target = get_chained_transform(p.x_transforms_target)
if p.episode_transforms_source == []: episode_transform_source = None
else: raise Exception("episode_transform_source not implemented")
if p.episode_transforms_target == []: episode_transform_target = None
else: raise Exception("episode_transform_target not implemented")
eaf_source = Episodic_Accessor_Factory(
labels=p.labels_source,
domains=p.domains_source,
num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_source,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name),
x_transform_func=x_transform_source,
example_transform_func=episode_transform_source,
)
train_original_source, val_original_source, test_original_source = eaf_source.get_train(), eaf_source.get_val(), eaf_source.get_test()
eaf_target = Episodic_Accessor_Factory(
labels=p.labels_target,
domains=p.domains_target,
num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_target,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name),
x_transform_func=x_transform_target,
example_transform_func=episode_transform_target,
)
train_original_target, val_original_target, test_original_target = eaf_target.get_train(), eaf_target.get_val(), eaf_target.get_test()
transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only
train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)
val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)
test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)
train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)
val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)
test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
# Some quick unit tests on the data
from steves_utils.transforms import get_average_power, get_average_magnitude
q_x, q_y, s_x, s_y, truth = next(iter(train_processed_source))
assert q_x.dtype == eval(p.torch_default_dtype)
assert s_x.dtype == eval(p.torch_default_dtype)
print("Visually inspect these to see if they line up with expected values given the transforms")
print('x_transforms_source', p.x_transforms_source)
print('x_transforms_target', p.x_transforms_target)
print("Average magnitude, source:", get_average_magnitude(q_x[0].numpy()))
print("Average power, source:", get_average_power(q_x[0].numpy()))
q_x, q_y, s_x, s_y, truth = next(iter(train_processed_target))
print("Average magnitude, target:", get_average_magnitude(q_x[0].numpy()))
print("Average power, target:", get_average_power(q_x[0].numpy()))
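# Sanity check on the numbers printed above: if a transform list includes a unit-power
# normalization, the corresponding average power should come out close to 1.0; with an
# empty transform list it simply reflects the raw capture scale.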
###################################
# Build the model
###################################
model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=(2,256))
optimizer = Adam(params=model.parameters(), lr=p.lr)
###################################
# train
###################################
jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
patience=p.patience,
optimizer=optimizer,
criteria_for_best=p.criteria_for_best,
)
total_experiment_time_secs = time.time() - start_time_secs
###################################
# Evaluate the model
###################################
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))
confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Add a key to per_domain_accuracy for if it was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
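# per_domain_accuracy now has the shape {<domain>: {"accuracy": <float>, "source?": <bool>}, ...}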
# Do an independent accuracy assessment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
experiment = {
"experiment_name": p.experiment_name,
"parameters": dict(p),
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "ptn"),
}
ax = get_loss_curve(experiment)
plt.show()
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment) | _____no_output_____ | MIT | experiments/tuned_1v2/oracle.run1/trials/2/trial.ipynb | stevester94/csc500-notebooks |
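For context on the model trained above: a prototypical network classifies each query example by its distance to per-class prototypes computed from the support set. Below is a minimal sketch of that scoring rule, assuming a Euclidean metric and a generic embedding network such as the x_net used above; the actual Steves_Prototypical_Network internals may differ.

import torch

def prototypical_logits(embed, s_x, s_y, q_x, n_way):
    """Score query examples against class prototypes built from the support set."""
    z_s = embed(s_x)  # (n_way * n_shot, d) support embeddings
    z_q = embed(q_x)  # (n_way * n_query, d) query embeddings
    # One prototype per class: the mean embedding of that class's support examples.
    protos = torch.stack([z_s[s_y == c].mean(dim=0) for c in range(n_way)])
    # Negative squared Euclidean distance to each prototype serves as the class logit.
    return -torch.cdist(z_q, protos) ** 2

Training then amounts to minimizing cross-entropy between these logits and the query labels, one episode at a time.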
Mentions of Facebook | # Search for all tweets
# public_tweets = api.search(target_term, count=300, result_type="recent")
# Twitter API Keys
consumer_key = consumer_key
consumer_secret = consumer_secret
access_token = access_token
access_token_secret = access_token_secret
# Setup Tweepy API Authentication
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth, parser=tweepy.parsers.JSONParser())
print(target_term)
# Loop through all public_tweets
fb_tweets = []
date = []
oldest_tweet = None
for x in range(1,20):
public_tweets = api.search(target_term, count=100, result_type="recent", max_id=oldest_tweet)
for tweet in public_tweets['statuses']:
tweet_id = tweet["id"]
tweet_author = tweet["user"]["screen_name"]
tweet_text = tweet["text"]
fb_tweets.append(tweet['text'])
date.append(tweet['created_at'])
oldest_tweet = tweet_id - 1
print(len(fb_tweets))
compound_list = []
positive_list = []
negative_list = []
neutral_list = []
for tweet in fb_tweets:
# Run Vader Analysis on each tweet
results = analyzer.polarity_scores(tweet)
compound = results["compound"]
pos = results["pos"]
neu = results["neu"]
neg = results["neg"]
# Add each value to the appropriate list
compound_list.append(compound)
positive_list.append(pos)
negative_list.append(neg)
neutral_list.append(neu)
sentiments = {
'Average Compounded': sum(compound_list) / len(compound_list),
'Average Negative': sum(negative_list) / len(negative_list),
'Average Positive': sum(positive_list) / len(positive_list),
'Average Neutral': sum(neutral_list) / len(neutral_list)
}
sentiments
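# For reference, each analyzer.polarity_scores() call above returns a dict with exactly the
# four keys being averaged here: {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}, where
# 'compound' is a normalized score in [-1, 1] and the other three are proportions summing to ~1.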
len(fb_tweets)
date[1891]
june_14 = {
'Text': fb_tweets,
'Compounded': compound_list,
'Negative': negative_list,
'Positive': positive_list,
'Neutral': neutral_list,
'Date': date
}
import pandas as pd
tweets_june_14_df = pd.DataFrame(june_14)
tweets_june_14_df.head()
tweets_june_14_df.to_csv('tweets_june_14.csv')
print(date[0])
print(date[1891]) | Fri Jun 15 00:10:34 +0000 2018
Thu Jun 14 18:50:53 +0000 2018
| MIT | Kara/.ipynb_checkpoints/grabbing_tweets_june_14-checkpoint.ipynb | rglukins/stock-tweet |
1. Importing the Relevant Libraries | %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import Ridge, Lasso, LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn import metrics
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler | _____no_output_____ | MIT | Wind_tunnel.ipynb | AcharyaRakesh/WindTunnel |
2. Reading Data | df = pd.read_csv("WindTunnel.csv")
df
df = pd.read_csv("WindTunnel.csv")
plt.xlabel('Freqency')
plt.ylabel('Velocity')
plt.plot(df.Freqency,df.Velocity)
reg = LinearRegression()
reg.fit(df[["Freqency"]],df.Velocity)
reg.predict([[65.0]])
L=0.2
r=0.1
v=3.45
A=0.0323
CL = (2*L)/(r*(v**2)*A)
CL
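# The cell above applies the standard lift-coefficient relation C_L = 2*L / (rho * v**2 * A),
# with r standing in for the air density rho; the same relation is applied column-wise below
# to build the Coef_lift target.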
df = pd.read_csv("WindTunnel1.csv")
df
df1 = df.copy()
df1["Coef_lift"] = (2*df['Lift'])/(df['Dencity']*df['Area']*(df['Velocity']**2))
df1
X = df1.drop(['Coef_lift'],axis='columns')
y = df1.Coef_lift | _____no_output_____ | MIT | Wind_tunnel.ipynb | AcharyaRakesh/WindTunnel |
Data Pre-process |
X_train, X_valid, y_train, y_valid = train_test_split(X,y,test_size = 0.2,random_state = 10)
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_valid)
algos = [LinearRegression(), Ridge(), Lasso(),
KNeighborsRegressor(), DecisionTreeRegressor(),RandomForestRegressor()]
names = ['Linear Regression', 'Ridge Regression', 'Lasso Regression',
'K Neighbors Regressor', 'Decision Tree Regressor', 'RandomForestRegressor']
rmse_list = []
for name in algos:
model = name
model.fit(X_train,y_train)
    y_pred = model.predict(X_test)  # use the scaled validation features (X_test) prepared above
    MSE = metrics.mean_squared_error(y_valid, y_pred)
rmse = np.sqrt(MSE)
rmse_list.append(rmse)
evaluation = pd.DataFrame({'Model': names,
'RMSE': rmse_list})
evaluation | _____no_output_____ | MIT | Wind_tunnel.ipynb | AcharyaRakesh/WindTunnel |
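The table above drives the model choice; the lowest-RMSE entry can be pulled out directly (a small usage sketch) before fitting the final estimator in the next section:

best = evaluation.sort_values('RMSE').iloc[0]
print("Lowest validation RMSE:", best['Model'], best['RMSE'])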
Building Model |
clf = Ridge()
clf.fit(X_train,y_train)
clf.score(X_test, y_valid)
clf.predict(sc.transform([[6.8,0.74,0.0323,0.27]])) | _____no_output_____ | MIT | Wind_tunnel.ipynb | AcharyaRakesh/WindTunnel
Import packages | # import packages
import requests
from bs4 import BeautifulSoup
url = "https://www.makaan.com/hyderabad-residential-property/rent-property-in-hyderabad-city"
response = requests.get(url)
soup = BeautifulSoup(response.text,"html.parser")
s_tag = soup.find_all('span',attrs={'class' : 'seller-type'})
for each_owner in s_tag:
print(each_owner.text)
s_val = soup.find_all('a',attrs={'class' : 'typelink'})
for price in s_val:
print(price.span.text)
| _____no_output_____ | Apache-2.0 | makaan_webscraping.ipynb | akhila-sakinala/akhila-sakinala.github.io |
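A possible next step (a minimal sketch, assuming the seller-type and price tags appear in matching order on the listing page) is to collect the two loops' results into a DataFrame instead of printing them:

import pandas as pd

seller_types = [tag.text.strip() for tag in s_tag]
prices = [tag.span.text.strip() for tag in s_val]

# Pair the columns row-wise; truncate to the shorter list in case the counts differ.
rows = min(len(seller_types), len(prices))
listings = pd.DataFrame({"seller_type": seller_types[:rows], "price": prices[:rows]})
listings.head()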
Enter State Farm | from theano.sandbox import cuda
cuda.use('gpu0')
%matplotlib inline
from __future__ import print_function, division
path = "data/state/"
#path = "data/state/sample/"
import utils; reload(utils)
from utils import *
from IPython.display import FileLink
batch_size=64 | _____no_output_____ | Apache-2.0 | deeplearning1/nbs/statefarm.ipynb | Fandekasp/fastai_courses |
Setup batches | batches = get_batches(path+'train', batch_size=batch_size)
val_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=False)
(val_classes, trn_classes, val_labels, trn_labels,
val_filenames, filenames, test_filenames) = get_classes(path) | Found 18946 images belonging to 10 classes.
Found 3478 images belonging to 10 classes.
Found 79726 images belonging to 1 classes.
| Apache-2.0 | deeplearning1/nbs/statefarm.ipynb | Fandekasp/fastai_courses |