You will be able to check the expected output of `one_step_attention()` after you've coded the `model()` function.

**Exercise**: Implement `model()` as explained in Figure 2 and the text above. Again, we have defined global layers that will share weights, to be used in `model()`.
n_a = 32
n_s = 64
post_activation_LSTM_cell = LSTM(n_s, return_state=True)
output_layer = Dense(len(machine_vocab), activation=softmax)
Now you can use these layers $T_y$ times in a `for` loop to generate the outputs, and their parameters will not be reinitialized. You will have to carry out the following steps:

1. Propagate the input into a [Bidirectional](https://keras.io/layers/wrappers/bidirectional) [LSTM](https://keras.io/layers/recurrent/lstm).
2. Iterate for $t = 0, \dots, T_y-1$:
    1. Call `one_step_attention()` on $[a^{\langle 1 \rangle}, a^{\langle 2 \rangle}, \dots, a^{\langle T_x \rangle}]$ and $s^{\langle t-1 \rangle}$ to get the context vector $context^{\langle t \rangle}$.
    2. Give $context^{\langle t \rangle}$ to the post-attention LSTM cell. Remember to pass in the previous hidden state $s^{\langle t-1 \rangle}$ and cell state $c^{\langle t-1 \rangle}$ of this LSTM using `initial_state = [previous hidden state, previous cell state]`. Get back the new hidden state $s^{\langle t \rangle}$ and the new cell state $c^{\langle t \rangle}$.
    3. Apply a softmax layer to $s^{\langle t \rangle}$ to get the output.
    4. Save the output by adding it to the list of outputs.
3. Create your Keras model instance. It should have three inputs ("inputs", $s^{\langle 0 \rangle}$ and $c^{\langle 0 \rangle}$) and output the list of "outputs".
# GRADED FUNCTION: model

def model(Tx, Ty, n_a, n_s, human_vocab_size, machine_vocab_size):
    """
    Arguments:
    Tx -- length of the input sequence
    Ty -- length of the output sequence
    n_a -- hidden state size of the Bi-LSTM
    n_s -- hidden state size of the post-attention LSTM
    human_vocab_size -- size of the python dictionary "human_vocab"
    machine_vocab_size -- size of the python dictionary "machine_vocab"

    Returns:
    model -- Keras model instance
    """

    # Define the inputs of your model with a shape (Tx, human_vocab_size).
    # Define s0 and c0, initial hidden state for the decoder LSTM of shape (n_s,)
    X = Input(shape=(Tx, human_vocab_size))
    s0 = Input(shape=(n_s,), name='s0')
    c0 = Input(shape=(n_s,), name='c0')
    s = s0
    c = c0

    # Initialize empty list of outputs
    outputs = []

    ### START CODE HERE ###

    # Step 1: Define your pre-attention Bi-LSTM. Remember to use return_sequences=True. (≈ 1 line)
    # (No input_shape argument is needed here; the shape is already fixed by the Input layer.)
    a = Bidirectional(LSTM(n_a, return_sequences=True))(X)

    # Step 2: Iterate for Ty steps
    for t in range(Ty):

        # Step 2.A: Perform one step of the attention mechanism to get back the context vector at step t (≈ 1 line)
        context = one_step_attention(a, s)

        # Step 2.B: Apply the post-attention LSTM cell to the "context" vector.
        # Don't forget to pass: initial_state = [hidden state, cell state] (≈ 1 line)
        s, _, c = post_activation_LSTM_cell(context, initial_state=[s, c])

        # Step 2.C: Apply Dense layer to the hidden state output of the post-attention LSTM (≈ 1 line)
        out = output_layer(s)

        # Step 2.D: Append "out" to the "outputs" list (≈ 1 line)
        outputs.append(out)

    # Step 3: Create model instance taking three inputs and returning the list of outputs. (≈ 1 line)
    model = Model(inputs=[X, s0, c0], outputs=outputs)

    ### END CODE HERE ###

    return model
Run the following cell to create your model.
model = model(Tx, Ty, n_a, n_s, len(human_vocab), len(machine_vocab))
Let's get a summary of the model to check if it matches the expected output.
model.summary()
____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to
====================================================================================================
input_1 (InputLayer)             (None, 30, 37)        0
____________________________________________________________________________________________________
s0 (InputLayer)                  (None, 64)            0
____________________________________________________________________________________________________
bidirectional_1 (Bidirectional)  (None, 30, 64)        17920       input_1[0][0]
____________________________________________________________________________________________________
repeat_vector_1 (RepeatVector)   (None, 30, 64)        0           s0[0][0]
                                                                   lstm_1[0][0] ... lstm_1[8][0]
____________________________________________________________________________________________________
concatenate_1 (Concatenate)      (None, 30, 128)       0           bidirectional_1[0][0]
                                                                   repeat_vector_1[0][0] ... repeat_vector_1[9][0]
____________________________________________________________________________________________________
dense_1 (Dense)                  (None, 30, 10)        1290        concatenate_1[0][0] ... concatenate_1[9][0]
____________________________________________________________________________________________________
dense_2 (Dense)                  (None, 30, 1)         11          dense_1[0][0] ... dense_1[9][0]
____________________________________________________________________________________________________
attention_weights (Activation)   (None, 30, 1)         0           dense_2[0][0] ... dense_2[9][0]
____________________________________________________________________________________________________
dot_1 (Dot)                      (None, 1, 64)         0           attention_weights[0][0] ... attention_weights[9][0]
                                                                   bidirectional_1[0][0]
____________________________________________________________________________________________________
c0 (InputLayer)                  (None, 64)            0
____________________________________________________________________________________________________
lstm_1 (LSTM)                    [(None, 64), (None, 6 33024       dot_1[0][0], s0[0][0], c0[0][0]
                                                                   ...
                                                                   dot_1[9][0], lstm_1[8][0], lstm_1[8][2]
____________________________________________________________________________________________________
dense_3 (Dense)                  (None, 11)            715         lstm_1[0][0] ... lstm_1[9][0]
====================================================================================================
Total params: 52,960
Trainable params: 52,960
Non-trainable params: 0
____________________________________________________________________________________________________
**Expected Output**: Here is the summary you should see:

| | |
|---|---|
| **Total params:** | 52,960 |
| **Trainable params:** | 52,960 |
| **Non-trainable params:** | 0 |
| **bidirectional_1's output shape** | (None, 30, 64) |
| **repeat_vector_1's output shape** | (None, 30, 64) |
| **concatenate_1's output shape** | (None, 30, 128) |
| **attention_weights's output shape** | (None, 30, 1) |
| **dot_1's output shape** | (None, 1, 64) |
| **dense_3's output shape** | (None, 11) |

As usual, after creating your model in Keras, you need to compile it and define the loss, optimizer and metrics you want to use. Compile your model using `categorical_crossentropy` loss, a custom [Adam](https://keras.io/optimizers/adam) [optimizer](https://keras.io/optimizers/usage-of-optimizers) (`learning rate = 0.005`, $\beta_1 = 0.9$, $\beta_2 = 0.999$, `decay = 0.01`) and `['accuracy']` metrics:
### START CODE HERE ### (≈ 2 lines)
opt = Adam(lr=0.005, beta_1=0.9, beta_2=0.999, decay=0.01)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
### END CODE HERE ###
The last step is to define all your inputs and outputs to fit the model:

- You already have `X` of shape $(m = 10000, T_x = 30)$ containing the training examples.
- You need to create `s0` and `c0` to initialize your `post_activation_LSTM_cell` with 0s.
- Given the `model()` you coded, you need the "outputs" to be a list of $T_y = 10$ elements, each of shape $(m, 11)$ (one-hot encodings over the machine vocabulary). That way, `outputs[t][i]` is the one-hot true label of the $t^{th}$ character of the $i^{th}$ training example (`X[i]`).
s0 = np.zeros((m, n_s))
c0 = np.zeros((m, n_s))
outputs = list(Yoh.swapaxes(0, 1))
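As a quick sanity check (an optional addition, assuming the notebook globals `Ty`, `m` and `machine_vocab` defined earlier), you can confirm the list has the shapes `model.fit` expects:

```python
# `outputs` should be a list of Ty arrays, one per output position,
# each holding one-hot labels of shape (m, len(machine_vocab)).
assert len(outputs) == Ty
assert outputs[0].shape == (m, len(machine_vocab))
print(len(outputs), outputs[0].shape)
```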
Let's now fit the model and run it for one epoch.
model.fit([Xoh, s0, c0], outputs, epochs=1, batch_size=100)
Epoch 1/1 10000/10000 [==============================] - 35s - loss: 16.1592 - dense_3_loss_1: 1.1816 - dense_3_loss_2: 0.9146 - dense_3_loss_3: 1.6444 - dense_3_loss_4: 2.6827 - dense_3_loss_5: 0.7530 - dense_3_loss_6: 1.2778 - dense_3_loss_7: 2.5924 - dense_3_loss_8: 0.8461 - dense_3_loss_9: 1.6718 - dense_3_loss_10: 2.5947 - dense_3_acc_1: 0.5434 - dense_3_acc_2: 0.7314 - dense_3_acc_3: 0.3430 - dense_3_acc_4: 0.0705 - dense_3_acc_5: 0.9299 - dense_3_acc_6: 0.3774 - dense_3_acc_7: 0.0745 - dense_3_acc_8: 0.9297 - dense_3_acc_9: 0.2204 - dense_3_acc_10: 0.0945
While training, you can see the loss as well as the accuracy on each of the 10 positions of the output. The table below gives you an example of what the accuracies could be if the batch had 2 examples. Thus, `dense_2_acc_8: 0.89` means that you are predicting the 7th character of the output correctly 89% of the time in the current batch of data.
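A minimal sketch of inspecting those per-position metrics programmatically (assuming the model and the training arrays from the cells above are in scope; `model.evaluate` returns one value per entry in `model.metrics_names`):

```python
# Evaluate the model and print only the per-position accuracy metrics.
scores = model.evaluate([Xoh, s0, c0], outputs, batch_size=100, verbose=0)
for name, value in zip(model.metrics_names, scores):
    if 'acc' in name:
        print("{}: {:.4f}".format(name, value))
```

We have run this model for longer, and saved the weights. Run the next cell to load our weights. (By training a model for several minutes, you should be able to obtain a model of similar accuracy, but loading our model will save you time.)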
model.load_weights('models/model.h5')
You can now see the results on new examples.
EXAMPLES = ['3 May 1979', '5 April 09', '21th of August 2016', 'Tue 10 Jul 2007', 'Saturday May 9 2018', 'March 3 2001', 'March 3rd 2001', '1 March 2001']
for example in EXAMPLES:
    source = string_to_int(example, Tx, human_vocab)
    source = np.array(list(map(lambda x: to_categorical(x, num_classes=len(human_vocab)), source))).swapaxes(0, 1)
    prediction = model.predict([source, s0, c0])
    prediction = np.argmax(prediction, axis=-1)
    output = [inv_machine_vocab[int(i)] for i in prediction]
    print("source:", example)
    print("output:", ''.join(output))
source: 3 May 1979
output: 1979-05-03
source: 5 April 09
output: 2009-05-05
source: 21th of August 2016
output: 2016-08-21
source: Tue 10 Jul 2007
output: 2007-07-10
source: Saturday May 9 2018
output: 2018-05-09
source: March 3 2001
output: 2001-03-03
source: March 3rd 2001
output: 2001-03-03
source: 1 March 2001
output: 2001-03-01
You can also change these examples to test with your own examples. The next part will give you a better sense of what the attention mechanism is doing--i.e., what part of the input the network is paying attention to when generating a particular output character.

## 3 - Visualizing Attention (Optional / Ungraded)

Since the problem has a fixed output length of 10, it is also possible to carry out this task using 10 different softmax units to generate the 10 characters of the output. But one advantage of the attention model is that each part of the output (say the month) knows it needs to depend only on a small part of the input (the characters in the input giving the month). We can visualize what part of the output is looking at what part of the input.

Consider the task of translating "Saturday 9 May 2018" to "2018-05-09". If we visualize the computed $\alpha^{\langle t, t' \rangle}$ we get this:

**Figure 8**: Full Attention Map

Notice how the output ignores the "Saturday" portion of the input. None of the output timesteps are paying much attention to that portion of the input. We see also that 9 has been translated as 09 and May has been correctly translated into 05, with the output paying attention to the parts of the input it needs to make the translation. The year mostly requires it to pay attention to the input's "18" in order to generate "2018."

### 3.1 - Getting the activations from the network

Let's now visualize the attention values in your network. We'll propagate an example through the network, then visualize the values of $\alpha^{\langle t, t' \rangle}$. To figure out where the attention values are located, let's start by printing a summary of the model.
model.summary()
____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to
====================================================================================================
input_1 (InputLayer)             (None, 30, 37)        0
____________________________________________________________________________________________________
s0 (InputLayer)                  (None, 64)            0
____________________________________________________________________________________________________
bidirectional_1 (Bidirectional)  (None, 30, 64)        17920       input_1[0][0]
____________________________________________________________________________________________________
repeat_vector_1 (RepeatVector)   (None, 30, 64)        0           s0[0][0]
                                                                   lstm_1[0][0] ... lstm_1[8][0]
____________________________________________________________________________________________________
concatenate_1 (Concatenate)      (None, 30, 128)       0           bidirectional_1[0][0]
                                                                   repeat_vector_1[0][0] ... repeat_vector_1[9][0]
____________________________________________________________________________________________________
dense_1 (Dense)                  (None, 30, 10)        1290        concatenate_1[0][0] ... concatenate_1[9][0]
____________________________________________________________________________________________________
dense_2 (Dense)                  (None, 30, 1)         11          dense_1[0][0] ... dense_1[9][0]
____________________________________________________________________________________________________
attention_weights (Activation)   (None, 30, 1)         0           dense_2[0][0] ... dense_2[9][0]
____________________________________________________________________________________________________
dot_1 (Dot)                      (None, 1, 64)         0           attention_weights[0][0] ... attention_weights[9][0]
                                                                   bidirectional_1[0][0]
____________________________________________________________________________________________________
c0 (InputLayer)                  (None, 64)            0
____________________________________________________________________________________________________
lstm_1 (LSTM)                    [(None, 64), (None, 6 33024       dot_1[0][0], s0[0][0], c0[0][0]
                                                                   ...
                                                                   dot_1[9][0], lstm_1[8][0], lstm_1[8][2]
____________________________________________________________________________________________________
dense_3 (Dense)                  (None, 11)            715         lstm_1[0][0] ... lstm_1[9][0]
====================================================================================================
Total params: 52,960
Trainable params: 52,960
Non-trainable params: 0
____________________________________________________________________________________________________
Navigate through the output of `model.summary()` above. You can see that the layer named `attention_weights` outputs the `alphas` of shape (m, 30, 1) before `dot_1` computes the context vector for every time step $t = 0, \ldots, T_y-1$. Let's get the activations from this layer. The function `plot_attention_map()` pulls out the attention values from your model and plots them.
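The helper does the extraction for you, but as a sketch of what happens underneath (assuming the Keras 2 multi-node layer API, where `get_output_at(t)` selects the tensor produced by the layer's t-th call), you could build a sub-model that returns the attention weights directly:

```python
# The `attention_weights` layer is called Ty times, once per output step,
# so it has Ty output nodes; collect one tensor per step.
attn_layer = model.get_layer('attention_weights')
attn_outputs = [attn_layer.get_output_at(t) for t in range(Ty)]

# Sub-model mapping the original inputs to the list of alpha tensors.
attn_model = Model(inputs=model.inputs, outputs=attn_outputs)
# alphas = attn_model.predict([Xoh, s0, c0])  # list of Ty arrays, each (m, Tx, 1)
```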
attention_map = plot_attention_map(model, human_vocab, inv_machine_vocab, "Tuesday 09 Oct 1993", num = 7, n_s = 64)
# LassoLars Regression with PowerTransformer

This code template is for regression analysis using LassoLars regression with the feature-transformation technique PowerTransformer in a pipeline. LassoLars is a lasso model implemented using the LARS algorithm.

### Required Packages
import warnings
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PowerTransformer
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
from sklearn.linear_model import LassoLars

warnings.filterwarnings('ignore')
### Initialization

Filepath of the CSV file:
# filepath
file_path = ""
List of features required for model training:
# x_values
features = []
Target feature for prediction.
# y_value
target = ''
### Data Fetching

Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We will use the pandas library to read the CSV file from its storage path, and the `head` function to display the first few rows.
df = pd.read_csv(file_path)
df.head()
### Feature Selection

Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used to reduce the computational cost of modelling and, in some cases, to improve the performance of the model. We will assign all the required input features to X and the target/outcome to Y.
X = df[features]
Y = df[target]
### Data Preprocessing

Since most machine learning models in the sklearn library don't handle string-category data or null values, we have to explicitly remove or replace them. The snippet below defines functions that fill null values, if any exist, and convert string-class columns by one-hot encoding them.
def NullClearner(df):
    # Fill numeric nulls with the column mean
    if isinstance(df, pd.Series) and df.dtype in ["float64", "int64"]:
        df.fillna(df.mean(), inplace=True)
        return df
    # Fill categorical nulls with the column mode
    elif isinstance(df, pd.Series):
        df.fillna(df.mode()[0], inplace=True)
        return df
    else:
        return df

def EncodeX(df):
    # One-hot encode string/categorical columns
    return pd.get_dummies(df)
Calling preprocessing functions on the feature and target set.
x = X.columns.to_list()
for i in x:
    X[i] = NullClearner(X[i])
X = EncodeX(X)
Y = NullClearner(Y)
X.head()
### Correlation Map

In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
f, ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt='.1f', ax=ax, mask=matrix)
plt.show()
### Data Splitting

The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=123)
### Feature Transformation

Power transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired.

[More on the PowerTransformer module and parameters](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PowerTransformer.html)

### Model

LassoLars is a lasso model implemented using the LARS algorithm, and unlike the implementation based on coordinate descent, this yields the exact solution, which is piecewise linear as a function of the norm of its coefficients.

#### Tuning parameters

> **fit_intercept** -> whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations.

> **alpha** -> constant that multiplies the penalty term. Defaults to 1.0. `alpha = 0` is equivalent to ordinary least squares, solved by LinearRegression. For numerical reasons, using `alpha = 0` with the LassoLars object is not advised; you should prefer the LinearRegression object.

> **eps** -> the machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the `tol` parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.

> **max_iter** -> maximum number of iterations to perform.

> **positive** -> restrict coefficients to be >= 0. Be aware that you might want to remove `fit_intercept`, which is set to True by default. Under the positive restriction the model coefficients will not converge to the ordinary-least-squares solution for small values of alpha. Only coefficients up to the smallest alpha value (`alphas_[alphas_ > 0.].min()` when `fit_path=True`) reached by the stepwise Lars-Lasso algorithm are typically in congruence with the solution of the coordinate descent Lasso estimator.

> **precompute** -> whether to use a precomputed Gram matrix to speed up calculations.
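For illustration only, here is how those tuning parameters could be passed explicitly inside the pipeline (the values below are placeholders, not tuned recommendations):

```python
# Example only: explicitly setting the LassoLars tuning parameters described above.
tuned_model = make_pipeline(
    PowerTransformer(),
    LassoLars(alpha=0.1,                # penalty strength; smaller -> closer to OLS
              fit_intercept=True,       # learn an intercept term
              max_iter=500,             # cap on LARS iterations
              eps=np.finfo(float).eps,  # Cholesky regularization
              positive=False))          # allow negative coefficients
# tuned_model.fit(x_train, y_train)
```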
model = make_pipeline(PowerTransformer(), LassoLars(random_state=123))
model.fit(x_train, y_train)
### Model Accuracy

We will use the trained model to make predictions on the test set, then use the predicted values to measure the accuracy of our model.

> **score**: the `score` function returns the coefficient of determination $R^2$ of the prediction.
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
Accuracy score 72.55 %
> **r2_score**: the **r2_score** function computes the proportion of variability in the target that is explained by our model.

> **mae**: the **mean absolute error** function calculates the total error as the absolute average distance between the real data and the predicted data.

> **mse**: the **mean squared error** function squares the errors, penalizing the model for large errors.
y_pred = model.predict(x_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test, y_pred) * 100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test, y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test, y_pred)))
R2 Score: 72.55 % Mean Absolute Error 303.15 Mean Squared Error 126073.78
### Prediction Plot

We plot the actual target values for the first 20 records of the test set in green, then overlay the model's predictions for the same records in red.
plt.figure(figsize=(14, 10))
plt.plot(range(20), y_test[0:20], color="green")
plt.plot(range(20), model.predict(x_test[0:20]), color="red")
plt.legend(["Actual", "Prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
# Pymongo - mongo in python

To use python with mongo we need the pymongo package - install it using `pip install pymongo`, or via the anaconda application.

### Connecting

To connect to our database we need to instantiate a client connection. To do this we need:

- hostname or ip-address
- port
- username
- password

In addition we may sometimes need to provide an *authSource*. This simply tells Mongo where the information on our user exists.
from pymongo import MongoClient

client = MongoClient(host='18.219.151.47',     # host is the hostname for the database
                     port=27017,               # port is the port number that mongo is running on
                     username='student',       # username for the db
                     password='emse6992pass',  # password for the db
                     authSource='emse6992')    # since our user only exists for the emse6992 db, we need to specify this
***NOTE: NEVER hard-code your password!!!*** A safer pattern is to read credentials from the environment, as in the sketch below.
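A minimal sketch (the environment-variable names here are just examples, not part of the course setup):

```python
import os
from pymongo import MongoClient

# Read the credentials from the environment instead of the notebook source.
client = MongoClient(host=os.environ.get('MONGO_HOST', 'localhost'),
                     port=int(os.environ.get('MONGO_PORT', '27017')),
                     username=os.environ['MONGO_USER'],
                     password=os.environ['MONGO_PASS'],
                     authSource='emse6992')
```

Verify the connection is working: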
client.server_info()
### Accessing Databases and Collections

Even if we have authenticated ourselves, we still need to tell Mongo what database and collections we are interested in. Once connected, those attributes are name-addressable:

- `conn['database_name']` or `conn.database_name`
- `database['coll_name']` or `database.coll_name`

**Connecting to the Database:**
db = client.emse6992
# db = client['emse6992'] - alternative method
Proof we're connected:
db.list_collection_names()
**Connecting to the Collections:**
favs_coll = db.twitter_favorites
# favs_coll = db['twitter_favorites']
Proof this works:
doc = favs_coll.find_one({})
doc
doc['favorited_by_screen_name']
### Querying

Once connected, we are ready to start querying the database. The great thing about Python is its integration with both JSON and Mongo, meaning that the Python Mongo API is almost exactly the same as Mongo's own query API.

#### find_one()

This method works exactly the same as the Mongo equivalent. In addition, the interior logic is a direct 1-to-1 with Mongo's.
doc = favs_coll.find_one({"favorited_by_screen_name": "elonmusk"}) doc
#### In Class Exercise:

Using the **twitter_favorites** collection, find a **single status** with a **tesla hashtag**.
# Room for in-class work
doc = favs_coll.find_one({"hashtags.text": "tesla"},
                         {'hashtags': 1, 'user.screen_name': 1, 'user.description': 1})
print(doc)
#### find()

Likewise, pymongo's **find()** works exactly like mongo's console find() command. One thing to note: `find({})` returns a cursor (iterable), not an actual document.

**In Class Questions:**

1. What is the advantage to using a generator/iterable in this instance?
2. What is the benefit of being able to query for one document (`find_one()`) vs a list of documents (`find()`)?
docs = favs_coll.find({})
print(docs)       # notice this is a cursor, no actual data
print(docs[600])  # by indexing we can extract results from the query
#### Iterating Through Our Cursor

We can prove the query executed correctly by iterating through all of the documents.
# Our query
docs = favs_coll.find({"favorited_by_screen_name": "elonmusk"})

# Variable to store the state of the test
worked = True

# Iterate through each of the docs looking for an invalid state
for doc in docs:
    if doc['favorited_by_screen_name'] != 'elonmusk':
        worked = False
        break

# If worked is still True, then our query worked (or at least passed this evaluation)
if worked:
    print("Worked!!")
else:
    print("Failed!")
Instead of iterating through the documents, we can also extract all of the documents at once by calling `list(docs)`. This approach, though, comes with some drawbacks:

- The code will have to wait for all of the records to be pulled (unless threaded)
- You'll need to ensure that you have the memory to store all of the results
- Any connection errors will reset the process
- etc.
docs = favs_coll.find({"favorited_by_screen_name": "elonmusk"}) doc_lst = list(docs) print(len(doc_lst)) docs.count()
#### In Class Exercise:

Using the **twitter_statuses** collection, calculate the **total number of favorites** that **elonmusk** has received.
stats_coll = db.twitter_statuses

# Room for in-class work
docs = stats_coll.find({'user.screen_name': 'elonmusk'})
tot = sum([doc.get('favorite_count', 0) for doc in docs])
print(tot)
Would we get the same result if we ran this process against the **twitter_favorites** collection?

#### Exception to the Rule

While pymongo's pattern system effectively parallels the mongo shell, there is one key exception: the use of the **$**.

In the mongo shell the following is valid:

- **`db.coll_name.find({"attr": {$exists: true}})`**

However, in pymongo this would be phrased as:

- **`db.coll_name.find({"attr": {"$exists": True}})`**

Since bare `$exists` isn't valid python syntax, these operators need to be wrapped as strings.

#### In Class Exercise:

Using a mixture of mongo queries and python, determine if the person who has the most favorited tweet (***favorites collection***) in 2021 is a friend of Elon Musk's (screen_name - 'elonmusk').

Note: Sorting with pymongo is slightly different - `.sort([("field1", 1), ("field2", -1)])`
# Space for work
from datetime import datetime

date = datetime(2021, 1, 1)

# Most favorited tweet since the start of 2021
docs = favs_coll.find({"created_at": {"$gte": date}}).sort([('favorite_count', -1)])
user = docs[0].get('user').get('screen_name')

friends_coll = db.twitter_friends
doc = friends_coll.find_one({
    "$and": [
        {"screen_name": user},
        {"friend_of_screen_name": 'elonmusk'}
    ]
})

if doc:
    print("friends")
else:
    print("not friends")
not friends
#### insert_one() and insert_many()

These methods enable us to insert one or more documents into the collection.

**Do not run the following sections!**

**Question**: Will the following cell cause an error?
test_coll = db.test_collection

doc = test_coll.find_one({"test": "passed!"})
print(doc)
None
We can insert any valid object by simply calling:

- **`coll_name.insert_one(doc)`**

*Note: If we do not provide an `_id` field in the document, mongo will automatically create one. This means that there is nothing stopping us from inserting duplicate records.*
doc = {"test": "passed!"} result = test_coll.insert_one(doc) result.inserted_id
We can verify on the python side by querying for the record
doc = test_coll.find_one({"test": "passed!"})
print(doc)
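As an aside, if duplicate inserts are a concern, one common pattern (sketched here, not run in class) is an upsert: update the matching document, or insert it only when no match exists:

```python
# Upsert sketch: matches on the filter; inserts the document if nothing matches.
result = test_coll.update_one({"test": "passed!"},
                              {"$set": {"test": "passed!"}},
                              upsert=True)
print(result.upserted_id)  # None when an existing document was matched instead
```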
We can also insert many documents at once:

- **`coll_name.insert_many(docs)`**
- where docs is a list of valid BSON documents
# Don't run this - just for demonstration
docs = [{'test': 'passed-' + str(x)} for x in range(5)]
test_coll.insert_many(docs)
Verification:
# Since it's a sample collection it only has our inserted docs
docs = test_coll.find({})
docs_lst = list(docs)

for doc in docs_lst:
    # This will simply help the formatting on the output
    print(doc)
#### update_one() and update_many()

As discussed in the slides, these methods are used to modify an existing record. While they are a bit more complex than the other methods, I did want to provide a little example.

**`coll_name.update_one(find_pattern, update_pattern)`**

1. We find the document(s) that match the find_pattern
    - The find_pattern follows the same structure as the mongo shell and pymongo find methods
2. We dictate the update pattern for the identified document(s)
# Here we will be adding an attribute that indicates the document has been updated
test_coll.update_one({"test": "passed!"}, {"$set": {"updated": True}})

# Query on the new attribute to confirm the update took effect
doc = test_coll.find_one({"updated": True})
print(doc)
Works the same way for **`coll_name.update_many(find_pattern, update_pattern)`**
test_coll.update_many({"test": {"$exists": True}}, {"$set": {"updated": True}})

docs = test_coll.find({})
for doc in docs:
    # This will simply help the formatting on the output
    print(doc)
#### delete_one() and delete_many()

Deleting records works almost the same way as updating, except we only provide a **find_pattern** to the method.

**`coll_name.delete_one(find_pattern)`**
result = test_coll.delete_one({"updated": True})
Now we shouldn't be able to find that document:
doc = test_coll.find_one({"updated": True})
print(doc)
We can also inspect the **DeleteResult** from the command:
print(result.raw_result)
print(result.deleted_count)
print(result.acknowledged)
Small example using **`coll_name.delete_many()`**
def num_field(field):
    # Count the documents where `field` exists
    docs = test_coll.find({field: {"$exists": True}})
    count = sum(1 for x in docs)
    return count

print(num_field('test'))
test_coll.delete_many({'test': {"$exists": True}})
print(num_field('test'))
#### In Class Exercise:

1. Insert a JSON document into the test_collection with the following structure:

```JSON
{
    "name": `your_name`,
    "favorite_movie": `movie_name`,
    "favorite_bands": [
        `band_name_1`,
        `band_name_2`,
        `etc.`
    ]
}
```

2. Review the response object and execute a query in python to prove your document has successfully been inserted.
3. Using python, delete your object and verify the results by reviewing the response object and querying the collection.
# Space for work
resp = test_coll.insert_one(
    {
        "name": "Joel",
        "favorite_movie": 'Big Fish',
        "favorite_bands": [
            'Jon Bellion',
            'Blink-182'
        ]
    }
)

if resp.acknowledged:
    print("Inserted")

_id = resp.inserted_id
test_coll.find_one({"_id": _id})

resp = test_coll.delete_one({"_id": _id})
if resp.acknowledged:
    print(f'{resp.deleted_count} documents removed')
1 documents removed
## Introduction

In this notebook you will learn about the **AR-CNN** - a novel self-correcting, autoregressive model that uses a convolutional neural network in its architecture. By the end of this notebook, you will have trained and run inference on your very own custom model. This notebook dives into details on the model and assumes a moderate level of understanding of machine learning concepts; as a result, we encourage you to read the introductory [learning capsules](https://console.aws.amazon.com/deepcomposer/home?region=us-east-1learningCapsules) before going through this notebook.

Traditionally, there have been two primary approaches to generating music with deep neural network-based generative models. One treats music generation as an image generation problem, while the other treats music as a time series generation problem analogous to autoregressive language modeling. The AR-CNN model uses elements from both approaches to generate music. We view each piece of music as a piano roll (an image representation of music), but generate each note (i.e. pixel) autoregressively.

Generating images autoregressively has been an area of interest to researchers:

* Orderless NADE showcased an approach to generating images assuming ordering-invariance in the next pixel to be added.
* PixelCNN demonstrated with a fixed row-by-row ordering that an autoregressive approach can generate convincing results for CIFAR-10.

In the music domain, CocoNET - the algorithm behind Google's Bach Doodle - adopts an approach similar to orderless NADE, but uses Gibbs sampling to obtain inference results. One common theme with autoregressive approaches, however, is that they are very prone to accumulation of error. Our approach is novel in that the model is trained to detect mistakes - including those it made itself - and fix them. We do this by viewing music generation as a series of **edit events**, which can be either the addition of a new note or the removal of an existing note. An **edit sequence** is a series of **edit events**, and every edit sequence directly corresponds to a piece of music. By training our model to view the problem as edit events rather than as an entire image or just the addition of notes, we found that our model is able to offset accumulation of error and generate higher quality music.

Now that you understand the basic theory behind our approach, let's dive into the practical code. In the next section we discuss and show examples using the piano roll format.
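The sketch below (a toy illustration; the names and grid size are ours, not from the AR-CNN codebase) treats a piece as a boolean piano-roll grid and replays an edit sequence on it, toggling one note per edit event:

```python
import numpy as np

TIME_STEPS, PITCHES = 128, 128  # the grid size used later in this notebook

def apply_edit_sequence(edit_events):
    """Replay (time, pitch) edit events on an empty piano roll.
    Each event toggles a cell: it adds a note if absent, removes it if present."""
    roll = np.zeros((TIME_STEPS, PITCHES), dtype=bool)
    for t, p in edit_events:
        roll[t, p] = ~roll[t, p]
    return roll

# Three additions followed by the removal of the second note
roll = apply_edit_sequence([(0, 60), (4, 64), (8, 67), (4, 64)])
print(int(roll.sum()))  # -> 2 notes remain
```

## Dependencies

First, let's install and import all of the python packages we will use throughout the tutorial.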
# The MIT-Zero License
# Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.

# Create the environment and install required packages
!pip install -r requirements.txt

# Imports
import os
import glob
import json
import numpy as np
import keras
from enum import Enum
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D, concatenate, BatchNormalization, Dropout
from keras.optimizers import Adam, RMSprop
from keras import backend as K
from random import randrange
import random
import math
import pypianoroll
from utils.midi_utils import play_midi, plot_pianoroll, get_music_metrics, process_pianoroll, process_midi
from constants import Constants
from augmentation import AddAndRemoveAPercentageOfNotes
from data_generator import PianoRollGenerator
from utils.generate_training_plots import GenerateTrainingPlots
from inference import Inference
## Dataset Summary

In this tutorial, we use the [`JSB-Chorales-dataset`](http://www-etud.iro.umontreal.ca/~boulanni/icml2012), comprising 229 chorale snippets. A chorale is a hymn that is usually sung with a single voice playing a simple melody and three lower voices providing harmony. In this dataset, these voices are represented by four piano tracks.

If you want to train the AR-CNN model on your own dataset, replace the current **data_dir** path with the directory containing your midi files.

Let's listen to a song from this dataset.
# Get the list of midi files
data_dir = 'data/*.mid'
midi_files = glob.glob(data_dir)

random_midi = randrange(len(midi_files))
play_midi(midi_files[random_midi])
## Data Format - Piano Roll

For the purpose of this tutorial, we represent music from the JSB-Chorales dataset in the piano roll format.

A **piano roll** is a discrete, image-like representation of music which can be viewed as a two-dimensional grid with **"Time"** on the horizontal axis and **"Pitch"** on the vertical axis. In our use case, the presence of a pixel in any particular cell in this grid indicates if a note was played or not at that time and pitch.

Let us look at a few piano rolls in our dataset. In this example, a single piano roll track has 128 discrete time steps and 128 pitches. When the AR-CNN model comes across midi files with multiple tracks, all of the tracks are merged to form a single track, which can be visualized below.

You might notice this representation looks similar to an image. While the sequence of notes is often the natural way that people view music, many modern machine learning models instead treat music as images and leverage existing techniques within the computer vision domain. You will see such techniques used in our architecture later in this tutorial.

**Why 128 time steps?**

For the purpose of this tutorial, we sample eight non-empty [bars](https://en.wikipedia.org/wiki/Bar_(music)) from each song in the JSB-Chorales dataset. A **bar** (or **measure**) is a unit of composition and contains four beats for songs in our particular dataset (our songs are all in 4/4 time). We've found that using a resolution of four time steps per beat captures enough of the musical detail in this dataset. This yields...

$$ \frac{4\;timesteps}{1\;beat} \cdot \frac{4\;beats}{1\;bar} \cdot \frac{8\;bars}{1} = 128\;timesteps $$
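As a quick check of that arithmetic (plain literals here, mirroring the values stated above rather than reading them from `Constants`):

```python
# Spell out the time-step arithmetic from the text above.
beat_resolution = 4   # time steps per beat
beats_per_bar = 4     # beats per bar (4/4 time)
bars = 8              # bars sampled per song

n_timesteps = beat_resolution * beats_per_bar * bars
print(n_timesteps)  # -> 128
```

## Create The Dataset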
# Generate midi file samples
def generate_samples(midi_files, bars, beats_per_bar, beat_resolution, bars_shifted_per_sample):
    """
    dataset_files: All files in the dataset
    return: piano roll samples sized to X bars
    """
    timesteps_per_nbars = bars * beats_per_bar * beat_resolution
    time_steps_shifted_per_sample = bars_shifted_per_sample * beats_per_bar * beat_resolution
    samples = []
    for midi_file in midi_files:
        # Parse the midi file and get the piano roll
        pianoroll = process_midi(midi_file, beat_resolution)
        samples.extend(process_pianoroll(pianoroll, time_steps_shifted_per_sample, timesteps_per_nbars))
    return samples

# Convert input midi files to tensors
dataset_samples = generate_samples(midi_files,
                                   Constants.bars,
                                   Constants.beats_per_bar,
                                   Constants.beat_resolution,
                                   Constants.bars_shifted_per_sample)

# Shuffle the dataset
random.shuffle(dataset_samples)

# Visualize a random piano roll
random_pianoroll = dataset_samples[randrange(len(dataset_samples))]
plot_pianoroll(pianoroll=random_pianoroll, beat_resolution=4)
## Training Augmentation

The augmented **input piano roll** is created by adding and removing notes from the original piano roll. By keeping the original piano roll as the target, the model learns what edit events (i.e. notes to add and remove) are needed to recreate it from the augmented piano roll. The augmented piano roll can represent a user input melody which has some mistakes / off-tune notes that need to be corrected. In this way, the model learns how to fix/improve the input.

During training, the data generator creates (input, target) pairs by applying augmentations on the piano rolls present in the dataset. In each epoch, different notes are added and removed from original piano rolls to form the augmented piano rolls (as these notes are added/removed in random pixels). This means that we will have a new set of augmented piano rolls for each epoch, and this effectively creates an unlimited input training dataset. There can be multiple augmented piano rolls for a single original piano roll, and this can be configured using the parameter **"samples_per_ground_truth_data_item"** in **constants.py**. Details of adding and removing notes during augmentation are explained below.

### Removing Notes From The Original Piano Roll

Notes are randomly removed from the original piano roll to form the augmented piano roll. The model learns that it needs to add these notes to the augmented piano roll to recreate the original piano roll. This teaches the model how to fill in missing notes. The percentage of original notes to remove is determined by sampling from a uniform distribution between a lower and upper bound. The default lower bound of notes to remove is 0%, as this helps the model learn that it doesn't need to add notes to the input when the input is already "perfect". The default upper bound is 100%, as this helps the model create music when nothing is given as input (the unconditioned music generation case).

![SegmentLocal](images/removenotes.gif "segment")

### Adding Notes To The Original Piano Roll

Notes are randomly added to the original piano roll to form the augmented piano roll. The model learns that it needs to remove these notes from the augmented piano roll to recreate the original. This teaches the model how to remove unnecessary or off-tune notes. The percentage of extra notes to add is determined by sampling from a uniform distribution between a lower and upper bound (similar to the removing-notes case). The default lower bound of notes to add is 0% of the current empty notes. This teaches the model to remove no notes when the input is already "perfect". The default upper bound of notes to add is 1.5% of the current empty pixels (that do not have a note). This upper percentage may seem small, but since the percentage is out of the total empty pixels (which are usually far greater than the number of notes), the upper bound ends up being sufficiently large.

![SegmentLocal](images/addnotes.gif "segment")

For both the percentage of notes to add and to remove, sampling is done from a uniform distribution to ensure that the model sees different potential states equally often. During training, this equal representation helps the model learn how to fill in or remove different numbers of notes, and how to recreate the original from any stage of the input. This is useful during the iterative inference process, which we describe in more detail in the Inference section. Both adding and removing notes are performed together on each piano roll.
The sampling lower and upper bounds for these can be changed via the parameters **"sampling_lower_bound_remove"**, **"sampling_upper_bound_remove"**, **"sampling_lower_bound_add"**, and **"sampling_upper_bound_add"**. A plain-numpy sketch of the augmentation follows.
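This is a minimal sketch of the augmentation described above (illustrative only; in this repo the real implementation lives in `augmentation.AddAndRemoveAPercentageOfNotes`):

```python
import numpy as np

def augment(roll, remove_pct, add_pct):
    """Remove `remove_pct`% of the existing notes and add notes to `add_pct`%
    of the currently empty cells, returning the augmented piano roll."""
    aug = roll.copy()
    notes = np.argwhere(aug == 1)
    empties = np.argwhere(aug == 0)
    n_remove = int(len(notes) * remove_pct / 100.0)
    n_add = int(len(empties) * add_pct / 100.0)
    for t, p in notes[np.random.choice(len(notes), n_remove, replace=False)]:
        aug[t, p] = 0
    for t, p in empties[np.random.choice(len(empties), n_add, replace=False)]:
        aug[t, p] = 1
    return aug

# Each epoch, the percentages themselves would be drawn uniformly between the
# configured bounds, e.g.:
# remove_pct = np.random.uniform(sampling_lower_bound_remove, sampling_upper_bound_remove)
```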
sampling_lower_bound_remove = 0
sampling_upper_bound_remove = 100
sampling_lower_bound_add = 1
sampling_upper_bound_add = 1.5
## Loss Function

Rather than using a traditional loss function such as binary crossentropy, we calculate a custom loss function for our model. In our augmentation we both add extraneous notes to and remove existing notes from the piano roll. Our end goal is to have the model pick the next **edit event** (i.e. the next note to add or remove) so that we can take the input piano roll and bring it closer to the original piano roll, also known as the **target piano roll**. Notice that the model could pick any one of the extraneous or missing notes to bring the input piano roll closer to the target piano roll. These extraneous or missing notes are the **symmetric difference** between the input and target piano rolls. We can calculate the symmetric difference as the **exclusive-or** between the input and target piano rolls. Assuming that choosing any of the notes in the symmetric difference is equally likely, the model's goal is to minimize the difference between its output and a uniform distribution over the probabilities of each of those notes. This difference in distributions can be calculated as the **Kullback-Leibler divergence**. Thus our loss function is the KL divergence between the model's output and the uniform distribution over all pixels/note probabilities in the symmetric difference.
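A tiny numpy illustration of that target construction (toy 2×3 rolls; the uniform target here is exactly what `target / K.sum(target)` computes in the loss below):

```python
import numpy as np

input_roll = np.array([[1, 0, 1],
                       [0, 0, 1]])
target_roll = np.array([[1, 1, 0],
                        [0, 0, 1]])

# Symmetric difference = XOR: the notes the model could add or remove.
sym_diff = np.logical_xor(input_roll, target_roll).astype(float)

# Uniform distribution over those candidate edits.
uniform_target = sym_diff / sym_diff.sum()
print(uniform_target)  # [[0, 0.5, 0.5], [0, 0, 0]]
```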
# Customized loss function
class Loss():
    @staticmethod
    def built_in_softmax_kl_loss(target, output):
        '''
        Custom loss function
        :param target: ground truth values
        :param output: predicted values
        :return: kullback_leibler_divergence loss
        '''
        target = K.flatten(target)
        output = K.flatten(output)
        target = target / K.sum(target)
        output = K.softmax(output)
        return keras.losses.kullback_leibler_divergence(target, output)
## Model Architecture

Our model architecture is adapted from the U-Net architecture (a popular CNN used extensively in the computer vision domain), consisting of an **"encoder"** that maps the single-track music data (represented as piano roll images) to a relatively lower-dimensional "latent space" and a **"decoder"** that maps the latent space back to multi-track music data.

Here is the input provided to the model:

**Single-track piano roll input**: A single melody track of size (128, 128, 1) => (TimeStep, NumPitches, NumTracks) is provided as the input to the model. Notice from the figure below that the encoding layers of the model on the left side and the decoder layers on the right side are connected to create a U-shape, thereby giving the name U-Net to this architecture.
# Build the model
class ArCnnModel():
    def __init__(self,
                 input_dim,
                 num_filters,
                 growth_factor,
                 num_layers,
                 dropout_rate_encoder,
                 dropout_rate_decoder,
                 batch_norm_encoder,
                 batch_norm_decoder,
                 learning_rate,
                 optimizer_enum,
                 pre_trained=None):
        # Piano roll input dimensions
        self.input_dim = input_dim
        # Number of filters in the convolution
        self.num_filters = num_filters
        # Growth rate of number of filters at each convolution
        self.growth_factor = growth_factor
        # Number of encoder and decoder layers
        self.num_layers = num_layers
        # A list of dropout values at each encoder layer
        self.dropout_rate_encoder = dropout_rate_encoder
        # A list of dropout values at each decoder layer
        self.dropout_rate_decoder = dropout_rate_decoder
        # A list of flags for batch normalization at each encoder layer
        self.batch_norm_encoder = batch_norm_encoder
        # A list of flags for batch normalization at each decoder layer
        self.batch_norm_decoder = batch_norm_decoder
        # Path to pretrained model
        self.pre_trained = pre_trained
        # Learning rate for the model
        self.learning_rate = learning_rate
        # Optimizer to use while training the model
        self.optimizer_enum = optimizer_enum

        if self.num_layers < 1:
            raise ValueError(
                "Number of layers should be greater than or equal to 1")

    # Number of times Conv2D is performed per layer
    CONV_PER_LAYER = 2

    def down_sampling(self,
                      layer_input,
                      num_filters,
                      batch_normalization=False,
                      dropout_rate=0):
        '''
        :param layer_input: input layer to the downsampling block
        :param num_filters: number of filters
        :param batch_normalization: flag to check if batch normalization is to be performed
        :param dropout_rate: to regularize overfitting
        '''
        encoder = layer_input
        for _ in range(self.CONV_PER_LAYER):
            encoder = Conv2D(num_filters, (3, 3),
                             activation='relu',
                             padding='same')(encoder)
        pooling_layer = MaxPooling2D(pool_size=(2, 2))(encoder)
        if dropout_rate:
            pooling_layer = Dropout(dropout_rate)(pooling_layer)
        if batch_normalization:
            pooling_layer = BatchNormalization()(pooling_layer)
        return encoder, pooling_layer

    def up_sampling(self,
                    layer_input,
                    skip_input,
                    num_filters,
                    batch_normalization=False,
                    dropout_rate=0):
        '''
        :param layer_input: input layer to the upsampling block
        :param skip_input: skip connection from the corresponding encoder layer
        :param num_filters: number of filters
        :param batch_normalization: flag to check if batch normalization is to be performed
        :param dropout_rate: to regularize overfitting
        '''
        decoder = concatenate(
            [UpSampling2D(size=(2, 2))(layer_input), skip_input])
        if batch_normalization:
            decoder = BatchNormalization()(decoder)
        for _ in range(self.CONV_PER_LAYER):
            decoder = Conv2D(num_filters, (3, 3),
                             activation='relu',
                             padding='same')(decoder)
        if dropout_rate:
            decoder = Dropout(dropout_rate)(decoder)
        return decoder

    def get_optimizer(self, optimizer_enum, learning_rate):
        '''
        Use either Adam or RMSprop.
        '''
        if OptimizerType.ADAM == optimizer_enum:
            optimizer = Adam(lr=learning_rate)
        elif OptimizerType.RMSPROP == optimizer_enum:
            optimizer = RMSprop(lr=learning_rate)
        else:
            raise Exception("Only Adam and RMSProp optimizers are supported")
        return optimizer

    def build_model(self):
        # Create lists of encoder and decoder layers
        down_sampling_layers = []
        up_sampling_layers = []
        inputs = Input(self.input_dim)
        layer_input = inputs
        num_filters = self.num_filters

        # Encoder (downsampling) layers
        for layer in range(self.num_layers):
            encoder, pooling_layer = self.down_sampling(
                layer_input=layer_input,
                num_filters=num_filters,
                batch_normalization=self.batch_norm_encoder[layer],
                dropout_rate=self.dropout_rate_encoder[layer])
            down_sampling_layers.append(encoder)
            layer_input = pooling_layer  # carry the pooling layer into the next block
            num_filters *= self.growth_factor

        # Bottleneck layer
        bottle_neck = Conv2D(num_filters, (3, 3),
                             activation='relu',
                             padding='same')(pooling_layer)
        bottle_neck = Conv2D(num_filters, (3, 3),
                             activation='relu',
                             padding='same')(bottle_neck)
        num_filters //= self.growth_factor

        # Decoder (upsampling) layers
        decoder = bottle_neck
        for index, layer in enumerate(reversed(down_sampling_layers)):
            decoder = self.up_sampling(
                layer_input=decoder,
                skip_input=layer,
                num_filters=num_filters,
                batch_normalization=self.batch_norm_decoder[index],
                dropout_rate=self.dropout_rate_decoder[index])
            up_sampling_layers.append(decoder)
            num_filters //= self.growth_factor

        output = Conv2D(1, 1, activation='linear')(up_sampling_layers[-1])
        model = Model(inputs=inputs, outputs=output)
        optimizer = self.get_optimizer(self.optimizer_enum, self.learning_rate)
        model.compile(optimizer=optimizer, loss=Loss.built_in_softmax_kl_loss)
        if self.pre_trained:
            model.load_weights(self.pre_trained)
        model.summary()
        return model


class OptimizerType(Enum):
    ADAM = "Adam"
    RMSPROP = "RMSprop"
_____no_output_____
MIT-0
ar-cnn/ AutoRegressiveCNN.ipynb
byhqsr/aws-samples-aws-deepcomposer-samples
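The loss used above, Loss.built_in_softmax_kl_loss, is defined elsewhere in this repository. As a rough, purely illustrative sketch (not the repository's actual implementation), a softmax-plus-KL-divergence loss over a piano roll could look like this:

from keras import backend as K

def softmax_kl_loss_sketch(y_true, y_pred):
    """Illustrative only: softmax over the flattened piano roll, then KL divergence."""
    p = K.softmax(K.batch_flatten(y_true))
    q = K.softmax(K.batch_flatten(y_pred))
    # clip to avoid log(0)
    p = K.clip(p, K.epsilon(), 1)
    q = K.clip(q, K.epsilon(), 1)
    return K.sum(p * K.log(p / q), axis=-1)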
Training

We split the dataset into training and validation sets. The default training-validation split is 0.9, but it can be changed via the **"training_validation_split"** parameter in **constants.py**.

During training, the data generator creates (input, target) pairs by applying augmentations to the piano rolls in the dataset. Details of the augmentation are described in the previous section. In each epoch, different notes are added to and removed from the original piano rolls to form the augmented piano rolls (these notes are added/removed at random spots each time). This means that we get a new set of augmented piano rolls in every epoch, which effectively creates an unlimited training dataset.
dataset_size = len(dataset_samples)
dataset_split = math.floor(dataset_size * Constants.training_validation_split)
print(0, dataset_split, dataset_size)
training_samples = dataset_samples[0:dataset_split]
print("training samples length: {}".format(len(training_samples)))
# slice from dataset_split (not dataset_split + 1) so that no sample is dropped
validation_samples = dataset_samples[dataset_split:dataset_size]
print("validation samples length: {}".format(len(validation_samples)))
_____no_output_____
MIT-0
ar-cnn/ AutoRegressiveCNN.ipynb
byhqsr/aws-samples-aws-deepcomposer-samples
All the ArCnn model related hyperparameters can be changed below. For instance, to decrease the model size, change the default value of num_layers from 5 and update the dropout_rate_encoder, dropout_rate_decoder, batch_norm_encoder and batch_norm_decoder lists accordingly (an illustrative three-layer configuration follows the code below).
# Piano Roll Input Dimensions
input_dim = (Constants.bars * Constants.beats_per_bar *
             Constants.beat_resolution, Constants.number_of_pitches,
             Constants.number_of_channels)
# Number Of Filters In The Convolution
num_filters = 32
# Growth Rate Of The Number Of Filters At Each Convolution
growth_factor = 2
# Number Of Encoder And Decoder Layers
num_layers = 5
# A List Of Dropout Values At Each Encoder Layer
dropout_rate_encoder = [0, 0.5, 0.5, 0.5, 0.5]
# A List Of Dropout Values At Each Decoder Layer
dropout_rate_decoder = [0.5, 0.5, 0.5, 0.5, 0]
# A List Of Flags Indicating Whether batch_normalization Should Be Performed At Each Encoder
batch_norm_encoder = [True, True, True, True, False]
# A List Of Flags Indicating Whether batch_normalization Should Be Performed At Each Decoder
batch_norm_decoder = [True, True, True, True, False]
# Path To A Pretrained Model If You Want To Initialize The Network Weights With It
pre_trained = False
# Learning Rate Of The Model
learning_rate = 0.001
# Optimizer To Use While Training The Model
optimizer_enum = OptimizerType.ADAM
# Batch Size
batch_size = 32
# Number Of Epochs
epochs = 500
# The Number Of Batch Iterations Before A Training Epoch Is Considered Finished
steps_per_epoch = int(
    len(training_samples) * Constants.samples_per_ground_truth_data_item /
    int(batch_size))
print("The Total Number Of Steps Per Epoch Is: " + str(steps_per_epoch))
# Total Number Of Time Steps
n_timesteps = Constants.bars * Constants.beat_resolution * Constants.beats_per_bar
_____no_output_____
MIT-0
ar-cnn/ AutoRegressiveCNN.ipynb
byhqsr/aws-samples-aws-deepcomposer-samples
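For example, a smaller three-layer variant could look like this (the values are illustrative; what matters is that build_model() indexes each list once per layer, so every list must have exactly num_layers entries):

# Illustrative smaller configuration: every list must have num_layers entries
num_layers = 3
dropout_rate_encoder = [0, 0.5, 0.5]
dropout_rate_decoder = [0.5, 0.5, 0]
batch_norm_encoder = [True, True, False]
batch_norm_decoder = [True, True, False]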
Build The Data Generators

Now let's build the training and validation data generators to create data on the fly during training (a simplified sketch of such a generator follows the code below).
# Training Data Generator
training_data_generator = PianoRollGenerator(
    sample_list=training_samples,
    sampling_lower_bound_remove=sampling_lower_bound_remove,
    sampling_upper_bound_remove=sampling_upper_bound_remove,
    sampling_lower_bound_add=sampling_lower_bound_add,
    sampling_upper_bound_add=sampling_upper_bound_add,
    batch_size=batch_size,
    bars=Constants.bars,
    samples_per_data_item=Constants.samples_per_ground_truth_data_item,
    beat_resolution=Constants.beat_resolution,
    beats_per_bar=Constants.beats_per_bar,
    number_of_pitches=Constants.number_of_pitches,
    number_of_channels=Constants.number_of_channels)

# Validation Data Generator
validation_data_generator = PianoRollGenerator(
    sample_list=validation_samples,
    sampling_lower_bound_remove=sampling_lower_bound_remove,
    sampling_upper_bound_remove=sampling_upper_bound_remove,
    sampling_lower_bound_add=sampling_lower_bound_add,
    sampling_upper_bound_add=sampling_upper_bound_add,
    batch_size=batch_size,
    bars=Constants.bars,
    samples_per_data_item=Constants.samples_per_ground_truth_data_item,
    beat_resolution=Constants.beat_resolution,
    beats_per_bar=Constants.beats_per_bar,
    number_of_pitches=Constants.number_of_pitches,
    number_of_channels=Constants.number_of_channels)
_____no_output_____
MIT-0
ar-cnn/ AutoRegressiveCNN.ipynb
byhqsr/aws-samples-aws-deepcomposer-samples
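PianoRollGenerator is defined elsewhere in this repository. Conceptually, it behaves like a keras.utils.Sequence that augments each ground-truth piano roll on the fly; a heavily simplified sketch (the class name and the 10% removal rate are assumptions, not the repository's implementation):

import numpy as np
from keras.utils import Sequence

class PianoRollGeneratorSketch(Sequence):
    """Illustrative only: yields (augmented, original) piano-roll batches."""

    def __init__(self, sample_list, batch_size):
        self.sample_list = sample_list
        self.batch_size = batch_size

    def __len__(self):
        return len(self.sample_list) // self.batch_size

    def __getitem__(self, idx):
        batch = self.sample_list[idx * self.batch_size:(idx + 1) * self.batch_size]
        inputs, targets = [], []
        for roll in batch:
            augmented = roll.copy()
            # randomly silence some cells ("remove notes"); adding notes works analogously
            mask = np.random.rand(*augmented.shape) < 0.1
            augmented[mask] = 0
            inputs.append(augmented)
            targets.append(roll)
        return np.asarray(inputs), np.asarray(targets)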
Create Callbacks for the model.

1. Create **Training Vs Validation** loss plots during training.
2. Save model checkpoints based on the **Best Validation Loss**.
# Callback For Loss Plots plot_losses = GenerateTrainingPlots() ## Checkpoint Path checkpoint_filepath = 'checkpoints/-best-model-epoch:{epoch:04d}.hdf5' # Callback For Saving Model Checkpoints model_checkpoint_callback = keras.callbacks.ModelCheckpoint( filepath=checkpoint_filepath, save_weights_only=False, monitor='val_loss', mode='min', save_best_only=True) # Create A List Of Callbacks callbacks_list = [plot_losses, model_checkpoint_callback] # Create A Model Instance MusicModel = ArCnnModel(input_dim = input_dim, num_filters = num_filters, growth_factor = growth_factor, num_layers = num_layers, dropout_rate_encoder = dropout_rate_encoder, dropout_rate_decoder = dropout_rate_decoder, batch_norm_encoder = batch_norm_encoder, batch_norm_decoder = batch_norm_decoder, pre_trained = pre_trained, learning_rate = learning_rate, optimizer_enum = optimizer_enum) model = MusicModel.build_model() # Start Training history = model.fit_generator(training_data_generator, validation_data = validation_data_generator, steps_per_epoch = steps_per_epoch, epochs = epochs, callbacks = callbacks_list)
_____no_output_____
MIT-0
ar-cnn/ AutoRegressiveCNN.ipynb
byhqsr/aws-samples-aws-deepcomposer-samples
Inference

Generating A Bach-Like Enhanced Melody For A Custom Input

Congratulations! You have trained your very own autoregressive model to generate music. Let us see how our music model performs on a custom input.

Before loading the model, we need to load the inference-related parameters. After that, we load our pretrained model and generate a new melody based on **"Twinkle Twinkle Little Star"**. Inference is done by sampling from the model's predicted probability distribution across the entire piano roll. It is an iterative process: in every iteration a note is added to or removed from the input via sampling, and the new input is fed back into the model. Because the model is trained to both add and remove notes, it can improve the input melody and also correct mistakes it made in earlier iterations.

You can change certain inference parameters to observe the differences in the generated music, as described below.

* **Sampling Iterations** - This specifies the number of iterations during inference. A larger number of sampling iterations gives the model enough time to improve the input melody and correct any mistakes it made along the way. Beyond a certain number of sampling iterations, the model tends to add notes and then remove those same notes in subsequent iterations (or vice versa); this implies that convergence has been reached.
* **Maximum Notes to Remove** - This specifies the maximum percentage of notes of the original input melody that can be removed during inference. If you choose 0%, none of your original melody will be removed during inference.
* **Maximum Notes to Add** - This specifies the maximum number of new notes to add to the original input melody during inference.

With "Maximum Notes to Remove" and "Maximum Notes to Add", you can choose the degree to which you would like to preserve your original input melody. However, by restricting the model's ability to add or remove notes, you may risk losing some musical quality.

* **Creativity** - The output probability distribution generated by the model is obtained via softmax, and you can change the softmax temperature to get different levels of "creativity". With lower temperatures, the output probability distribution has more distinct peaks, and the model is more confident in its predictions. With higher temperatures, the output probability distribution is flatter, and the model has a higher chance of choosing less likely notes to add or remove. By increasing the temperature, you give the model the ability to take more risks and increase its "creativity" (a minimal sketch of temperature-scaled sampling follows the code below).

Let us first load our last saved or pretrained checkpoint and the inference-related parameters. To modify the inference-related parameters, please navigate to **inference_parameters.json** and change the values in the json file.
# Load The Inference Related Parameters with open('inference_parameters.json') as json_file: inference_params = json.load(json_file) # Create An Inference Object inference_obj = Inference() # Load The Checkpoint inference_obj.load_model('checkpoints/-best-model-epoch:0001.hdf5')
_____no_output_____
MIT-0
ar-cnn/ AutoRegressiveCNN.ipynb
byhqsr/aws-samples-aws-deepcomposer-samples
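As a small illustration of the "creativity" parameter described above, temperature-scaled sampling from a vector of logits can be sketched like this (the names are illustrative; the repository's Inference class handles this internally):

import numpy as np

def sample_with_temperature(logits, temperature=1.0):
    """Sample one index from a temperature-scaled softmax (illustrative)."""
    scaled = logits / temperature
    scaled = scaled - scaled.max()  # subtract the max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return np.random.choice(len(probs), p=probs)

# higher temperature -> flatter distribution -> riskier, more "creative" choices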
Please navigate to the **sample_inputs** directory to find different input melodies we have already created for you to help you generate novel compositions.

To download the novel compositions you have created using the model we just trained, please navigate to the **outputs** directory and download the midi file.
# Generate The Composition inference_obj.generate_composition('sample_inputs/twinkle_twinkle.midi', inference_params)
_____no_output_____
MIT-0
ar-cnn/ AutoRegressiveCNN.ipynb
byhqsr/aws-samples-aws-deepcomposer-samples
Now, Let's Play The Generated Output And Listen To It
play_midi("outputs/output_0.mid")
_____no_output_____
MIT-0
ar-cnn/ AutoRegressiveCNN.ipynb
byhqsr/aws-samples-aws-deepcomposer-samples
Evaluate Results

Now that we have finished generating our enhanced melody, let's find out how we did. We will analyze our output using the three metrics below and compare them with the sample input:

- **Empty Bar Rate:** The ratio of empty bars to the total number of bars (an illustrative computation follows the code below).
- **Pitch Histogram Distance:** A metric that captures the distribution and position of pitches.
- **In Scale Ratio:** The ratio of the number of notes that are in the C major key, a common key found in music, to the total number of notes.

After computing the metrics, let's also visualize the input piano roll and compare it with the generated output piano roll to see the notes that were added.
# Input Midi Metrics: print("The input midi metrics are:") get_music_metrics("sample_inputs/twinkle_twinkle.midi", beat_resolution=4) print("\n") # Generated Output Midi Metrics: print("The generated output midi metrics are:") get_music_metrics("outputs/output_0.mid", beat_resolution=4) # Convert The Input and Generated Midi To Tensors input_pianoroll = process_midi("sample_inputs/twinkle_twinkle.midi", beat_resolution=4) output_pianoroll = process_midi("outputs/output_0.mid", beat_resolution=4) # Plot Input Piano Roll plot_pianoroll(input_pianoroll, beat_resolution=4) # Plot Output Piano Roll plot_pianoroll(output_pianoroll, beat_resolution=4)
_____no_output_____
MIT-0
ar-cnn/ AutoRegressiveCNN.ipynb
byhqsr/aws-samples-aws-deepcomposer-samples
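The metric implementations live in the repository's helper code; as a hedged illustration only, the empty bar rate for a (time steps x pitches) piano roll array could be computed roughly like this (the function name and array layout are assumptions):

import numpy as np

def empty_bar_rate_sketch(pianoroll, timesteps_per_bar):
    """Illustrative: fraction of bars containing no active notes."""
    n_bars = pianoroll.shape[0] // timesteps_per_bar
    bars = pianoroll[:n_bars * timesteps_per_bar].reshape(n_bars, timesteps_per_bar, -1)
    return np.mean(bars.sum(axis=(1, 2)) == 0)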
Redshift fitting

Javier Sánchez, 06/09/2016

A big part of the astrophysical and cosmological information comes from geometry, i.e., we can infer many properties of our observable Universe using the positions of stars, galaxies and other objects. The sky appears to us as a 2D projection of our 3D Universe. The angular position can be inferred straightforwardly; however, determining how far away an object is from us given its angular coordinates is quite challenging, and that distance encodes very valuable information.

A simple way to compute the distance between us and a light source is by measuring its redshift $z$. If the source emits at wavelength $\lambda_{em}$ and is observed by us at wavelength $\lambda_{obs}$, $z$ is given by:

$$z = \lambda_{obs}/\lambda_{em}-1$$

We saw this in Chapter 8. A quick numerical check follows the code below.
%pylab inline import time import os import urllib2 import numpy as np import pylab as pl from matplotlib.patches import Arrow REFSPEC_URL = 'http://www.astro.washington.edu/users/ivezic/DMbook/data/1732526_nic_002.ascii' URL = 'http://www.sdss.org/dr7/instruments/imager/filters/%s.dat' def fetch_filter(filt): assert filt in 'ugriz' url = URL % filt if not os.path.exists('downloads'): os.makedirs('downloads') loc = os.path.join('downloads', '%s.dat' % filt) if not os.path.exists(loc): print "downloading from %s" % url F = urllib2.urlopen(url) open(loc, 'w').write(F.read()) F = open(loc) data = np.loadtxt(F) return data def fetch_vega_spectrum(): if not os.path.exists('downloads'): os.makedirs('downloads') refspec_file = os.path.join('downloads', REFSPEC_URL.split('/')[-1]) if not os.path.exists(refspec_file): print "downloading from %s" % REFSPEC_URL F = urllib2.urlopen(REFSPEC_URL) open(refspec_file, 'w').write(F.read()) F = open(refspec_file) data = np.loadtxt(F) return data Xref = fetch_vega_spectrum() Xref[:, 1] /= 2.1 * Xref[:, 1].max() #---------------------------------------------------------------------- # Plot filters in color with a single spectrum pl.figure() pl.plot(Xref[:, 0], Xref[:, 1], '-k', lw=2) for f,c in zip('ugriz', 'bgrmk'): X = fetch_filter(f) pl.fill(X[:, 0], X[:, 1], ec=c, fc=c, alpha=0.4) kwargs = dict(fontsize=20, ha='center', va='center', alpha=0.5) pl.text(3500, 0.02, 'u', color='b', **kwargs) pl.text(4600, 0.02, 'g', color='g', **kwargs) pl.text(6100, 0.02, 'r', color='r', **kwargs) pl.text(7500, 0.02, 'i', color='m', **kwargs) pl.text(8800, 0.02, 'z', color='k', **kwargs) pl.xlim(3000, 11000) pl.title('SDSS Filters and Reference Spectrum') pl.xlabel('Wavelength (Angstroms)') pl.ylabel('normalized flux / filter transmission') #---------------------------------------------------------------------- # Plot filters in gray with several redshifted spectra pl.figure() redshifts = [0.0, 0.4, 0.8] colors = 'bgr' for z, c in zip(redshifts, colors): pl.plot((1. + z) * Xref[:, 0], Xref[:, 1], color=c) pl.gca().add_patch(Arrow(4200, 0.47, 1300, 0, lw=0, width=0.05, color='r')) pl.gca().add_patch(Arrow(5800, 0.47, 1250, 0, lw=0, width=0.05, color='r')) pl.text(3800, 0.49, 'z = 0.0', fontsize=14, color=colors[0]) pl.text(5500, 0.49, 'z = 0.4', fontsize=14, color=colors[1]) pl.text(7300, 0.49, 'z = 0.8', fontsize=14, color=colors[2]) for f in 'ugriz': X = fetch_filter(f) pl.fill(X[:, 0], X[:, 1], ec='k', fc='k', alpha=0.2) kwargs = dict(fontsize=20, color='gray', ha='center', va='center') pl.text(3500, 0.02, 'u', **kwargs) pl.text(4600, 0.02, 'g', **kwargs) pl.text(6100, 0.02, 'r', **kwargs) pl.text(7500, 0.02, 'i', **kwargs) pl.text(8800, 0.02, 'z', **kwargs) pl.xlim(3000, 11000) pl.ylim(0, 0.55) pl.title('Redshifting of a Spectrum') pl.xlabel('Observed Wavelength (Angstroms)') pl.ylabel('normalized flux / filter transmission') pl.show()
_____no_output_____
MIT
Extra/Redshift Fitting -- Bayez.ipynb
dkirkby/astroml-study
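As a quick numerical check of the definition above (the rest-frame wavelength of H-alpha is a standard value; the observed wavelength here is chosen for illustration):

# H-alpha is emitted at 6563 Angstrom; suppose we observe it at 9188.2 Angstrom
lambda_em = 6563.0
lambda_obs = 9188.2
z = lambda_obs / lambda_em - 1
print(z) # 0.4: the line is redshifted by 40%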
Idea: Measure light at different wavelengths from the sources to determine their redshift

Spectra

If we measure the spectrum at different wavelengths with a certain resolution, we can compare it with the spectrum of an object with the same characteristics and a known redshift, and compute the redshift from the wavelength shift between the two.

Photometry

Instead of using a spectrograph, we use filters and take images of the objects to build a low-resolution spectrum and infer the redshift. Photometry has the advantage of speed: we can measure more objects simultaneously. The problem is that these objects have very low resolution spectra (5 points across the 3000-10000 Angstrom range for SDSS, DES and LSST). Spectroscopy gives a much higher resolution ($\lambda/\Delta \lambda$ ~ 1500 in BOSS at 3800 Angstroms and 2500 at 9000 Angstroms $\Rightarrow$ ~ 2.5/3.6 Angstrom pixels; 1 Angstrom pixels for DESI); the problem is that it requires more time.

Redshift fitting techniques

There are a lot of different options to retrieve the redshift information from an astronomical source. All of them have their advantages and disadvantages, which depend on the nature of the data.

For spectra, the most common technique is to compare with a collection of spectral templates and minimize a $\chi^{2}$ (a minimal sketch of this template-fitting approach appears after the linear-regression example below). For example, in SDSS-III/BOSS a PCA analysis is performed, followed by a $\chi^{2}$ minimization on the principal components (http://www.sdss.org/dr12/algorithms/redshifts/ -- http://arxiv.org/pdf/1207.7326v2.pdf).

Other approaches:

* Cross-correlation with templates
* Emission line fitting
* Pure $\chi^{2}$
* Bayesian (bayez)

For photometric redshifts there is a wider variety of methods, given that the number of inputs is lower and thus a ML approach is easier to apply:

* Artificial Neural Networks [Multilayer perceptron] (ANNz/Skynet)
* Random forests/Boosted Decision Trees (TPZ/ArborZ)
* Bayesian (BPZ)
* $\chi^{2}$ minimization using templates (LePhare)
* Nearest neighbors (KNN)
* Gaussian processes (http://arxiv.org/pdf/1505.05489v3.pdf)
* Linear regression/polynomial regression (outdated)

Examples

Linear regression
""" Photometric Redshifts via Linear Regression ------------------------------------------- Linear Regression for photometric redshifts We could use sklearn.linear_model.LinearRegression, but to be more transparent, we'll do it by hand using linear algebra. """ # Author: Jake VanderPlas # License: BSD # The figure produced by this code is published in the textbook # "Statistics, Data Mining, and Machine Learning in Astronomy" (2013) # For more information, see http://astroML.github.com # To report a bug or issue, use the following forum: # https://groups.google.com/forum/#!forum/astroml-general import itertools from sklearn.linear_model import LinearRegression from sklearn.metrics.pairwise import euclidean_distances from astroML.datasets import fetch_sdss_specgals #---------------------------------------------------------------------- # This function adjusts matplotlib settings for a uniform feel in the textbook. # Note that with usetex=True, fonts are rendered with LaTeX. This may # result in an error if LaTeX is not installed on your system. In that case, # you can set usetex to False. from astroML.plotting import setup_text_plots setup_text_plots(fontsize=8, usetex=True) np.random.seed(0) data = fetch_sdss_specgals() # put magnitudes in a matrix # with a constant (for the intercept) at position zero mag = np.vstack([np.ones(data.shape)] + [data['modelMag_%s' % f] for f in 'ugriz']).T z = data['z'] # train on ~60,000 points mag_train = mag[::10] z_train = z[::10] # test on ~6,000 distinct points mag_test = mag[1::100] z_test = z[1::100] def plot_results(z, z_fit, plotlabel=None, xlabel=True, ylabel=True): plt.scatter(z, z_fit, s=1, lw=0, c='k') plt.plot([-0.1, 0.4], [-0.1, 0.4], ':k') plt.xlim(-0.05, 0.4001) plt.ylim(-0.05, 0.4001) plt.gca().xaxis.set_major_locator(plt.MultipleLocator(0.1)) plt.gca().yaxis.set_major_locator(plt.MultipleLocator(0.1)) if plotlabel: plt.text(0.03, 0.97, plotlabel, ha='left', va='top', transform=ax.transAxes) if xlabel: plt.xlabel(r'$\rm z_{true}$') else: plt.gca().xaxis.set_major_formatter(plt.NullFormatter()) if ylabel: plt.ylabel(r'$\rm z_{fit}$') else: plt.gca().yaxis.set_major_formatter(plt.NullFormatter()) def combinations_with_replacement(iterable, r): pool = tuple(iterable) n = len(pool) for indices in itertools.product(range(n), repeat=r): if sorted(indices) == list(indices): yield tuple(pool[i] for i in indices) def poly_features(X, p): """Compute polynomial features Parameters ---------- X: array_like shape (n_samples, n_features) p: int degree of polynomial Returns ------- X_p: array polynomial feature matrix """ X = np.asarray(X) N, D = X.shape ind = list(combinations_with_replacement(range(D), p)) X_poly = np.empty((X.shape[0], len(ind))) for i in range(len(ind)): X_poly[:, i] = X[:, ind[i]].prod(1) return X_poly def gaussian_RBF_features(X, centers, widths): """Compute gaussian Radial Basis Function features Parameters ---------- X: array_like shape (n_samples, n_features) centers: array_like shape (n_centers, n_features) widths: array_like shape (n_centers, n_features) or (n_centers,) Returns ------- X_RBF: array RBF feature matrix, shape=(n_samples, n_centers) """ X, centers, widths = map(np.asarray, (X, centers, widths)) if widths.ndim == 1: widths = widths[:, np.newaxis] return np.exp(-0.5 * ((X[:, np.newaxis, :] - centers) / widths) ** 2).sum(-1) plt.figure(figsize=(10, 10)) plt.subplots_adjust(hspace=0.05, wspace=0.05, left=0.1, right=0.95, bottom=0.1, top=0.95) #---------------------------------------------------------------------- # 
# first do a simple linear regression between the r-band and redshift,
# ignoring uncertainties
ax = plt.subplot(221)
X_train = mag_train[:, [0, 3]]
X_test = mag_test[:, [0, 3]]

z_fit = LinearRegression().fit(X_train, z_train).predict(X_test)
plot_results(z_test, z_fit,
             plotlabel='Linear Regression:\n r-band', xlabel=False)

#----------------------------------------------------------------------
# next do a linear regression with all bands
ax = plt.subplot(222)

z_fit = LinearRegression().fit(mag_train, z_train).predict(mag_test)
plot_results(z_test, z_fit,
             plotlabel="Linear Regression:\n ugriz bands",
             xlabel=False, ylabel=False)

#----------------------------------------------------------------------
# next do a 3rd-order polynomial regression with all bands
ax = plt.subplot(223)

X_train = poly_features(mag_train, 3)
X_test = poly_features(mag_test, 3)

z_fit = LinearRegression().fit(X_train, z_train).predict(X_test)
plot_results(z_test, z_fit, plotlabel="3rd order Polynomial\nRegression")

#----------------------------------------------------------------------
# next do a radial basis function regression with all bands
ax = plt.subplot(224)

# remove bias term
mag = mag[:, 1:]
mag_train = mag_train[:, 1:]
mag_test = mag_test[:, 1:]

centers = mag[np.random.randint(mag.shape[0], size=100)]
centers_dist = euclidean_distances(centers, centers, squared=True)
widths = np.sqrt(centers_dist[:, :10].mean(1))

X_train = gaussian_RBF_features(mag_train, centers, widths)
X_test = gaussian_RBF_features(mag_test, centers, widths)

z_fit = LinearRegression().fit(X_train, z_train).predict(X_test)
plot_results(z_test, z_fit,
             plotlabel="Gaussian Basis Function\nRegression",
             ylabel=False)

plt.show()
/Users/javiers/anaconda/lib/python2.7/site-packages/scipy/linalg/basic.py:884: RuntimeWarning: internal gelsd driver lwork query error, required iwork dimension not returned. This is likely the result of LAPACK bug 0038, fixed in LAPACK 3.2.2 (released July 21, 2010). Falling back to 'gelss' driver. warnings.warn(mesg, RuntimeWarning)
MIT
Extra/Redshift Fitting -- Bayez.ipynb
dkirkby/astroml-study
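Before the tree-based examples, here is a hedged sketch of the template-fitting idea mentioned above: redshift a template over a grid of trial values, fit an amplitude, and keep the z that minimizes the chi-squared. Real pipelines such as the BOSS one use several PCA templates simultaneously; all names here are illustrative.

def chi2_redshift(obs_wave, obs_flux, obs_err, tmpl_wave, tmpl_flux, z_grid):
    """Illustrative template chi^2 redshift fit."""
    chi2 = np.empty(len(z_grid))
    for i, z in enumerate(z_grid):
        # redshift the template and resample it onto the observed wavelengths
        model = np.interp(obs_wave, (1.0 + z) * tmpl_wave, tmpl_flux)
        # best-fit amplitude (linear least squares)
        a = np.sum(model * obs_flux / obs_err**2) / np.sum(model**2 / obs_err**2)
        chi2[i] = np.sum(((obs_flux - a * model) / obs_err) ** 2)
    return z_grid[np.argmin(chi2)], chi2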
Decision trees
""" Photometric Redshifts by Decision Trees --------------------------------------- Figure 9.14 Photometric redshift estimation using decision-tree regression. The data is described in Section 1.5.5. The training set consists of u, g , r, i, z magnitudes of 60,000 galaxies from the SDSS spectroscopic sample. Cross-validation is performed on an additional 6000 galaxies. The left panel shows training error and cross-validation error as a function of the maximum depth of the tree. For a number of nodes N > 13, overfitting is evident. """ # Author: Jake VanderPlas # License: BSD # The figure produced by this code is published in the textbook # "Statistics, Data Mining, and Machine Learning in Astronomy" (2013) # For more information, see http://astroML.github.com # To report a bug or issue, use the following forum: # https://groups.google.com/forum/#!forum/astroml-general from sklearn.tree import DecisionTreeRegressor from astroML.datasets import fetch_sdss_specgals #---------------------------------------------------------------------- # This function adjusts matplotlib settings for a uniform feel in the textbook. # Note that with usetex=True, fonts are rendered with LaTeX. This may # result in an error if LaTeX is not installed on your system. In that case, # you can set usetex to False. from astroML.plotting import setup_text_plots setup_text_plots(fontsize=8, usetex=True) #------------------------------------------------------------ # Fetch data and prepare it for the computation data = fetch_sdss_specgals() # put magnitudes in a matrix mag = np.vstack([data['modelMag_%s' % f] for f in 'ugriz']).T z = data['z'] # train on ~60,000 points mag_train = mag[::10] z_train = z[::10] # test on ~6,000 separate points mag_test = mag[1::100] z_test = z[1::100] #------------------------------------------------------------ # Compute the cross-validation scores for several tree depths depth = np.arange(1, 21) rms_test = np.zeros(len(depth)) rms_train = np.zeros(len(depth)) i_best = 0 z_fit_best = None for i, d in enumerate(depth): clf = DecisionTreeRegressor(max_depth=d, random_state=0) clf.fit(mag_train, z_train) z_fit_train = clf.predict(mag_train) z_fit = clf.predict(mag_test) rms_train[i] = np.mean(np.sqrt((z_fit_train - z_train) ** 2)) rms_test[i] = np.mean(np.sqrt((z_fit - z_test) ** 2)) if rms_test[i] <= rms_test[i_best]: i_best = i z_fit_best = z_fit best_depth = depth[i_best] #------------------------------------------------------------ # Plot the results fig = plt.figure(figsize=(10, 5)) fig.subplots_adjust(wspace=0.25, left=0.1, right=0.95, bottom=0.15, top=0.9) # first panel: cross-validation ax = fig.add_subplot(121) ax.plot(depth, rms_test, '-k', label='cross-validation') ax.plot(depth, rms_train, '--k', label='training set') ax.set_xlabel('depth of tree') ax.set_ylabel('rms error') ax.yaxis.set_major_locator(plt.MultipleLocator(0.01)) ax.set_xlim(0, 21) ax.set_ylim(0.009, 0.04) ax.legend(loc=1) # second panel: best-fit results ax = fig.add_subplot(122) ax.scatter(z_test, z_fit_best, s=1, lw=0, c='k') ax.plot([-0.1, 0.4], [-0.1, 0.4], ':k') ax.text(0.04, 0.96, "depth = %i\nrms = %.3f" % (best_depth, rms_test[i_best]), ha='left', va='top', transform=ax.transAxes) ax.set_xlabel(r'$z_{\rm true}$') ax.set_ylabel(r'$z_{\rm fit}$') ax.set_xlim(-0.02, 0.4001) ax.set_ylim(-0.02, 0.4001) ax.xaxis.set_major_locator(plt.MultipleLocator(0.1)) ax.yaxis.set_major_locator(plt.MultipleLocator(0.1)) plt.show()
_____no_output_____
MIT
Extra/Redshift Fitting -- Bayez.ipynb
dkirkby/astroml-study
Boosted decision trees
""" Photometric Redshifts by Random Forests --------------------------------------- Figure 9.16 Photometric redshift estimation using gradient-boosted decision trees, with 100 boosting steps. As with random forests (figure 9.15), boosting allows for improved results over the single tree case (figure 9.14). Note, however, that the computational cost of boosted decision trees is such that it is computationally prohibitive to use very deep trees. By stringing together a large number of very naive estimators, boosted trees improve on the underfitting of each individual estimator. """ # Author: Jake VanderPlas # License: BSD # The figure produced by this code is published in the textbook # "Statistics, Data Mining, and Machine Learning in Astronomy" (2013) # For more information, see http://astroML.github.com # To report a bug or issue, use the following forum: # https://groups.google.com/forum/#!forum/astroml-general from sklearn.ensemble import GradientBoostingRegressor from astroML.datasets import fetch_sdss_specgals from astroML.decorators import pickle_results #---------------------------------------------------------------------- # This function adjusts matplotlib settings for a uniform feel in the textbook. # Note that with usetex=True, fonts are rendered with LaTeX. This may # result in an error if LaTeX is not installed on your system. In that case, # you can set usetex to False. from astroML.plotting import setup_text_plots setup_text_plots(fontsize=8, usetex=True) #------------------------------------------------------------ # Fetch and prepare the data data = fetch_sdss_specgals() # put magnitudes in a matrix mag = np.vstack([data['modelMag_%s' % f] for f in 'ugriz']).T z = data['z'] # train on ~60,000 points mag_train = mag[::10] z_train = z[::10] # test on ~6,000 distinct points mag_test = mag[1::100] z_test = z[1::100] #------------------------------------------------------------ # Compute the results # This is a long computation, so we'll save the results to a pickle. 
@pickle_results('photoz_boosting.pkl') def compute_photoz_forest(N_boosts): rms_test = np.zeros(len(N_boosts)) rms_train = np.zeros(len(N_boosts)) i_best = 0 z_fit_best = None for i, Nb in enumerate(N_boosts): try: # older versions of scikit-learn clf = GradientBoostingRegressor(n_estimators=Nb, learn_rate=0.1, max_depth=3, random_state=0) except TypeError: clf = GradientBoostingRegressor(n_estimators=Nb, learning_rate=0.1, max_depth=3, random_state=0) clf.fit(mag_train, z_train) z_fit_train = clf.predict(mag_train) z_fit = clf.predict(mag_test) rms_train[i] = np.mean(np.sqrt((z_fit_train - z_train) ** 2)) rms_test[i] = np.mean(np.sqrt((z_fit - z_test) ** 2)) if rms_test[i] <= rms_test[i_best]: i_best = i z_fit_best = z_fit return rms_test, rms_train, i_best, z_fit_best N_boosts = (10, 100, 200, 300, 400, 500) rms_test, rms_train, i_best, z_fit_best = compute_photoz_forest(N_boosts) best_N = N_boosts[i_best] #------------------------------------------------------------ # Plot the results fig = plt.figure(figsize=(10, 5)) fig.subplots_adjust(wspace=0.25, left=0.1, right=0.95, bottom=0.15, top=0.9) # left panel: plot cross-validation results ax = fig.add_subplot(121) ax.plot(N_boosts, rms_test, '-k', label='cross-validation') ax.plot(N_boosts, rms_train, '--k', label='training set') ax.legend(loc=1) ax.set_xlabel('number of boosts') ax.set_ylabel('rms error') ax.set_xlim(0, 510) ax.set_ylim(0.009, 0.032) ax.yaxis.set_major_locator(plt.MultipleLocator(0.01)) ax.text(0.03, 0.03, "Tree depth: 3", ha='left', va='bottom', transform=ax.transAxes) # right panel: plot best fit ax = fig.add_subplot(122) ax.scatter(z_test, z_fit_best, s=1, lw=0, c='k') ax.plot([-0.1, 0.4], [-0.1, 0.4], ':k') ax.text(0.04, 0.96, "N = %i\nrms = %.3f" % (best_N, rms_test[i_best]), ha='left', va='top', transform=ax.transAxes) ax.set_xlabel(r'$z_{\rm true}$') ax.set_ylabel(r'$z_{\rm fit}$') ax.set_xlim(-0.02, 0.4001) ax.set_ylim(-0.02, 0.4001) ax.xaxis.set_major_locator(plt.MultipleLocator(0.1)) ax.yaxis.set_major_locator(plt.MultipleLocator(0.1)) plt.show()
@pickle_results: using precomputed results from 'photoz_boosting.pkl'
MIT
Extra/Redshift Fitting -- Bayez.ipynb
dkirkby/astroml-study
KNN
""" K-Neighbors for Photometric Redshifts ------------------------------------- Estimate redshifts from the colors of sdss galaxies and quasars. This uses colors from a sample of 50,000 objects with SDSS photometry and ugriz magnitudes. The example shows how far one can get with an extremely simple machine learning approach to the photometric redshift problem. The function :func:`fetch_sdss_galaxy_colors` used below actually queries the SDSS CASjobs server for the colors of the 50,000 galaxies. """ # Author: Jake VanderPlas <[email protected]> # License: BSD # The figure is an example from astroML: see http://astroML.github.com from sklearn.neighbors import KNeighborsRegressor from astroML.plotting import scatter_contour n_neighbors=10 N = len(data) # shuffle data np.random.seed(0) np.random.shuffle(data) # put colors in a matrix X = np.zeros((N, 4)) X[:, 0] = data['modelMag_u'] - data['modelMag_g'] X[:, 1] = data['modelMag_g'] - data['modelMag_r'] X[:, 2] = data['modelMag_r'] - data['modelMag_i'] X[:, 3] = data['modelMag_i'] - data['modelMag_z'] z = data['z'] # divide into training and testing data Ntrain = N // 2 Xtrain = X[:Ntrain] ztrain = z[:Ntrain] Xtest = X[Ntrain:] ztest = z[Ntrain:] knn = KNeighborsRegressor(n_neighbors, weights='distance') zpred = knn.fit(Xtrain, ztrain).predict(Xtest) axis_lim = np.array([-0.1, 0.4]) rms = np.sqrt(np.mean((ztest - zpred) ** 2)) print("RMS error = %.2g" % rms) ax = plt.axes() plt.scatter(ztest, zpred, c='k', lw=0, s=4) plt.plot(axis_lim, axis_lim, '--k') plt.plot(axis_lim, axis_lim + rms, ':k') plt.plot(axis_lim, axis_lim - rms, ':k') plt.xlim(axis_lim) plt.ylim(axis_lim) plt.text(0.99, 0.02, "RMS error = %.2g" % rms, ha='right', va='bottom', transform=ax.transAxes, bbox=dict(ec='w', fc='w'), fontsize=16) plt.title('Photo-z: Nearest Neigbor Regression') plt.xlabel(r'$\mathrm{z_{spec}}$', fontsize=14) plt.ylabel(r'$\mathrm{z_{phot}}$', fontsize=14) plt.show()
RMS error = 0.024
MIT
Extra/Redshift Fitting -- Bayez.ipynb
dkirkby/astroml-study
Neural Network

In this case I am going to use a Recurrent Neural Network (Long Short-Term Memory). More info at: http://colah.github.io/posts/2015-08-Understanding-LSTMs/
from keras.models import Sequential model = Sequential() from keras.layers import Dense, Activation from keras.layers.recurrent import GRU, SimpleRNN from keras.layers.recurrent import LSTM from keras.layers import Embedding model.add(LSTM(64,input_dim=4, return_sequences=False, activation='tanh')) model.add(Dense(64)) model.add(Dense(32, init='normal', activation='tanh')) model.add(Dense(16, init='normal', activation='tanh')) model.add(Dense(8)) model.add(Dense(4, init='normal', activation='tanh')) model.add(Dense(1, init='normal')) model.compile(loss='mse', optimizer='rmsprop') #model.train_on_batch(X[:60000].reshape(60000,4,1), z[:60000]) batch_size=60000 model.fit(X[:batch_size].reshape(-1,1,4), z[:batch_size], batch_size=batch_size, nb_epoch=300, verbose=0, validation_split=0.5) test_size=6000 predicted_output = model.predict_on_batch(X[batch_size:batch_size+test_size].reshape(-1,1,4)) plt.hist(predicted_output) print predicted_output[:,0].shape print z.shape diff = np.sqrt((predicted_output[:1000,0]-z[batch_size:1000+batch_size])**2) plt.hist(diff, bins=100, range=(0,0.15)); np.percentile(diff,68) axis_lim = np.array([-0.1, 0.4]) rms = np.sqrt(np.mean((predicted_output - z[batch_size:batch_size+test_size]) ** 2)) print("RMS error = %.2g" % rms) ax = plt.axes() plt.scatter(z[batch_size:batch_size+test_size], predicted_output, c='k', lw=0, s=4) plt.plot(axis_lim, axis_lim, '--k') plt.plot(axis_lim, axis_lim + rms, ':k') plt.plot(axis_lim, axis_lim - rms, ':k') plt.xlim(axis_lim) plt.ylim(axis_lim) plt.text(0.99, 0.02, "RMS error = %.2g" % rms, ha='right', va='bottom', transform=ax.transAxes, bbox=dict(ec='w', fc='w'), fontsize=16) plt.title('Photo-z: Recurrent Neural Network') plt.xlabel(r'$\mathrm{z_{spec}}$', fontsize=14) plt.ylabel(r'$\mathrm{z_{phot}}$', fontsize=14) plt.show()
RMS error = 0.072
MIT
Extra/Redshift Fitting -- Bayez.ipynb
dkirkby/astroml-study
Lecture 06: Recap and overview [Download on GitHub](https://github.com/NumEconCopenhagen/lectures-2021) [Open in Binder](https://mybinder.org/v2/gh/NumEconCopenhagen/lectures-2021/master?urlpath=lab/tree/06/Examples_and_overview.ipynb)

1. [Lecture 02: Fundamentals](#Lecture-02:-Fundamentals)
2. [Lecture 03: Optimize, print and plot](#Lecture-03:-Optimize,-print-and-plot)
3. [Lecture 04: Random numbers and simulation](#Lecture-04:-Random-numbers-and-simulation)
4. [Lecture 05: Workflow and debugging](#Lectue-05:-Workflow-and-debugging)
5. [Summary](#Summary)

This lecture recaps and overviews central concepts and methods from lectures 1-5.

**Note:**

1. I will focus on answering **general questions** repeatedly asked in the survey.
2. If your **more specific questions** are not covered, ask them here: https://github.com/NumEconCopenhagen/lectures-2020/issues.
import itertools as it import numpy as np from scipy import optimize %matplotlib inline import matplotlib.pyplot as plt plt.style.use('seaborn-whitegrid')
_____no_output_____
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
1. Lecture 02: Fundamentals **Abstract:** You will be given an in-depth introduction to the **fundamentals of Python** (objects, variables, operators, classes, methods, functions, conditionals, loops). You learn to discriminate between different **types** such as integers, floats, strings, lists, tuples and dictionaries, and determine whether they are **subscriptable** (slicable) and/or **mutable**. You will learn about **referencing** and **scope**. You will learn a tiny bit about **floating point arithmetics**. 1.1 For vs. while loops **For loop**: A loop where you know beforehand when it will stop.
np.random.seed(1917) Nx = 10 x = np.random.uniform(0,1,size=(Nx,)) for i in range(Nx): print(x[i])
0.15451797797720246 0.20789496806883712 0.0027198495778043563 0.1729632542127988 0.855555830200955 0.584099749650399 0.011903025078194518 0.0682582385196221 0.24917894776796679 0.8936630858183269
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
**While loop**: A loop which continues until some condition is met.
i = 0 while i < Nx: print(x[i]) i += 1
0.15451797797720246 0.20789496806883712 0.0027198495778043563 0.1729632542127988 0.855555830200955 0.584099749650399 0.011903025078194518 0.0682582385196221 0.24917894776796679 0.8936630858183269
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
**Find first number less than 0.1:**
i = 0 while i < Nx and x[i] >= 0.1: i += 1 print(x[i])
0.0027198495778043563
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
Using a break:
i = 0
while i < Nx:
    if x[i] < 0.1:
        break
    i += 1 # only increment if no match has been found yet

print(x[i])

for i in range(Nx):
    if x[i] < 0.1:
        break

print(x[i])
0.0027198495778043563
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
**Conclusion:** When you can use a for-loop it typically gives you more simple code. 1.2 Nested loops
Nx = 5 Ny = 5 Nz = 5 x = np.random.uniform(0,1,size=(Nx)) y = np.random.uniform(0,1,size=(Ny)) z = np.random.uniform(0,1,size=(Nz)) mysum = 0 for i in range(Nx): for j in range(Ny): mysum += x[i]*y[j] print(mysum) mysum = 0 for i,j in it.product(range(Nx),range(Ny)): mysum += x[i]*y[j] print(mysum)
4.689237201743941
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
**Meshgrid:**
xmat,ymat = np.meshgrid(x,y,indexing='ij') mysum = xmat*ymat print(np.sum(mysum)) I,J = np.meshgrid(range(Nx),range(Ny),indexing='ij') mysum = x[I]*y[J] print(np.sum(mysum))
4.689237201743942
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
1.3 Classes
class Fraction:

    def __init__(self,numerator,denominator): # called when created

        self.num = numerator
        self.denom = denominator

    def __str__(self): # called when using print

        return f'{self.num}/{self.denom}' # string = self.num/self.denom

    def __add__(self,other): # called when using +

        new_num = self.num*other.denom + other.num*self.denom
        new_denom = self.denom*other.denom

        return Fraction(new_num,new_denom)

    def reduce(self):

        divisor = min(self.num,self.denom)
        while divisor >= 2:
            if self.num%divisor == 0 and self.denom%divisor == 0:
                # divide out the common factor and start over
                self.num //= divisor
                self.denom //= divisor
                divisor = min(self.num,self.denom)
            else:
                divisor -= 1
_____no_output_____
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
In `__add__` we use$$\frac{a}{b}+\frac{c}{d}=\frac{a \cdot d+c \cdot b}{b \cdot d}$$
x = Fraction(1,3)
print(x)

x = Fraction(1,3) # 1/3 = 9/27
y = Fraction(3,9) # 3/9 = 9/27
z = x+y # 9/27 + 9/27 = 18/27
print(z)
z.reduce() # 18/27 -> 2/3
print(z)
2/3
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
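As an aside, the trial-division loop in reduce() can be replaced by the built-in greatest common divisor from the standard library (math.gcd, available in Python 3.5+); a small equivalent sketch:

import math

def reduce_with_gcd(frac):
    """Equivalent to Fraction.reduce(), using math.gcd."""
    divisor = math.gcd(frac.num, frac.denom)
    frac.num //= divisor
    frac.denom //= divisor

z = Fraction(18,27)
reduce_with_gcd(z)
print(z) # 2/3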
**Check which methods a class has:**
dir(Fraction)
_____no_output_____
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
1.4 A consumer class $$\begin{aligned}V(p_{1},p_{2},I) & = \max_{x_{1},x_{2}}x_1^{\alpha}x_2^{1-\alpha}\\ \text{s.t.}\\p_{1}x_{1}+p_{2}x_{2} & \leq I,\,\,\,p_{1},p_{2},I>0\\x_{1},x_{2} & \geq 0\end{aligned}$$ **Goal:** Create a model-class to solve this problem. **Utility function:**
def u_func(model,x1,x2): return x1**model.alpha*x2**(1-model.alpha)
_____no_output_____
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
**Solution function:**
def solve(model):

    # a. objective function (to minimize)
    obj = lambda x: -model.u_func(x[0],x[1]) # minimize -> negative of utility

    # b. constraints and bounds
    con = lambda x: model.I-model.p1*x[0]-model.p2*x[1] # violated if negative
    constraints = ({'type':'ineq','fun':con})
    bounds = ((0,model.I/model.p1),(0,model.I/model.p2))

    # c. call solver
    x0 = [(model.I/model.p1)/2,(model.I/model.p2)/2]
    sol = optimize.minimize(obj,x0,method='SLSQP',bounds=bounds,constraints=constraints)

    # d. save
    model.x1 = sol.x[0]
    model.x2 = sol.x[1]
    model.u = model.u_func(model.x1,model.x2)
_____no_output_____
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
**Create consumer class:**
class ConsumerClass: def __init__(self): self.alpha = 0.5 self.p1 = 1 self.p2 = 2 self.I = 10 u_func = u_func solve = solve
_____no_output_____
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
**Solve consumer problem**:
jeppe = ConsumerClass() jeppe.alpha = 0.75 jeppe.solve() print(f'(x1,x2) = ({jeppe.x1:.3f},{jeppe.x2:.3f}), u = {jeppe.u:.3f}')
(x1,x2) = (7.500,1.250), u = 4.792
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
Easy to loop over:
for alpha in np.linspace(0.1,0.9,10): jeppe.alpha = alpha jeppe.solve() print(f'alpha = {alpha:.3f} -> (x1,x2) = ({jeppe.x1:.3f},{jeppe.x2:.3f}), u = {jeppe.u:.3f}')
alpha = 0.100 -> (x1,x2) = (1.000,4.500), u = 3.872 alpha = 0.189 -> (x1,x2) = (1.890,4.055), u = 3.510 alpha = 0.278 -> (x1,x2) = (2.778,3.611), u = 3.357 alpha = 0.367 -> (x1,x2) = (3.667,3.167), u = 3.342 alpha = 0.456 -> (x1,x2) = (4.554,2.723), u = 3.442 alpha = 0.544 -> (x1,x2) = (5.446,2.277), u = 3.661 alpha = 0.633 -> (x1,x2) = (6.331,1.834), u = 4.020 alpha = 0.722 -> (x1,x2) = (7.221,1.389), u = 4.569 alpha = 0.811 -> (x1,x2) = (8.111,0.945), u = 5.404 alpha = 0.900 -> (x1,x2) = (9.001,0.499), u = 6.741
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
2. Lecture 03: Optimize, print and plot **Abstract:** You will learn how to work with numerical data (**numpy**) and solve simple numerical optimization problems (**scipy.optimize**) and report the results both in text (**print**) and in figures (**matplotlib**). 2.1 Numpy
x = np.random.uniform(0,1,size=6) print(x)
[0.50162377 0.58786823 0.6692749 0.67937905 0.87084325 0.30623102]
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
Consider the following code with loop:
y = np.empty(x.size*2) for i in range(x.size): y[i] = x[i] for i in range(x.size): y[x.size + i] = x[i] print(y)
[0.50162377 0.58786823 0.6692749 0.67937905 0.87084325 0.30623102 0.50162377 0.58786823 0.6692749 0.67937905 0.87084325 0.30623102]
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
**Horizontal extension of vector** (more columns)
y = np.tile(x,2) # tiling (same x repeated)
print(y)

y = np.hstack((x,x)) # stacking
print(y)

y = np.insert(x,0,x) # insert vector at place 0
print(y)

y = np.insert(x,6,x) # insert vector at place 6
print(y)
print(y.shape)
[0.50162377 0.58786823 0.6692749 0.67937905 0.87084325 0.30623102 0.50162377 0.58786823 0.6692749 0.67937905 0.87084325 0.30623102] (12,)
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
**Vertical extension of vector** (more rows)
y = np.vstack((x,x)) # stacking
print(y)
print(y.shape)

z = y.ravel() # flatten back to a vector
print(z)
print(z.shape)

y_ = np.tile(x,2) # tiling (same x repeated)
print(y_)
print(y_.shape)
print('')

y = np.reshape(y_,(2,6))
print(y)
print(y.shape)

y_ = np.repeat(x,2) # repeat each element
print(y_)
print('')

y__ = np.reshape(y_,(6,2))
print(y__)
print('')

y = np.transpose(y__)
print(y)
[0.50162377 0.50162377 0.58786823 0.58786823 0.6692749 0.6692749 0.67937905 0.67937905 0.87084325 0.87084325 0.30623102 0.30623102] [[0.50162377 0.50162377] [0.58786823 0.58786823] [0.6692749 0.6692749 ] [0.67937905 0.67937905] [0.87084325 0.87084325] [0.30623102 0.30623102]] [[0.50162377 0.58786823 0.6692749 0.67937905 0.87084325 0.30623102] [0.50162377 0.58786823 0.6692749 0.67937905 0.87084325 0.30623102]]
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
2.2 Numpy vs. dictionary vs. list vs. tuple
x_np = np.zeros(0) x_list = [] x_dict = {} x_tuple = ()
_____no_output_____
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
1. If your data is **numeric**, and is changing on the fly, use **numpy**
2. If your data is **heterogeneous**, and is changing on the fly, use a **list** or a **dictionary**
3. If your data is **fixed**, use a **tuple**

2.3 Optimizers

All **optimization problems** are characterized by:

1. Control vector (choices), $\boldsymbol{x} \in \mathbb{R}^k$
2. Objective function (payoff) to minimize, $f:\mathbb{R}^k \rightarrow \mathbb{R}$ (differentiable or not)
3. Constraints, i.e. $\boldsymbol{x} \in C \subseteq \mathbb{R}^k$ (linear or non-linear interdependence)

**Maximization** is just **minimization** of $-f$.

All **optimizers** (minimizers) have the following steps:

1. Make initial guess
2. Evaluate the function (and perhaps gradients)
3. Check for convergence
4. Update guess and return to step 2

**Convergence:** "Small" change in function value since last iteration or zero gradient.

**Characteristics** of optimizers:

1. Use gradients or not.
2. Allow for specifying bounds.
3. Allow for specifying general constraints.

**Gradients** provide useful information, but can be costly to compute (using an analytical formula or numerically; see the small check after the function definition below).

2.4 Loops vs. optimizer

**Define function:**
def f(x): return np.sin(x)+0.05*x**2
_____no_output_____
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
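As a small check of the numerical-gradient idea mentioned above, a forward difference closely matches the analytical derivative $f^\prime(x) = \cos(x) + 0.1x$:

Delta = 1e-8
x0 = 2.0
fp_numerical = (f(x0+Delta)-f(x0))/Delta # forward difference
fp_analytical = np.cos(x0) + 0.1*x0
print(f'numerical: {fp_numerical:.6f}, analytical: {fp_analytical:.6f}')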
**Solution with loop:**
N = 100
x_vec = np.linspace(-10,10,N)
f_vec = np.empty(N)

f_best = np.inf # initial best value (we are minimizing, so start at infinity)
x_best = np.nan # not-a-number

for i,x in enumerate(x_vec):
    f_now = f_vec[i] = f(x)
    if f_now < f_best:
        x_best = x
        f_best = f_now

print(f'best with loop is {f_best:.8f} at x = {x_best:.8f}')
best with loop is -0.88366802 at x = -1.51515152
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
**Solution with scipy optimize:**
x_guess = [0] obj = lambda x: f(x[0]) res = optimize.minimize(obj, x_guess, method='Nelder-Mead') x_best_scipy = res.x[0] f_best_scipy = res.fun print(f'best with scipy.optimize is {f_best_scipy:.8f} at x = {x_best_scipy:.8f}')
best with scipy.optimize is -0.88786283 at x = -1.42756250
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
**Link:** [Scipy on the choice of optimizer](https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html) **Comparison:**
fig = plt.figure() ax = fig.add_subplot(1,1,1) ax.plot(x_vec,f_vec,ls='--',lw=2,color='black',label='$f(x)$') ax.plot(x_best,f_best,ls='',marker='s',label='loop') ax.plot(x_best_scipy,f_best_scipy,ls='',marker='o', markeredgecolor='red',label='scipy.optimize') ax.set_xlabel('x') ax.set_ylabel('f') ax.legend(loc='upper center');
_____no_output_____
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
2.5 Gradient descent optimizer **Algorithm:** `minimize_gradient_descent()`1. Choose tolerance $\epsilon>0$, step size $\alpha > 0$, and guess on $x_0$, set $n=0$.2. Compute $f(x_n)$ and $f^\prime(x_n) \approx \frac{f(\boldsymbol{x}_{n}+\Delta)-f(\boldsymbol{x}_{n})}{\Delta}$.3. If $|f^\prime(x_n)| < \epsilon$ then stop.4. Compute new guess "down the hill": $$ \boldsymbol{x}_{n+1} = \boldsymbol{x}_{n} - \alpha f^\prime(x_n) $$5. Set $n = n + 1$ and return to step 2. **Code for algorithm:**
def gradient_descent(f,x0,alpha=1,Delta=1e-8,max_iter=500,eps=1e-8):
    """ minimize function with gradient descent

    Args:

        f (callable): function
        x0 (float): initial value
        alpha (float,optional): step size factor in search
        Delta (float,optional): step size in numerical derivative
        max_iter (int,optional): maximum number of iterations
        eps (float,optional): tolerance

    Returns:

        x (float): minimum
        fx (float): function value at minimum
        trials (list): list of dicts with keys 'x', 'fx' and 'fp'

    """

    # step 1: initialize
    x = x0
    n = 0
    trials = []

    # step 2-4:
    while n < max_iter:

        # step 2: compute function value and derivative
        fx = f(x)
        fp = (f(x+Delta)-fx)/Delta
        trials.append({'x':x,'fx':fx,'fp':fp})

        # step 3: check convergence
        print(f'n = {n:3d}: x = {x:12.8f}, f = {fx:12.8f}, fp = {fp:12.8f}')
        if np.abs(fp) < eps:
            break

        # step 4: update x and n
        x -= alpha*fp
        n += 1

    return x,fx,trials
_____no_output_____
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
**Call the optimizer:**
x0 = 0 alpha = 0.5 x,fx,trials = gradient_descent(f,x0,alpha) print(f'best with gradient_descent is {fx:.8f} at x = {x:.8f}')
n = 0: x = 0.00000000, f = 0.00000000, fp = 1.00000000 n = 1: x = -0.50000000, f = -0.46692554, fp = 0.82758257 n = 2: x = -0.91379128, f = -0.75007422, fp = 0.51936899 n = 3: x = -1.17347578, f = -0.85324884, fp = 0.26960144 n = 4: x = -1.30827650, f = -0.88015974, fp = 0.12868722 n = 5: x = -1.37262011, f = -0.88622298, fp = 0.05961955 n = 6: x = -1.40242989, f = -0.88751934, fp = 0.02732913 n = 7: x = -1.41609445, f = -0.88779134, fp = 0.01247611 n = 8: x = -1.42233251, f = -0.88784799, fp = 0.00568579 n = 9: x = -1.42517540, f = -0.88785975, fp = 0.00258927 n = 10: x = -1.42647003, f = -0.88786219, fp = 0.00117876 n = 11: x = -1.42705941, f = -0.88786269, fp = 0.00053655 n = 12: x = -1.42732769, f = -0.88786280, fp = 0.00024420 n = 13: x = -1.42744979, f = -0.88786282, fp = 0.00011114 n = 14: x = -1.42750536, f = -0.88786283, fp = 0.00005058 n = 15: x = -1.42753065, f = -0.88786283, fp = 0.00002303 n = 16: x = -1.42754217, f = -0.88786283, fp = 0.00001048 n = 17: x = -1.42754741, f = -0.88786283, fp = 0.00000477 n = 18: x = -1.42754979, f = -0.88786283, fp = 0.00000218 n = 19: x = -1.42755088, f = -0.88786283, fp = 0.00000099 n = 20: x = -1.42755137, f = -0.88786283, fp = 0.00000043 n = 21: x = -1.42755159, f = -0.88786283, fp = 0.00000021 n = 22: x = -1.42755170, f = -0.88786283, fp = 0.00000010 n = 23: x = -1.42755175, f = -0.88786283, fp = 0.00000004 n = 24: x = -1.42755177, f = -0.88786283, fp = 0.00000001 n = 25: x = -1.42755177, f = -0.88786283, fp = 0.00000002 n = 26: x = -1.42755179, f = -0.88786283, fp = 0.00000000 best with gradient_descent is -0.88786283 at x = -1.42755179
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
**Illustration:**
fig = plt.figure(figsize=(10,10)) # a. main figure ax = fig.add_subplot(2,2,(1,2)) trial_x_vec = [trial['x'] for trial in trials] trial_f_vec = [trial['fx'] for trial in trials] trial_fp_vec = [trial['fp'] for trial in trials] ax.plot(x_vec,f_vec,ls='--',lw=2,color='black',label='$f(x)$') ax.plot(trial_x_vec,trial_f_vec,ls='',marker='s',ms=4,color='blue',label='iterations') ax.set_xlabel('$x$') ax.set_ylabel('$f$') ax.legend(loc='upper center') # sub figure 1 ax = fig.add_subplot(2,2,3) ax.plot(np.arange(len(trials)),trial_x_vec) ax.set_xlabel('iteration') ax.set_ylabel('x') # sub figure 2 ax = fig.add_subplot(2,2,4) ax.plot(np.arange(len(trials)),trial_fp_vec) ax.set_xlabel('iteration') ax.set_ylabel('derivative of f');
_____no_output_____
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
3. Lecture 04: Random numbers and simulation

**Abstract:** You will learn how to use a random number generator with a seed and produce simulation results (**numpy.random**, **scipy.stats**), and calculate the expected value of a random variable through Monte Carlo integration (a minimal example follows the baseline code below). You will learn how to save your results for later use (**pickle**). Finally, you will learn how to make your figures interactive (**ipywidgets**).

**Baseline code:**
def f(x,y): return (np.var(x)-np.var(y))**2 np.random.seed(1917) x = np.random.normal(0,1,size=100) print(f'mean(x) = {np.mean(x):.3f}') for sigma in [0.5,1.0,0.5]: y = np.random.normal(0,sigma,size=x.size) print(f'sigma = {sigma:2f}: f = {f(x,y):.4f}')
mean(x) = -0.007 sigma = 0.500000: f = 0.5522 sigma = 1.000000: f = 0.0001 sigma = 0.500000: f = 0.4985
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
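As a minimal example of the Monte Carlo integration mentioned in the abstract, the expected value $\mathbb{E}[g(x)]$ for $x \sim \mathcal{N}(0,1)$ can be approximated by a sample average. Here $g(x) = \max(x,0)$, whose true mean is $1/\sqrt{2\pi} \approx 0.3989$:

np.random.seed(1917)
x = np.random.normal(0,1,size=10**6)
g = np.fmax(x,0) # elementwise max(x,0)
print(f'MC estimate of E[max(x,0)] = {np.mean(g):.4f}') # analytical value is 0.3989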
**Question:** How can we make the loop give the same result for the same value of `sigma`? **Option 1:** Reset seed
np.random.seed(1917) x = np.random.normal(0,1,size=100) print(f'var(x) = {np.var(x):.3f}') for sigma in [0.5,1.0,0.5]: np.random.seed(1918) y = np.random.normal(0,sigma,size=x.size) print(f'sigma = {sigma:2f}: f = {f(x,y):.4f}')
var(x) = 0.951 sigma = 0.500000: f = 0.4908 sigma = 1.000000: f = 0.0025 sigma = 0.500000: f = 0.4908
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021
**BAD SOLUTION:** Never reset the seed. Variables `x` and `y` are not ensured to be random relative to each other with this method. **Option 2:** Set and get state
np.random.seed(1917) x = np.random.normal(0,1,size=100) print(f'var(x) = {np.var(x):.3f}') state = np.random.get_state() for sigma in [0.5,1.0,0.5]: np.random.set_state(state) y = np.random.normal(0,sigma,size=x.size) print(f'sigma = {sigma:2f}: f = {f(x,y):.4f}')
var(x) = 0.951 sigma = 0.500000: f = 0.5522 sigma = 1.000000: f = 0.0143 sigma = 0.500000: f = 0.5522
MIT
web/06/Examples_and_overview.ipynb
Jovansam/lectures-2021