Now let's split the data into a training set and a test set:
test_ratio = 0.2
test_size = int(m * test_ratio)
X_train = X_moons_with_bias[:-test_size]
X_test = X_moons_with_bias[-test_size:]
y_train = y_moons_column_vector[:-test_size]
y_test = y_moons_column_vector[-test_size:]
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
OK, now let's create a small function to generate training batches. In this implementation we will just pick random instances from the training set for each batch. This means that a single batch may contain the same instance multiple times, and a single epoch may not cover all the training instances (in fact it will generally cover only about two thirds of them). However, in practice this is not an issue and it simplifies the code:
def random_batch(X_train, y_train, batch_size):
    rnd_indices = np.random.randint(0, len(X_train), batch_size)
    X_batch = X_train[rnd_indices]
    y_batch = y_train[rnd_indices]
    return X_batch, y_batch
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
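Where does "about two thirds" come from? A quick back-of-the-envelope calculation (added here for clarity, not part of the original notebook): an epoch draws roughly $m$ random indices out of $m$ training instances, so the probability that a given instance is never picked is

$$\left(1 - \frac{1}{m}\right)^{m} \approx e^{-1} \approx 0.37,$$

which means an epoch covers about $1 - e^{-1} \approx 63\%$ of the training set.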
Let's look at a small batch:
X_batch, y_batch = random_batch(X_train, y_train, 5)
X_batch
y_batch
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Great! Now that the data is ready to be fed to the model, we need to build that model. Let's start with a simple implementation, then we will add all the bells and whistles. First let's reset the default graph.
reset_graph()
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
The _moons_ dataset has two input features, since each instance is a point on a plane (i.e., 2-Dimensional):
n_inputs = 2
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Now let's build the Logistic Regression model. As we saw in chapter 4, this model first computes a weighted sum of the inputs (just like the Linear Regression model), and then it applies the sigmoid function to the result, which gives us the estimated probability for the positive class:

$\hat{p} = h_\mathbf{\theta}(\mathbf{x}) = \sigma(\mathbf{\theta}^T \cdot \mathbf{x})$

Recall that $\mathbf{\theta}$ is the parameter vector, containing the bias term $\theta_0$ and the weights $\theta_1, \theta_2, \dots, \theta_n$. The input vector $\mathbf{x}$ contains a constant term $x_0 = 1$, as well as all the input features $x_1, x_2, \dots, x_n$.

Since we want to be able to make predictions for multiple instances at a time, we will use an input matrix $\mathbf{X}$ rather than a single input vector. The $i^{th}$ row will contain the transpose of the $i^{th}$ input vector $(\mathbf{x}^{(i)})^T$. It is then possible to estimate the probability that each instance belongs to the positive class using the following equation:

$ \hat{\mathbf{p}} = \sigma(\mathbf{X} \cdot \mathbf{\theta})$

That's all we need to build the model:
X = tf.placeholder(tf.float32, shape=(None, n_inputs + 1), name="X")
y = tf.placeholder(tf.float32, shape=(None, 1), name="y")
theta = tf.Variable(tf.random_uniform([n_inputs + 1, 1], -1.0, 1.0, seed=42), name="theta")
logits = tf.matmul(X, theta, name="logits")
y_proba = 1 / (1 + tf.exp(-logits))
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
In fact, TensorFlow has a nice function `tf.sigmoid()` that we can use to simplify the last line of the previous code:
y_proba = tf.sigmoid(logits)
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
As we saw in chapter 4, the log loss is a good cost function to use for Logistic Regression:

$J(\mathbf{\theta}) = -\dfrac{1}{m} \sum\limits_{i=1}^{m}{\left[ y^{(i)} \log\left(\hat{p}^{(i)}\right) + (1 - y^{(i)}) \log\left(1 - \hat{p}^{(i)}\right)\right]}$

One option is to implement it ourselves:
epsilon = 1e-7  # to avoid an overflow when computing the log
loss = -tf.reduce_mean(y * tf.log(y_proba + epsilon) + (1 - y) * tf.log(1 - y_proba + epsilon))
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
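As a quick sanity check (not in the original notebook), the hand-rolled formula can be compared with scikit-learn's `log_loss` on a few made-up labels and probabilities; the arrays below are purely illustrative.

```python
import numpy as np
from sklearn.metrics import log_loss  # assumes scikit-learn is installed

y_true = np.array([1., 0., 1., 1.])      # hypothetical labels
p_hat = np.array([0.9, 0.2, 0.6, 0.3])   # hypothetical estimated probabilities

eps = 1e-7
manual = -np.mean(y_true * np.log(p_hat + eps) + (1 - y_true) * np.log(1 - p_hat + eps))
print(manual, log_loss(y_true, p_hat))   # the two values should agree to several decimals
```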
But we might as well use TensorFlow's `tf.losses.log_loss()` function:
loss = tf.losses.log_loss(y, y_proba) # uses epsilon = 1e-7 by default
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
The rest is pretty standard: let's create the optimizer and tell it to minimize the cost function:
learning_rate = 0.01

optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
All we need now (in this minimal version) is the variable initializer:
init = tf.global_variables_initializer()
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
And we are ready to train the model and use it for predictions! There's really nothing special about this code, it's virtually the same as the one we used earlier for Linear Regression:
n_epochs = 1000
batch_size = 50
n_batches = int(np.ceil(m / batch_size))

with tf.Session() as sess:
    sess.run(init)

    for epoch in range(n_epochs):
        for batch_index in range(n_batches):
            X_batch, y_batch = random_batch(X_train, y_train, batch_size)
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        loss_val = loss.eval({X: X_test, y: y_test})
        if epoch % 100 == 0:
            print("Epoch:", epoch, "\tLoss:", loss_val)

    y_proba_val = y_proba.eval(feed_dict={X: X_test, y: y_test})
Epoch: 0 Loss: 0.792602 Epoch: 100 Loss: 0.343463 Epoch: 200 Loss: 0.30754 Epoch: 300 Loss: 0.292889 Epoch: 400 Loss: 0.285336 Epoch: 500 Loss: 0.280478 Epoch: 600 Loss: 0.278083 Epoch: 700 Loss: 0.276154 Epoch: 800 Loss: 0.27552 Epoch: 900 Loss: 0.274912
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Note: we don't use the epoch number when generating batches, so we could just have a single `for` loop rather than 2 nested `for` loops, but it's convenient to think of training time in terms of number of epochs (i.e., roughly the number of times the algorithm went through the training set). For each instance in the test set, `y_proba_val` contains the estimated probability that it belongs to the positive class, according to the model. For example, here are the first 5 estimated probabilities:
y_proba_val[:5]
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
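As the note says, the nested loops are only there for epoch bookkeeping. A single-loop variant would be a drop-in replacement for the loop body of the training cell above (a sketch, assuming the same `sess`, `init`, `random_batch()` and tensors defined earlier):

```python
n_iterations = n_epochs * n_batches
for iteration in range(n_iterations):
    X_batch, y_batch = random_batch(X_train, y_train, batch_size)
    sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
    if iteration % (100 * n_batches) == 0:   # roughly every 100 "epochs"
        print("Iteration:", iteration, "\tLoss:", loss.eval({X: X_test, y: y_test}))
```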
To classify each instance, we can go for maximum likelihood: classify as positive any instance whose estimated probability is greater than or equal to 0.5:
y_pred = (y_proba_val >= 0.5)
y_pred[:5]
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Depending on the use case, you may want to choose a different threshold than 0.5: make it higher if you want high precision (but lower recall), and make it lower if you want high recall (but lower precision). See chapter 3 for more details. Let's compute the model's precision and recall:
from sklearn.metrics import precision_score, recall_score

precision_score(y_test, y_pred)
recall_score(y_test, y_pred)
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
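If you do want a different threshold, scikit-learn's `precision_recall_curve` shows the precision/recall trade-off for every candidate threshold. A sketch using the probabilities computed above (the 0.90 precision target is just an example):

```python
from sklearn.metrics import precision_recall_curve

precisions, recalls, thresholds = precision_recall_curve(y_test.ravel(), y_proba_val.ravel())

# lowest threshold that still reaches at least 90% precision (illustrative target)
candidates = thresholds[precisions[:-1] >= 0.90]
if len(candidates) > 0:
    threshold_90 = candidates.min()
    y_pred_90 = (y_proba_val >= threshold_90)
```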
Let's plot these predictions to see what they look like:
y_pred_idx = y_pred.reshape(-1)  # a 1D array rather than a column vector
plt.plot(X_test[y_pred_idx, 1], X_test[y_pred_idx, 2], 'go', label="Positive")
plt.plot(X_test[~y_pred_idx, 1], X_test[~y_pred_idx, 2], 'r^', label="Negative")
plt.legend()
plt.show()
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Well, that looks pretty bad, doesn't it? But let's not forget that the Logistic Regression model has a linear decision boundary, so this is actually close to the best we can do with this model (unless we add more features, as we will show in a second).

Now let's start over, but this time we will add all the bells and whistles, as listed in the exercise:
* Define the graph within a `logistic_regression()` function that can be reused easily.
* Save checkpoints using a `Saver` at regular intervals during training, and save the final model at the end of training.
* Restore the last checkpoint upon startup if training was interrupted.
* Define the graph using nice scopes so the graph looks good in TensorBoard.
* Add summaries to visualize the learning curves in TensorBoard.
* Try tweaking some hyperparameters such as the learning rate or the mini-batch size and look at the shape of the learning curve.

Before we start, we will add 4 more features to the inputs: ${x_1}^2$, ${x_2}^2$, ${x_1}^3$ and ${x_2}^3$. This was not part of the exercise, but it will demonstrate how adding features can improve the model. We will do this manually, but you could also add them using `sklearn.preprocessing.PolynomialFeatures`.
X_train_enhanced = np.c_[X_train,
                         np.square(X_train[:, 1]),
                         np.square(X_train[:, 2]),
                         X_train[:, 1] ** 3,
                         X_train[:, 2] ** 3]
X_test_enhanced = np.c_[X_test,
                        np.square(X_test[:, 1]),
                        np.square(X_test[:, 2]),
                        X_test[:, 1] ** 3,
                        X_test[:, 2] ** 3]
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
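For reference, here is roughly what the `PolynomialFeatures` alternative mentioned above would look like (a sketch, not used in the rest of the notebook; note that degree 3 also adds interaction terms such as $x_1 x_2$, which the manual version does not):

```python
from sklearn.preprocessing import PolynomialFeatures

poly = PolynomialFeatures(degree=3, include_bias=False)
# column 0 already holds the bias term, so only transform the two real features
X_train_poly = np.c_[X_train[:, 0:1], poly.fit_transform(X_train[:, 1:3])]
X_test_poly = np.c_[X_test[:, 0:1], poly.transform(X_test[:, 1:3])]
```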
This is what the "enhanced" training set looks like:
X_train_enhanced[:5]
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Ok, next let's reset the default graph:
reset_graph()
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Now let's define the `logistic_regression()` function to create the graph. We will leave out the definition of the inputs `X` and the targets `y`. We could include them here, but leaving them out will make it easier to use this function in a wide range of use cases (e.g. perhaps we will want to add some preprocessing steps for the inputs before we feed them to the Logistic Regression model).
def logistic_regression(X, y, initializer=None, seed=42, learning_rate=0.01):
    n_inputs_including_bias = int(X.get_shape()[1])
    with tf.name_scope("logistic_regression"):
        with tf.name_scope("model"):
            if initializer is None:
                initializer = tf.random_uniform([n_inputs_including_bias, 1], -1.0, 1.0, seed=seed)
            theta = tf.Variable(initializer, name="theta")
            logits = tf.matmul(X, theta, name="logits")
            y_proba = tf.sigmoid(logits)
        with tf.name_scope("train"):
            loss = tf.losses.log_loss(y, y_proba, scope="loss")
            optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
            training_op = optimizer.minimize(loss)
            loss_summary = tf.summary.scalar('log_loss', loss)
        with tf.name_scope("init"):
            init = tf.global_variables_initializer()
        with tf.name_scope("save"):
            saver = tf.train.Saver()
    return y_proba, loss, training_op, loss_summary, init, saver
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Let's create a little function to get the name of the log directory to save the summaries for TensorBoard:
from datetime import datetime

def log_dir(prefix=""):
    now = datetime.utcnow().strftime("%Y%m%d%H%M%S")
    root_logdir = "tf_logs"
    if prefix:
        prefix += "-"
    name = prefix + "run-" + now
    return "{}/{}/".format(root_logdir, name)
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Next, let's create the graph, using the `logistic_regression()` function. We will also create the `FileWriter` to save the summaries to the log directory for TensorBoard:
n_inputs = 2 + 4
logdir = log_dir("logreg")

X = tf.placeholder(tf.float32, shape=(None, n_inputs + 1), name="X")
y = tf.placeholder(tf.float32, shape=(None, 1), name="y")

y_proba, loss, training_op, loss_summary, init, saver = logistic_regression(X, y)

file_writer = tf.summary.FileWriter(logdir, tf.get_default_graph())
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
At last we can train the model! We will start by checking whether a previous training session was interrupted, and if so we will load the checkpoint and continue training from the epoch number we saved. In this example we just save the epoch number to a separate file, but in chapter 11 we will see how to store the training step directly as part of the model, using a non-trainable variable called `global_step` that we pass to the optimizer's `minimize()` method.

You can try interrupting training to verify that it does indeed restore the last checkpoint when you start it again.
n_epochs = 10001
batch_size = 50
n_batches = int(np.ceil(m / batch_size))

checkpoint_path = "/tmp/my_logreg_model.ckpt"
checkpoint_epoch_path = checkpoint_path + ".epoch"
final_model_path = "./my_logreg_model"

with tf.Session() as sess:
    if os.path.isfile(checkpoint_epoch_path):
        # if the checkpoint file exists, restore the model and load the epoch number
        with open(checkpoint_epoch_path, "rb") as f:
            start_epoch = int(f.read())
        print("Training was interrupted. Continuing at epoch", start_epoch)
        saver.restore(sess, checkpoint_path)
    else:
        start_epoch = 0
        sess.run(init)

    for epoch in range(start_epoch, n_epochs):
        for batch_index in range(n_batches):
            X_batch, y_batch = random_batch(X_train_enhanced, y_train, batch_size)
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        loss_val, summary_str = sess.run([loss, loss_summary], feed_dict={X: X_test_enhanced, y: y_test})
        file_writer.add_summary(summary_str, epoch)
        if epoch % 500 == 0:
            print("Epoch:", epoch, "\tLoss:", loss_val)
            saver.save(sess, checkpoint_path)
            with open(checkpoint_epoch_path, "wb") as f:
                f.write(b"%d" % (epoch + 1))

    saver.save(sess, final_model_path)
    y_proba_val = y_proba.eval(feed_dict={X: X_test_enhanced, y: y_test})
    os.remove(checkpoint_epoch_path)
Epoch: 0 Loss: 0.629985 Epoch: 500 Loss: 0.161224 Epoch: 1000 Loss: 0.119032 Epoch: 1500 Loss: 0.0973292 Epoch: 2000 Loss: 0.0836979 Epoch: 2500 Loss: 0.0743758 Epoch: 3000 Loss: 0.0675021 Epoch: 3500 Loss: 0.0622069 Epoch: 4000 Loss: 0.0580268 Epoch: 4500 Loss: 0.054563 Epoch: 5000 Loss: 0.0517083 Epoch: 5500 Loss: 0.0492377 Epoch: 6000 Loss: 0.0471673 Epoch: 6500 Loss: 0.0453766 Epoch: 7000 Loss: 0.0438187 Epoch: 7500 Loss: 0.0423742 Epoch: 8000 Loss: 0.0410892 Epoch: 8500 Loss: 0.0399709 Epoch: 9000 Loss: 0.0389202 Epoch: 9500 Loss: 0.0380107 Epoch: 10000 Loss: 0.0371557
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
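For reference, the `global_step` approach mentioned above (and covered in chapter 11) boils down to something like the following sketch; the variable is non-trainable and is incremented automatically by `minimize()`, so it gets saved and restored along with the rest of the model:

```python
global_step = tf.Variable(0, trainable=False, name="global_step")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss, global_step=global_step)
# after restoring a checkpoint, sess.run(global_step) tells you where training left off
```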
Once again, we can make predictions by just classifying as positive all the instances whose estimated probability is greater than or equal to 0.5:
y_pred = (y_proba_val >= 0.5)

precision_score(y_test, y_pred)
recall_score(y_test, y_pred)

y_pred_idx = y_pred.reshape(-1)  # a 1D array rather than a column vector
plt.plot(X_test[y_pred_idx, 1], X_test[y_pred_idx, 2], 'go', label="Positive")
plt.plot(X_test[~y_pred_idx, 1], X_test[~y_pred_idx, 2], 'r^', label="Negative")
plt.legend()
plt.show()
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Now that's much, much better! Apparently the new features really helped a lot. Try starting the TensorBoard server, finding the latest run and looking at the learning curve (i.e., how the loss evaluated on the test set evolves as a function of the epoch number):

```
$ tensorboard --logdir=tf_logs
```

Now you can play around with the hyperparameters (e.g. the `batch_size` or the `learning_rate`) and run training again and again, comparing the learning curves. You can even automate this process by implementing grid search or randomized search. Below is a simple implementation of a randomized search on both the batch size and the learning rate. For the sake of simplicity, the checkpoint mechanism was removed.
from scipy.stats import reciprocal n_search_iterations = 10 for search_iteration in range(n_search_iterations): batch_size = np.random.randint(1, 100) learning_rate = reciprocal(0.0001, 0.1).rvs(random_state=search_iteration) n_inputs = 2 + 4 logdir = log_dir("logreg") print("Iteration", search_iteration) print(" logdir:", logdir) print(" batch size:", batch_size) print(" learning_rate:", learning_rate) print(" training: ", end="") reset_graph() X = tf.placeholder(tf.float32, shape=(None, n_inputs + 1), name="X") y = tf.placeholder(tf.float32, shape=(None, 1), name="y") y_proba, loss, training_op, loss_summary, init, saver = logistic_regression( X, y, learning_rate=learning_rate) file_writer = tf.summary.FileWriter(logdir, tf.get_default_graph()) n_epochs = 10001 n_batches = int(np.ceil(m / batch_size)) final_model_path = "./my_logreg_model_%d" % search_iteration with tf.Session() as sess: sess.run(init) for epoch in range(n_epochs): for batch_index in range(n_batches): X_batch, y_batch = random_batch(X_train_enhanced, y_train, batch_size) sess.run(training_op, feed_dict={X: X_batch, y: y_batch}) loss_val, summary_str = sess.run([loss, loss_summary], feed_dict={X: X_test_enhanced, y: y_test}) file_writer.add_summary(summary_str, epoch) if epoch % 500 == 0: print(".", end="") saver.save(sess, final_model_path) print() y_proba_val = y_proba.eval(feed_dict={X: X_test_enhanced, y: y_test}) y_pred = (y_proba_val >= 0.5) print(" precision:", precision_score(y_test, y_pred)) print(" recall:", recall_score(y_test, y_pred))
Iteration 0 logdir: tf_logs/logreg-run-20171017023201/ batch size: 54 learning_rate: 0.00443037524522 training: ..................... precision: 0.979797979798 recall: 0.979797979798 Iteration 1 logdir: tf_logs/logreg-run-20171017023408/ batch size: 22 learning_rate: 0.00178264971514 training: ..................... precision: 0.979797979798 recall: 0.979797979798 Iteration 2 logdir: tf_logs/logreg-run-20171017024015/ batch size: 74 learning_rate: 0.00203228544324 training: ..................... precision: 0.969696969697 recall: 0.969696969697 Iteration 3 logdir: tf_logs/logreg-run-20171017024240/ batch size: 58 learning_rate: 0.00449152382514 training: ..................... precision: 0.979797979798 recall: 0.979797979798 Iteration 4 logdir: tf_logs/logreg-run-20171017024543/ batch size: 61 learning_rate: 0.0796323472178 training: ..................... precision: 0.980198019802 recall: 1.0 Iteration 5 logdir: tf_logs/logreg-run-20171017024839/ batch size: 92 learning_rate: 0.000463425058329 training: ..................... precision: 0.912621359223 recall: 0.949494949495 Iteration 6 logdir: tf_logs/logreg-run-20171017025008/ batch size: 74 learning_rate: 0.0477068184194 training: ..................... precision: 0.98 recall: 0.989898989899 Iteration 7 logdir: tf_logs/logreg-run-20171017025145/ batch size: 58 learning_rate: 0.000169404470952 training: ..................... precision: 0.9 recall: 0.909090909091 Iteration 8 logdir: tf_logs/logreg-run-20171017025352/ batch size: 61 learning_rate: 0.0417146119941 training: ..................... precision: 0.980198019802 recall: 1.0 Iteration 9 logdir: tf_logs/logreg-run-20171017025548/ batch size: 92 learning_rate: 0.000107429229684 training: ..................... precision: 0.882352941176 recall: 0.757575757576
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Problem set 2: Finding the Walras equilibrium in a multi-agent economy [](https://mybinder.org/v2/gh/NumEconCopenhagen/exercises-2020/master?urlpath=lab/tree/PS2/problem_set_2.ipynb)
%load_ext autoreload %autoreload 2
_____no_output_____
MIT
Magnus/Problem sets/PS2/problem_set_2.ipynb
NumEconCopenhagen/projects-2022-git-good
Tasks Drawing random numbers Replace the missing lines in the code below to get the same output as in the answer.
import numpy as np
np.random.seed(1986)
# Define state, which makes sure that the code is randomized.
state = np.random.get_state()
for i in range(3):
    # Reset the random state three times, because the range is 3. The state makes sure that if we change the range
    # we will not change the random numbers generated in the first numbers of the range.
    np.random.set_state(state)
    for j in range(2):
        x = np.random.uniform()
        print(f'({i},{j}): x = {x:.3f}')
(0,0): x = 0.569 (0,1): x = 0.077 (1,0): x = 0.569 (1,1): x = 0.077 (2,0): x = 0.569 (2,1): x = 0.077
MIT
Magnus/Problem sets/PS2/problem_set_2.ipynb
NumEconCopenhagen/projects-2022-git-good
**Answer:** See A1.py

Find the expected value

Find the expected value and the expected variance

$$ \mathbb{E}[g(x)] \approx \frac{1}{N}\sum_{i=1}^{N} g(x_i)$$

$$ \mathbb{VAR}[g(x)] \approx \frac{1}{N}\sum_{i=1}^{N} \left( g(x_i) - \frac{1}{N}\sum_{i=1}^{N} g(x_i) \right)^2$$

where $ x_i \sim \mathcal{N}(0,\sigma) $ and

$$ g(x,\omega)=\begin{cases}x & \text{if }x\in[-\omega,\omega]\\-\omega & \text{if }x<-\omega\\\omega & \text{if }x>\omega\end{cases} $$
sigma = 3.14
omega = 2
N = 10000

np.random.seed(1986)
# Set state
state = np.random.get_state()
np.random.set_state(state)

# Define x as a normal distribution
x = np.random.normal(loc=0, scale=sigma, size=N)

# Define function g(x,omega)
def g_function(x, omega):
    # g_function has to give the value g. Because x is an array changes in g must not affect x.
    g = x.copy()
    # We describe the conditions in the function.
    g[x < -omega] = -omega
    g[x > omega] = omega
    # Define what the function has to return, in this case the value which is given by the condition.
    return g

# Calculate mean and variance
mean = np.mean(g_function(x, omega))
variance = np.var(g_function(x - mean, omega))

# Print the results
print(f'mean = {mean:.5f} variance = {variance:.5f}')
mean = -0.00264 variance = 2.69804
MIT
Magnus/Problem sets/PS2/problem_set_2.ipynb
NumEconCopenhagen/projects-2022-git-good
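For comparison (not part of the original answer), the two estimators can be written directly from the formulas using `np.clip` for $g$; note that the cell above clips the demeaned draws when computing the variance, so its number can differ slightly from this literal version:

```python
g = np.clip(x, -omega, omega)        # g(x, omega) as defined above
mean_g = np.mean(g)                  # estimator of E[g(x)]
var_g = np.mean((g - mean_g)**2)     # estimator of VAR[g(x)]
print(f'mean = {mean_g:.5f} variance = {var_g:.5f}')
```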
**Answer:** See A2.py

Interactive histogram

**First task:** Consider the code below. Fill in the missing lines so the figure is plotted.
# a. import import math import pickle import numpy as np from scipy.stats import norm # normal distribution %matplotlib inline import matplotlib.pyplot as plt plt.style.use('seaborn-whitegrid') import ipywidgets as widgets # b. plotting figure def fitting_normal(X,mu_guess,sigma_guess): # i. normal distribution from guess F = norm(loc=mu_guess,scale=sigma_guess) # ii. x-values x_low = F.ppf(0.001) x_high = F.ppf(0.999) x = np.linspace(x_low,x_high,100) # iii. figure fig = plt.figure(dpi=100) ax = fig.add_subplot(1,1,1) ax.plot(x,F.pdf(x),lw=2) ax.hist(X,bins=100,density=True,histtype='stepfilled'); ax.set_ylim([0,0.5]) ax.set_xlim([-6,6]) # c. parameters mu_true = 2 sigma_true = 1 mu_guess = 1 sigma_guess = 2 # d. random draws X = np.random.normal(loc=mu_true,scale=sigma_true,size=10**6) # e. figure try: fitting_normal(X,mu_guess,sigma_guess) except: print('failed')
_____no_output_____
MIT
Magnus/Problem sets/PS2/problem_set_2.ipynb
NumEconCopenhagen/projects-2022-git-good
**Second task:** Create an interactive version of the figure with sliders for $\mu$ and $\sigma$.
# Write out which arguments to interactive_figure you want to be changing or staying fixed
widgets.interact(fitting_normal,
    X=widgets.fixed(X),
    mu_guess=widgets.FloatSlider(description="$\mu$", min=-5, max=5, step=1, value=1),
    sigma_guess=widgets.FloatSlider(description="$\sigma$", min=0.1, max=10, step=0.1, value=2)
);
_____no_output_____
MIT
Magnus/Problem sets/PS2/problem_set_2.ipynb
NumEconCopenhagen/projects-2022-git-good
**Answer:** See A3.py

Modules

1. Call the function `myfun` from the module `mymodule` present in this folder.
2. Open VSCode and open `mymodule.py`, add a new function and call it from this notebook.
import mymodule as mm from mymodule import myfun mm.myfun(1) mm.gitgood(1)
hello world! Git Good!
MIT
Magnus/Problem sets/PS2/problem_set_2.ipynb
NumEconCopenhagen/projects-2022-git-good
**Answer:** See A4.py

Git

1. Try to go to your own personal GitHub main page and create a new repository. Then put your solution to this problem set in it.
2. Pair up with a fellow student. Clone each other's repositories and run the code in them.

**IMPORTANT:** You will need **git** for the data project in a few weeks. Better learn it now. Remember that the teaching assistants are there to help you.

Problem

Consider an **exchange economy** with

1. 2 goods, $(x_1,x_2)$
2. $N$ consumers indexed by $j \in \{1,2,\dots,N\}$
3. Preferences are Cobb-Douglas with truncated normally distributed *heterogeneous* coefficients

$$ \begin{aligned} u^{j}(x_{1},x_{2}) & = x_{1}^{\alpha_{j}}x_{2}^{1-\alpha_{j}}\\ & \tilde{\alpha}_{j}\sim\mathcal{N}(\mu,\sigma)\\ & \alpha_j = \max(\underline{\mu},\min(\overline{\mu},\tilde{\alpha}_{j})) \end{aligned} $$

4. Endowments are *heterogeneous* and given by

$$ \begin{aligned} \boldsymbol{e}^{j} &= (e_{1}^{j},e_{2}^{j}) \\ e_i^j &\sim f, \quad f(x,\beta_i) = 1/\beta_i \exp(-x/\beta_i) \end{aligned} $$

**Problem:** Write a function to solve for the equilibrium. You can use the following parameters:
# a. parameters N = 10000 mu = 0.5 sigma = 0.2 mu_low = 0.1 mu_high = 0.9 beta1 = 1.3 beta2 = 2.1 seed = 1986 # b. draws of random numbers np.random.seed(seed) alphatilde = np.random.normal(loc=mu, scale=sigma, size=N) alpha = np.fmax(mu_low,np.fmin(mu_high, alphatilde)) e1 = np.random.exponential(scale=beta1, size=N) e2 = np.random.exponential(scale=beta2, size=N) # c. demand function def demand_good_1_func(alpha, p1, p2, e1, e2): I = e1*p1+e2*p2 return alpha*I/p1 # d. excess demand function def excess_demand_func(alpha, p1, p2, e1, e2): # Define aggregate supply and demand for good 1 demand = np.sum(demand_good_1_func(alpha, p1, p2, e1, e2)) supply = sum(e1) # Excess demand is demand supply subtracted from demand excess_demand = demand - supply return excess_demand # e. find equilibrium function def find_equilibrium(alphas, p1, p2, e1, e2, kappa=0.5, eps=1e-8, maxiter=500): t = 0 # using a while loop as we don't know number of iterations a priori while True: # a. step 1: excess demand Z1 = excess_demand_func(alpha, p1, p2, e1, e2) # b: step 2: stop? if np.abs(Z1) < eps or t >= maxiter: print(f'{t:3d}: p1 = {p1:12.8f} -> excess demand -> {Z1:14.8f}') break # c. step 3: update p1 p1 = p1 + kappa*Z1/alphas.size # d. step 4: print only every 25th iteration using the modulus operator if t < 5 or t%25 == 0: print(f'{t:3d}: p1 = {p1:12.8f} -> excess demand -> {Z1:14.8f}') elif t == 5: print(' ...') t += 1 return p1 # f. call find equilibrium function p1 = 1.8 p2 = 1 kappa = 0.5 eps = 1e-8 find_equilibrium(alpha,p1,p2,e1,e2,kappa=kappa,eps=eps)
0: p1 = 1.76747251 -> excess demand -> -650.54980224 1: p1 = 1.74035135 -> excess demand -> -542.42310867 2: p1 = 1.71789246 -> excess demand -> -449.17798560 3: p1 = 1.69940577 -> excess demand -> -369.73361992 4: p1 = 1.68426754 -> excess demand -> -302.76467952 ... 25: p1 = 1.62115861 -> excess demand -> -3.00036721 50: p1 = 1.62056537 -> excess demand -> -0.01087860 75: p1 = 1.62056322 -> excess demand -> -0.00003940 100: p1 = 1.62056321 -> excess demand -> -0.00000014 112: p1 = 1.62056321 -> excess demand -> -0.00000001
MIT
Magnus/Problem sets/PS2/problem_set_2.ipynb
NumEconCopenhagen/projects-2022-git-good
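The demand function used in the code above is the standard Cobb-Douglas result (restated here as a reminder; it is not part of the original problem text): maximizing $u^j$ subject to the budget constraint $p_1 x_1 + p_2 x_2 = I^j$, with income $I^j = p_1 e_1^j + p_2 e_2^j$, gives

$$ x_1^{j\ast} = \alpha_j \frac{I^j}{p_1}, \qquad x_2^{j\ast} = (1-\alpha_j)\frac{I^j}{p_2}, $$

which is exactly what `demand_good_1_func` computes.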
**Hint:** The code structure is exactly the same as for the exchange economy considered in the lecture. The code for solving that exchange economy is reproduced in condensed form below.
# a. parameters N = 1000 k = 2 mu_low = 0.1 mu_high = 0.9 seed = 1986 # b. draws of random numbers np.random.seed(seed) alphas = np.random.uniform(low=mu_low,high=mu_high,size=N) # c. demand function def demand_good_1_func(alpha,p1,p2,k): I = k*p1+p2 return alpha*I/p1 # d. excess demand function def excess_demand_good_1_func(alphas,p1,p2,k): # a. demand demand = np.sum(demand_good_1_func(alphas,p1,p2,)) # b. supply supply = k*alphas.size # c. excess demand excess_demand = demand-supply return excess_demand # e. find equilibrium function def find_equilibrium(alphas,p1,p2,k,kappa=0.5,eps=1e-8,maxiter=500): t = 0 while True: # a. step 1: excess demand Z1 = excess_demand_good_1_func(alphas,p1,p2,k) # b: step 2: stop? if np.abs(Z1) < eps or t >= maxiter: print(f'{t:3d}: p1 = {p1:12.8f} -> excess demand -> {Z1:14.8f}') break # c. step 3: update p1 p1 = p1 + kappa*Z1/alphas.size # d. step 4: return if t < 5 or t%25 == 0: print(f'{t:3d}: p1 = {p1:12.8f} -> excess demand -> {Z1:14.8f}') elif t == 5: print(' ...') t += 1 return p1 # e. call find equilibrium function p1 = 1.4 p2 = 1 kappa = 0.1 eps = 1e-8 p1 = find_equilibrium(alphas,p1,p2,k,kappa=kappa,eps=eps)
0: p1 = 1.33690689 -> excess demand -> -630.93108302 1: p1 = 1.27551407 -> excess demand -> -613.92820358 2: p1 = 1.21593719 -> excess demand -> -595.76882769 3: p1 = 1.15829785 -> excess demand -> -576.39340748 4: p1 = 1.10272273 -> excess demand -> -555.75114178 ... 25: p1 = 0.53269252 -> excess demand -> -53.80455643 50: p1 = 0.50897770 -> excess demand -> -0.27125769 75: p1 = 0.50886603 -> excess demand -> -0.00120613 100: p1 = 0.50886553 -> excess demand -> -0.00000536 125: p1 = 0.50886553 -> excess demand -> -0.00000002 130: p1 = 0.50886553 -> excess demand -> -0.00000001
MIT
Magnus/Problem sets/PS2/problem_set_2.ipynb
NumEconCopenhagen/projects-2022-git-good
**Answers:** See A5.py

Save and load

Consider the code below and fill in the missing lines so the code can run without any errors.
import pickle

# a. create some data
my_data = {}
my_data['A'] = {'a':1,'b':2}
my_data['B'] = np.array([1,2,3])
my_data['C'] = (1,4,2)

my_np_data = {}
my_np_data['D'] = np.array([1,2,3])
my_np_data['E'] = np.zeros((5,8))
my_np_data['F'] = np.ones((7,3,8))

# c. save with pickle
with open(f'data.p', 'wb') as f:
    pickle.dump(my_data, f)

# d. save with numpy
np.savez(f'data.npz', **my_np_data)

# a. try
def load_all():
    with open(f'data.p', 'rb') as f:
        data = pickle.load(f)
        A = data['A']
        B = data['B']
        C = data['C']

    with np.load(f'data.npz') as data:
        D = data['D']
        E = data['E']
        F = data['F']

    print('variables loaded without error')

try:
    load_all()
except:
    print('failed')
variables loaded without error
MIT
Magnus/Problem sets/PS2/problem_set_2.ipynb
NumEconCopenhagen/projects-2022-git-good
**Answer:** See A6.py

Extra Problems

Multiple goods

Solve the main problem extended with multiple goods:

$$ \begin{aligned} u^{j}(x_{1},\dots,x_{M}) & = x_{1}^{\alpha^1_{j}} \cdot x_{2}^{\alpha^2_{j}} \cdots x_{M}^{\alpha^M_{j}}\\ & \alpha_j = [\alpha^1_{j},\alpha^2_{j},\dots,\alpha^M_{j}] \\ & \log(\alpha_j) \sim \mathcal{N}(0,\Sigma) \\ \end{aligned} $$

where $\Sigma$ is a valid covariance matrix.
# a. choose parameters N = 10000 J = 3 # b. choose Sigma Sigma_lower = np.array([[1, 0, 0], [0.5, 1, 0], [0.25, -0.5, 1]]) Sigma_upper = Sigma_lower.T Sigma = Sigma_upper@Sigma_lower print(Sigma) # c. draw random numbers alphas = np.exp(np.random.multivariate_normal(np.zeros(J), Sigma, 10000)) print(np.mean(alphas,axis=0)) print(np.corrcoef(alphas.T)) def demand_good_1_func(alpha,p1,p2,k): I = k*p1+p2 return alpha*I/p1 def demand_good_2_func(alpha,p1,p2,k): I = k*p1+p2 return (1-alpha)*I/p2 def excess_demand_good_1_func(alphas,p1,p2,k): # a. demand demand = np.sum(demand_good_1_func(alphas,p1,p2,k)) # b. supply supply = k*alphas.size # c. excess demand excess_demand = demand-supply return excess_demand def excess_demand_good_2_func(alphas,p1,p2,k): # a. demand demand = np.sum(demand_good_2_func(alphas,p1,p2,k)) # b. supply supply = alphas.size # c. excess demand excess_demand = demand-supply return excess_demand def find_equilibrium(alphas,p1_guess,p2,k,kappa=0.5,eps=1e-8,maxiter=500): t = 0 p1 = p1_guess # using a while loop as we don't know number of iterations a priori while True: # a. step 1: excess demand Z1 = excess_demand_good_1_func(alphas,p1,p2,k) # b: step 2: stop? if np.abs(Z1) < eps or t >= maxiter: print(f'{t:3d}: p1 = {p1:12.8f} -> excess demand -> {Z1:14.8f}') break # c. step 3: update p1 p1 = p1 + kappa*Z1/alphas.size # d. step 4: print only every 25th iteration using the modulus operator if t < 5 or t%25 == 0: print(f'{t:3d}: p1 = {p1:12.8f} -> excess demand -> {Z1:14.8f}') elif t == 5: print(' ...') t += 1 return p1
[[ 1.3125 0.375 0.25 ] [ 0.375 1.25 -0.5 ] [ 0.25 -0.5 1. ]] [1.91709082 1.91100849 1.63670693] [[ 1. 0.19955924 0.15149459] [ 0.19955924 1. -0.16150109] [ 0.15149459 -0.16150109 1. ]]
MIT
Magnus/Problem sets/PS2/problem_set_2.ipynb
NumEconCopenhagen/projects-2022-git-good
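For the multi-good extension, the Cobb-Douglas demands generalize as follows (a sketch of the algebra needed to finish the extra problem; since the drawn $\alpha^m_j$ no longer sum to one, they have to be normalized):

$$ x_m^{j\ast} = \frac{\alpha^m_j}{\sum_{k=1}^{M}\alpha^k_j}\,\frac{I^j}{p_m}, \qquad I^j = \sum_{k=1}^{M} p_k e_k^j. $$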
Cleaning the Philippine Standard Geographic Code Dataset
import pandas as pd import xlrd import re
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Import the PSGC Excel file.

The Philippine Statistics Authority publishes an updated PSGC file every quarter in the form of an Excel file. The latest link is here: https://psa.gov.ph/classification/psgc/
psgc_excel = pd.read_excel("data/raw/PSGC_Publication_Sept2018.xlsx",sheet_name="PSGC") psgc_excel.to_csv('data/raw/raw-psgc.csv.gz',encoding="utf-8",compression="gzip") psgc = pd.read_csv('data/raw/raw-psgc.csv.gz',encoding="utf-8") psgc.info()
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Convert "Code" column to a string and ensure it has leading zeros and is 9-char long.
psgc.loc[:,"Code"] = psgc.Code.astype(str).str.zfill(9)
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Drop unused columns:
psgc = psgc.loc[:,['Code','Name','Inter-Level']]
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Normalize column names
psgc.columns = ['code','location','interlevel'] psgc.head() psgc['interlevel'].value_counts() psgc.head()
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Create a duplicate of the original PSGC dataframe
og_psgc = psgc.copy()
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Helpers

We see that a lot of the locations in the PSGC have alternate names or aliases contained in parentheses. Let's create a regular expression pattern that will extract these as aliases and append them as additional rows to each subset of the data.
extract_in_paren = re.compile(r'\(+([^\(\)]+)\)*')
remove_in_paren = "\(.+\)"

def expand_in_paren(df):
    ''' Denotes original locations '''
    df['original'] = True

    ''' Creates a copy of the rows that contain parentheses or have aliases. '''
    has_paren = df[df.location.str.contains("[\(\)]")]
    has_paren['original'] = False

    ''' Splits locations that contain parentheses into two elements -- what's before the parentheses,
    and what's within them. Each of these items is treated as a separate possible alias and appended
    to the original dataset. '''
    for i in [0,1]:
        aliases = has_paren.copy()
        aliases['location'] = has_paren.location.str.replace("\)","").str.split("\(").str.get(i).str.strip()
        df = df.append(aliases, ignore_index=True)

    return df.sort_values(by=["code","original"]).reset_index(drop=True)
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
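To make the two patterns defined above concrete, here is a tiny illustration on a made-up location string (the string is hypothetical, not taken from the PSGC file; `expand_in_paren` itself relies on pandas string methods rather than these compiled patterns, but the idea is the same):

```python
sample = "CITY OF SOMEWHERE (SAMPLE ALIAS)"          # hypothetical example
print(re.findall(extract_in_paren, sample))           # -> ['SAMPLE ALIAS']
print(re.sub(remove_in_paren, "", sample).strip())    # -> 'CITY OF SOMEWHERE'
```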
Clean regions
regions = psgc[psgc['interlevel'] == 'Reg'].copy()
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Alternate names appear inside parens, so we expand those out as additional alias rows.
regions = expand_in_paren(regions) regions
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Clean provinces
provinces = psgc[psgc['interlevel'] == 'Prov'].copy() provinces.head()
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Seems normal... But let's check for parens just in case:
provinces[provinces['location'].str.contains('[\(\)]')]
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Sneaky alternate names!
provinces = expand_in_paren(provinces) provinces
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Clean districts
districts = psgc[psgc['interlevel'] == 'Dist'].copy() districts
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
No one writes `NTH DISTRICT (Not a Province)` in their addresses. Let's remove these instances altogether rather than extract these as aliases.
districts['location'] = (districts['location'] .str.replace('\(Not a Province\)', '') .str.strip() .str.split(',',n=1) .str.get(1)) districts
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Clean municipalities
municipalities = psgc[psgc['interlevel'] == 'Mun'].copy()
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Checking for alternate names in parentheses:
municipalities[municipalities['location'].str.contains('[\(\)]')]
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
In some cases the word "Capital" is contained in parentheses, but this is not an alias. Safe to strip!
municipalities['location'] = municipalities['location'].str.replace('\(Capital\)', '').str.strip() municipalities municipalities = expand_in_paren(municipalities) municipalities.head(30)
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Clean cities
cities = psgc[psgc['interlevel'] == 'City'].copy() cities.head(30)
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Here we go with the `(Capital)` thing again.
cities['location'] = cities['location'].str.replace('\(Capital\)', '').str.strip()
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Checking if there is still stuff with parens:
cities[cities['location'].str.contains('[\(\)]')].head()
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
A few alternate names!
cities = expand_in_paren(cities) cities
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Now what about those `CITY` pre/suffixes?
cities[cities['location'].str.contains('CITY')]
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Let's strip any prefixes of "CITY OF" and suffixes of "CITY."
cities['location'] = (cities['location']
                      .str.replace('^.*CITY OF', '')  # stripping prefixes
                      .str.strip()
                      .str.replace('CITY$', '')       # stripping suffixes
                      .str.strip())
cities
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Clean sub-municipalities

Manila is the only city-slash-district that has submunicipalities.
sub_municipalities = psgc[psgc['interlevel'] == 'SubMun'].copy() sub_municipalities
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Nothing special! Clean barangays
barangays = psgc[psgc['interlevel'] == 'Bgy'].copy() barangays
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
We see alternate names again but notice the `(Pob.)` suffixes. A quick Google search shows that it's short for `Poblacion` which is used to denote the commercial and industrial center of a city. Let's stash those and add them as aliases
barangays_pob = barangays[barangays.location.str.contains('\(Pob.\)')].copy()

barangays['location'] = (barangays['location']
                         .str.replace('(\(Pob\.\))', '')  # totally do away with any poblacion suffixes
                         .str.strip())
barangays['location'].head(30)
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
How many other barangay names contain parentheses?
barangays[barangays.location.str.contains(r'[\(\)]')]
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
While parentheses often contain aliases, sometimes, these are not aliases but the name of the municipality in which the barangay is located. For example, barangays in the municipality of Dumalneg have the `(Dumalneg)` denoted in parentheses. We'll go ahead and extract parenthetical names as aliases for now, but we'll later remove instances in which aliases are equal to the municipality name.
barangays = expand_in_paren(barangays)
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Let's check for more weird characters:
barangays[barangays['location'].str.contains(r'[^a-zA-Z0-9\sÑñ\(\)]')]
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Let's extract the strings that follow a "Brgy No. X" as aliases.
pat_barangay = re.compile('(B[gr]y. No. \d+\-?\w?),? (.+)') len(barangays[barangays.location.str.contains(pat_barangay)]) def expand_barangays(df): ''' Denotes original locations ''' df['original'] = True ''' Creates a copy of the rows that contain barangay pattern ''' matches_pattern = df[df.location.str.contains(pat_barangay)] matches_pattern['original'] = False ''' Splits locations that into two elements -- Brgy No X and the name that comes after it Each of these items is treated as a separate possible alias and appended to the original datasete ''' for i in [0,1]: aliases = matches_pattern.copy() aliases['location'] = matches_pattern.location.str.extract(pat_barangay)[i]#.str.get(i).str.strip() aliases['location'] = aliases['location'].str.strip() df = df.append(aliases,ignore_index=True) return df.sort_values(by=["code","original"]).reset_index(drop=True) #print len(barangays) barangays = expand_barangays(barangays) #print len(barangays) barangays.head()
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Add barangays that are `Poblacion` as aliases
barangays_pob['original'] = False barangays = barangays.append(barangays_pob, ignore_index=True) barangays[barangays.code == '012801001']
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Last check!
barangays.info() barangays[barangays.code == "012812026"]
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
ARMM: Cotabato and Isabela City
armm = psgc[psgc['interlevel'].isnull()].copy() armm armm['location'] = armm['location'].str.replace('\(Not a Province\)', '') armm armm['location'] = (armm['location'] .str.replace('^.*CITY OF', '') .str.strip() .str.replace('CITY$', '') .str.strip()) armm armm['original'] = True armm
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
All together now
merged = pd.concat([ regions, provinces, districts, municipalities, cities, sub_municipalities, barangays, armm ],ignore_index=True).sort_index().fillna('') merged.info()
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Are counts still correct?
psgc['interlevel'].value_counts() merged['interlevel'].value_counts() merged.code.nunique(), psgc.code.nunique()
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Normalize numbers:
spanish = merged[merged['location'].str.contains(' (UNO|DOS|TRES|KUATRO|SINGKO)$',case=False)].copy() spanish for i, s in enumerate([ 'Uno', 'Dos', 'Tres', 'Kuatro', 'Singko', ]): spanish['location'] = spanish['location'].str.replace(' {}$'.format(s), ' {}'.format(i + 1)) spanish spanish['original'] = False spanish roman = merged[merged['location'].str.contains('\s(X{0,3})(IX|IV|V?I{0,3})$')].copy() for i, s in enumerate('I,II,III,IV,V,VI,VII,VIII,IX,X,XI,XII,XIII,XIV,XV,XVI,XVII,XVIII,XIX,XX,XXI,XXII'.split(',')): roman['location'] = roman['location'].str.replace(' {}$'.format(s), ' {}'.format(i + 1)) roman['original'] = False roman
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Provide alternate names for locations with President names
president = merged[merged.location.str.contains('PRES\.', flags=re.IGNORECASE)].copy() president['location'] = president['location'].str.replace('^PRES\.', 'PRESIDENT') president['location'] = president['location'].str.replace('^Pres\.', 'President') president['original'] = False president
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Add alternative names to Metro Manila
metro_manila = pd.DataFrame([{"code":"130000000","interlevel":"Reg","location":"Metro Manila","original":False}, {"code":"130000000","interlevel":"Reg","location":"Metropolitan Manila","original":False}]) metro_manila
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Add Ñ -> N as an alternate name
merged[merged.location.str.contains('Las Piñas',case=False)] enye = merged[merged.location.str.contains(r'[Ññ]')].copy() enye.head() enye['location'] = (enye['location'].str.replace('Ñ', 'N') .str.replace('ñ','n')) enye.head()
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Concat the alternates to the main dataframe
clean_psgc = (pd.concat([merged, spanish, roman, president], ignore_index=True) .sort_values('code') .reset_index(drop=True))
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Last check for weird stuff!
clean_psgc[clean_psgc['location'].str.contains('[^a-zA-Z0-9 \-.,\']')]
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
We can probably still split with `&` and `/` but this is good enough for now. Combine the cleaned up PSGC and remove the duplicates
clean_psgc.drop_duplicates(subset=['code', 'location', 'interlevel'], inplace=True) clean_psgc.reset_index(drop=True).sort_values('code', inplace=True)
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Check that we have both the original name and the alternate ones
clean_psgc[clean_psgc.code.str.contains('086000000')] clean_psgc[clean_psgc.code.str.contains('012801001')] clean_psgc.info()
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Cleaning out rows in which the alternate name of the barangay was just the name of its parent municipality or city
clean_psgc['municipality_code'] = clean_psgc.code.str.slice(0,6)+"000" clean_psgc['municipality'] = clean_psgc['municipality_code'].map(municipalities[municipalities.original==True].set_index('code').location) clean_psgc.head(10) clean_psgc['drop'] = (clean_psgc.municipality == clean_psgc.location.str.upper()) & (clean_psgc.interlevel == "Bgy") barangay_and_muni_same_name = clean_psgc.groupby('code').drop.value_counts().unstack()[False][clean_psgc.groupby('code').drop.value_counts().unstack()[False].isnull()].index clean_psgc.loc[clean_psgc.code.isin(barangay_and_muni_same_name),"drop"] = False clean_psgc[clean_psgc.code == '013301034'] clean_psgc = clean_psgc.loc[clean_psgc['drop'] ==False,['code','interlevel','location','original']].reset_index(drop=True) clean_psgc[clean_psgc.code == "133900000"]
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Create aliases for Legazpi and Ozamiz
zplaces = clean_psgc[clean_psgc.location.str.upper().isin(["LEGAZPI","OZAMIZ"])].copy() zplaces.loc[:,'location'] = ["LEGASPI","OZAMIS"] zplaces clean_psgc = clean_psgc.append(zplaces,ignore_index=True) clean_psgc clean_psgc.to_csv('data/processed/clean-psgc.csv.gz', index=False, compression='gzip')
_____no_output_____
MIT
Cleaning_PSGC.ipynb
thinkingmachines/psgc
Structural Topic Model

This R notebook reproduces the Structural Topic Model used in Step 1 of the Computational Grounded Theory project.

Note: This notebook produces the model and then saves it. Producing the model can take quite a bit of time to run, upwards of four hours. To explore the topic models produced, skip directly to the next notebook, `02-TopicExploration.ipynb`.

Requirements and Dependencies

Model created using R 3.4.0

Main library: stm_1.2.2

Dependencies:
* tm_0.7-1
* NLP_0.1-10
* SnowballC_0.5.1
library(stm)

### Load Data
df <- read.csv('../data/comparativewomensmovement_dataset.csv', sep='\t')

##Pre-Processing
temp <- textProcessor(documents=df$text_string, metadata=df)
meta <- temp$meta
vocab <- temp$vocab
docs <- temp$documents
out <- prepDocuments(docs, vocab, meta)
docs <- out$documents
vocab <- out$vocab
meta <- out$meta

##Produce Models
### Model search across numbers of topics
storage <- manyTopics(docs, vocab, K=c(20,30,40,50), prevalence=~org, data=meta, seed = 1234)
mod.20 <- storage$out[[1]]
mod.30 <- storage$out[[2]]
mod.40 <- storage$out[[3]]
mod.50 <- storage$out[[4]]

##Save Full Model, with four different topic models saved
save.image("../data/stm_all.RData")
_____no_output_____
BSD-3-Clause
01-Step1-PatternDetection/.ipynb_checkpoints/01-StructuralTopicModel-checkpoint.ipynb
cxomni/computational-grounded-theory
Introduction

The bike has 20 gears, which are the categories/labels of the classification. The features are cadence and speed, taken from the training app's data. We train our model with data sets for all 20 gears (i.e., 20 TCX files loaded with labeled observations).
from sklearn.model_selection import train_test_split
from src.regression import validate_lin_reg
from src.tcx import Tcx, COLUMN_NAME_SPEED, COLUMN_NAME_WATTS, COLUMN_NAME_CADENCE
from src.test_data import TrainDataSet
from src.visu import plot2d
import matplotlib.pyplot as plt

tcx_app_gear7: Tcx = Tcx.read_tcx(file_path='test/tcx/cadence_1612535177298-gear7.tcx')
tcx_app_gear20: Tcx = Tcx.read_tcx(file_path='test/tcx/cadence_1612535671464-gear20.tcx')
tcx_tacx_gear7: Tcx = Tcx.read_tcx(file_path='test/tcx/tacx-activity_6225123072-gear7-resistance3.tcx')
tcx_tacx_gear20: Tcx = Tcx.read_tcx(file_path='test/tcx/tacx-activity_6225123072-gear7-resistance3.tcx')

# generate test data
dts_gear7: TrainDataSet = TrainDataSet(tcx_app_gear7)
dts_gear20: TrainDataSet = TrainDataSet(tcx_app_gear20)
dts_tacx_gear7: TrainDataSet = TrainDataSet(tcx_tacx_gear7)
_____no_output_____
MIT
regression_by_cadence.ipynb
bruennijs/indoor-virtual-power-prediction
Problem

Find the cadence for the gear that the tacx data set was recorded with. The app data will measure speed, and a linear regression model of the same gear predicts the cadence from that speed. A second linear regression model maps cadence to power of the tacx data set.

Solution

Train (app data)
* X of gear _n_ in app data set: [speed]
* Y -> [cadence]

Linear model
from sklearn.linear_model import LinearRegression X_train, y_train = dts_gear7.cadence_to_speed() lr_app_gear7 = LinearRegression().fit(X_train, y_train)
_____no_output_____
MIT
regression_by_cadence.ipynb
bruennijs/indoor-virtual-power-prediction
Train (tacx)
* X of gear _n_ in tacx data set: [cadence]
* Y -> [power]

Analyze

Let us first plot the features to see which regression model fits best.
X, y = dts_tacx_gear7.cadence_to_power() plot2d(X.iloc[:,0], y, point_color='red', legend_label='gear 7 (tacx)') plt.show()
_____no_output_____
MIT
regression_by_cadence.ipynb
bruennijs/indoor-virtual-power-prediction
Linear model
from sklearn.linear_model import LinearRegression lr_tacx_gear7 = LinearRegression().fit(X, y)
_____no_output_____
MIT
regression_by_cadence.ipynb
bruennijs/indoor-virtual-power-prediction
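Putting the two pieces together, the chain described in the problem statement (speed → cadence via the app model, cadence → power via the tacx model) would look roughly like this; the speed value is made up for illustration and assumes the same unit as the training data:

```python
import numpy as np

measured_speed = np.array([[20.0]])                       # hypothetical speed reported by the app
predicted_cadence = lr_app_gear7.predict(measured_speed)  # speed -> cadence (gear 7, app model)
predicted_power = lr_tacx_gear7.predict(np.reshape(predicted_cadence, (-1, 1)))  # cadence -> power (gear 7, tacx model)
print(predicted_cadence, predicted_power)
```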
Validation

Cross validation with X_test of the tacx data, validating the score of the predicted values.
random_state = 2 X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8, random_state=random_state) validate_lin_reg(X_train, y_train, X_test, y_test, LinearRegression())
Shape X_train/X_test: (357, 1)/(90, 1) Error R²: 1.00 MSE error (mean squared error / variance): 1.06 sqrt(MSE) (standard deviation): 1.03 Max error: 2.8912017526499483 estimator.coefficients: [1.70889424] Cross validation: [0.99506023 0.99816251 0.9957887 0.99589043 0.99734035]
MIT
regression_by_cadence.ipynb
bruennijs/indoor-virtual-power-prediction
Learning objectives:
* Introduction to lists.
* How to create and access elements from a list.
* Adding, removing and changing the elements of the list.
alist = [2,3,45,'python', -98] alist names = ['Tom', 'Mak', 'Arjun', 'Rahul'] names blist = [1,2,3,4, [-1, -2, -3], 'python', names] blist
_____no_output_____
MIT
module_2_programming/lists.ipynb
wiplane/foundations-of-datascience-ml
Access elements of a list
alist alist[2] alist[3] alist[4] alist[-1] alist[-5]
_____no_output_____
MIT
module_2_programming/lists.ipynb
wiplane/foundations-of-datascience-ml
Modifying a list
alist alist[3] = 50 print(alist) alist.append(100) alist alist.insert(3, 1000) print(alist) alist.pop() alist alist.remove(1000) alist
_____no_output_____
MIT
module_2_programming/lists.ipynb
wiplane/foundations-of-datascience-ml
Slicing a list to obtain a subset of values
alist = [9, 10, -1, 2, 5, 7] alist alist[1:4:1] alist[1:4] alist[1:4:2] alist[:] alist[2:] alist[:4]
_____no_output_____
MIT
module_2_programming/lists.ipynb
wiplane/foundations-of-datascience-ml
Sort a list
item_prices = [1200, 200, 25, 500.45, 234, 540]
item_prices

##sort the list
item_prices.sort()
print(item_prices)

item_prices.sort(reverse=True)
print(item_prices)

names
names.sort()
names

len(names)
len(item_prices)
item_prices
_____no_output_____
MIT
module_2_programming/lists.ipynb
wiplane/foundations-of-datascience-ml
Dragon curve example from the [L-systems](../../topics/geometry/lsystems.ipynb) topic notebook in ``examples/topics/geometry``.

Most examples work across multiple plotting backends; this example is also available for:

* [Bokeh - dragon_curve](../bokeh/dragon_curve.ipynb)
import holoviews as hv import numpy as np hv.extension('matplotlib')
_____no_output_____
BSD-3-Clause
examples/gallery/demos/matplotlib/dragon_curve.ipynb
jsignell/holoviews
L-system definition

The following class is a simplified version of the approach used in the [L-systems](../../topics/geometry/lsystems.ipynb) notebook, made specifically for plotting the [Dragon Curve](https://en.wikipedia.org/wiki/Dragon_curve).
class DragonCurve(object): "L-system agent that follows rules to generate the Dragon Curve" initial ='FX' productions = {'X':'X+YF+', 'Y':'-FX-Y'} dragon_rules = {'F': lambda t,d,a: t.forward(d), 'B': lambda t,d,a: t.back(d), '+': lambda t,d,a: t.rotate(-a), '-': lambda t,d,a: t.rotate(a), 'X':lambda t,d,a: None, 'Y':lambda t,d,a: None } def __init__(self, x=0,y=0, iterations=1): self.heading = 0 self.distance = 5 self.angle = 90 self.x, self.y = x,y self.trace = [(self.x, self.y)] self.process(self.expand(iterations), self.distance, self.angle) def process(self, instructions, distance, angle): for i in instructions: self.dragon_rules[i](self, distance, angle) def expand(self, iterations): "Expand an initial symbol with the given production rules" expansion = self.initial for i in range(iterations): intermediate = "" for ch in expansion: intermediate = intermediate + self.productions.get(ch,ch) expansion = intermediate return expansion def forward(self, distance): self.x += np.cos(2*np.pi * self.heading/360.0) self.y += np.sin(2*np.pi * self.heading/360.0) self.trace.append((self.x,self.y)) def rotate(self, angle): self.heading += angle def back(self, distance): self.heading += 180 self.forward(distance) self.heading += 180 @property def path(self): return hv.Path([self.trace])
_____no_output_____
BSD-3-Clause
examples/gallery/demos/matplotlib/dragon_curve.ipynb
jsignell/holoviews
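Before plotting, it can help to see the string rewriting itself. This short snippet (not part of the original example) just calls the class's `expand()` method for the first few iterations:

```python
curve = DragonCurve(iterations=1)
for i in range(3):
    print(i, curve.expand(i))
# 0 FX
# 1 FX+YF+
# 2 FX+YF++-FX-YF+
```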
Plot
%%output size=200 %%opts Path {+framewise} [xaxis=None yaxis=None title_format=''] (color='black' linewidth=1) def pad_extents(path): "Add 5% padding around the path" minx, maxx = path.range('x') miny, maxy = path.range('y') xpadding = ((maxx-minx) * 0.1)/2 ypadding = ((maxy-miny) * 0.1)/2 path.extents = (minx-xpadding, miny-ypadding, maxx+xpadding, maxy+ypadding) return path hmap = hv.HoloMap(kdims='Iteration') for i in range(7,17): path = DragonCurve(-200, 0, i).path hmap[i] = pad_extents(path) hmap
_____no_output_____
BSD-3-Clause
examples/gallery/demos/matplotlib/dragon_curve.ipynb
jsignell/holoviews
Netflix user behaviour

Requirements

[Jupyter Notebook](https://jupyter.org/install)
[Apache Toree](https://toree.incubator.apache.org/)
[sampleDataNetflix.tsv](https://guicaro.com/sampleDataNetflix.tsv) placed in the local filesystem, with the path updated in 1) below

Notes
* I used a combination of Jupyter notebook and the Apache Toree project as it makes it easy and fast to explore a dataset.
* I was part of the team that came up with [Apache Toree (aka The Spark Kernel)](https://twitter.com/guicaro/status/543541995247910917), and till now I think it's still the only Jupyter kernel that ties to a Spark session and is backed by Apache. It solved many issues for us back when we were developing applications in Spark.

Future
* I was hoping to use the [Voila](https://github.com/voila-dashboards/voila) project to create an interactive dashboard for data scientists where they could move a slider widget to change the parameters in my SQL queries, and thus change the time window to search. So, for example, a data scientist might want to search for users only between 8 and 9 in the morning.
* I wanted to randomly generate a bigger dataset using rules so that we could at least have more data to play with.

1. Let's read our data

We will read in a TSV file and try to infer the schema, since the data types we are using are not very complex.
val sessions = spark.read.option("header", "true") .option("sep", "\t") .option("inferSchema","true") .csv("/Users/memo/Desktop/netflixSpark/sampleDataNetflix.tsv") sessions.printSchema sessions.show(2)
+-------+---------------+--------------------+----------+--------+----+----------+ |user_id|navigation_page| url|session_id| date|hour| timestamp| +-------+---------------+--------------------+----------+--------+----+----------+ | 1001| HomePage|https://www.netfl...| 6001|20181125| 11|1543145019| | 1001| OriginalsGenre|https://www.netfl...| 6001|20181125| 11|1543144483| +-------+---------------+--------------------+----------+--------+----+----------+ only showing top 2 rows
MIT
Netflix Exploration.ipynb
guicaro/guicaro.github.io
2. Let's create a temp SQL table so we can use the SQL magic in Apache Toree to get our information
sessions.registerTempTable("SESSIONS")
_____no_output_____
MIT
Netflix Exploration.ipynb
guicaro/guicaro.github.io
a) Find all users who have visited OurPlanetTitle Page. Using DISTINCT to show unique users
%%SQL select distinct user_id from SESSIONS where navigation_page = 'OurPlanetTitle'
_____no_output_____
MIT
Netflix Exploration.ipynb
guicaro/guicaro.github.io
b) Find all users who have visited the OurPlanetTitle Page only once. The page visit count is shown just for validation; it can easily be removed from the projection list in the query.
%%SQL select user_id, count(user_id) as page_visits from SESSIONS where navigation_page = 'OurPlanetTitle' group by user_id having page_visits == 1
_____no_output_____
MIT
Netflix Exploration.ipynb
guicaro/guicaro.github.io
c) Find all users who have visited HomePage -> OriginalsGenre -> OurPlanetTitle -> HomePage. We use the timestamps to enforce the order of the path and make sure it all happens within the same `session_id`.
%%SQL select distinct a.user_id from sessions a, sessions b, sessions c, sessions d where a.user_id = b.user_id and b.user_id = c.user_id and c.user_id = d.user_id and a.navigation_page = 'HomePage' and b.navigation_page = 'OriginalsGenre' and c.navigation_page = 'OurPlanetTitle' and d.navigation_page = 'HomePage' and a.timestamp < b.timestamp and b.timestamp < c.timestamp and c.timestamp < d.timestamp and a.session_id = b.session_id and b.session_id = c.session_id and c.session_id = d.session_id
_____no_output_____
MIT
Netflix Exploration.ipynb
guicaro/guicaro.github.io
d) Find all users who landed on the LogIn Page from a Title Page. The LIKE operator is not the most performant, but the SQL optimizer should be able to tell that my 2nd where clause can improve the selectivity of this query. I am using the `timestamp` column to make sure that before landing on a **Login** page, the user first comes from a **Title** page.
%%SQL select a.user_id from sessions a, sessions b where a.user_id = b.user_id and b.navigation_page = 'LogIn' and a.navigation_page like '%Title' and a.timestamp < b.timestamp
_____no_output_____
MIT
Netflix Exploration.ipynb
guicaro/guicaro.github.io