Dataset columns (one row per notebook cell): markdown, code, output, license, path, repo_name — all string-valued.
Analyze Gradient Descent Progress

The plot below illustrates how the cost function value changes with each iteration. You should see it decreasing. If the cost value increases instead, gradient descent has probably overshot the minimum (usually because the learning rate is too large) and is moving further away from it with each step. The plot also gives you a sense of how many iterations are needed to reach a near-optimal cost value.
# Draw gradient descent progress for each label. labels = logistic_regression.unique_labels for index, label in enumerate(labels): plt.plot(range(len(costs[index])), costs[index], label=labels[index]) plt.xlabel('Gradient Steps') plt.ylabel('Cost') plt.legend() plt.show()
_____no_output_____
MIT
notebooks/logistic_regression/multivariate_logistic_regression_demo.ipynb
pugnator-12/homemade-machine-learning
Calculate Model Training Precision

Calculate how many training and test examples have been classified correctly. Normally we want the test precision to be as high as possible. If training precision is high but test precision is low, the model is probably overfitted: it works really well on the training set but does not generalize to new, unseen data from the test set. In that case you may want to tune the `regularization_param` parameter to fight the overfitting.
# Make training set predictions. y_train_predictions = logistic_regression.predict(x_train) y_test_predictions = logistic_regression.predict(x_test) # Check what percentage of them are actually correct. train_precision = np.sum(y_train_predictions == y_train) / y_train.shape[0] * 100 test_precision = np.sum(y_test_predictions == y_test) / y_test.shape[0] * 100 print('Training Precision: {:5.4f}%'.format(train_precision)) print('Test Precision: {:5.4f}%'.format(test_precision))
Training Precision: 96.6833% Test Precision: 90.4500%
MIT
notebooks/logistic_regression/multivariate_logistic_regression_demo.ipynb
pugnator-12/homemade-machine-learning
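The gap between training and test precision above hints at mild overfitting. As a rough illustration of how one might tune regularization, here is a minimal, self-contained sketch using scikit-learn's digits dataset and `LogisticRegression` — an analogue, not the homemade implementation used in this notebook. In scikit-learn, `C` is the inverse regularization strength, so smaller `C` means stronger regularization.

```python
# Hedged sketch: sweep a few regularization strengths and compare train/test accuracy.
# scikit-learn is used here as a stand-in for the notebook's homemade LogisticRegression.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
x_train, x_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0)

for c in (0.001, 0.01, 0.1, 1.0):
    model = LogisticRegression(C=c, max_iter=1000)
    model.fit(x_train, y_train)
    print('C={:<6} train={:6.2f}%  test={:6.2f}%'.format(
        c, 100 * model.score(x_train, y_train), 100 * model.score(x_test, y_test)))
```

Whichever value keeps the two numbers closest without sacrificing test accuracy plays the same role as a well-chosen `regularization_param`.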
Plot Test Dataset Predictions

To illustrate how our model classifies unseen examples, let's plot the first 64 predictions for the test dataset. Digits drawn in green below were recognized correctly by our classifier, while digits drawn in red were misclassified. Above each digit image you can see the class (the number) that was predicted for it.
# How many numbers to display. numbers_to_display = 64 # Calculate the number of cells that will hold all the numbers. num_cells = math.ceil(math.sqrt(numbers_to_display)) # Make the plot a little bit bigger than default one. plt.figure(figsize=(15, 15)) # Go through the first numbers in a test set and plot them. for plot_index in range(numbers_to_display): # Extrace digit data. digit_label = y_test[plot_index, 0] digit_pixels = x_test[plot_index, :] # Predicted label. predicted_label = y_test_predictions[plot_index][0] # Calculate image size (remember that each picture has square proportions). image_size = int(math.sqrt(digit_pixels.shape[0])) # Convert image vector into the matrix of pixels. frame = digit_pixels.reshape((image_size, image_size)) # Plot the number matrix. color_map = 'Greens' if predicted_label == digit_label else 'Reds' plt.subplot(num_cells, num_cells, plot_index + 1) plt.imshow(frame, cmap=color_map) plt.title(predicted_label) plt.tick_params(axis='both', which='both', bottom=False, left=False, labelbottom=False, labelleft=False) # Plot all subplots. plt.subplots_adjust(hspace=0.5, wspace=0.5) plt.show()
_____no_output_____
MIT
notebooks/logistic_regression/multivariate_logistic_regression_demo.ipynb
pugnator-12/homemade-machine-learning
Question: What is the safest type of intersection?

Let's see how accidents split by the place of the event and find out where we can feel safest. The first step before any data analysis is to import the required libraries and data. The information needed to understand the columns is available here: https://www.kaggle.com/ahmedlahlou/accidents-in-france-from-2005-to-2016.
import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns caracteristics = pd.read_csv('data/caracteristics.csv', encoding='latin1') caracteristics.head()
_____no_output_____
CC0-1.0
Which intersection has the highest number of accidents.ipynb
danpeczek/france-accidents
Let's change the values in the intersection column from numbers to categorical values; below is the look-up table used for the conversion:
* 1 - Out of intersection
* 2 - Intersection in X
* 3 - Intersection in T
* 4 - Intersection in Y
* 5 - Intersection with more than 4 branches
* 6 - Giratory
* 7 - Place
* 8 - Level crossing
* 9 - Other intersection

But first let's check for missing values in the 'int' (intersection) column.
caracteristics.columns[caracteristics.isna().sum() != 0]
_____no_output_____
CC0-1.0
Which intersection has the highest number of accidents.ipynb
danpeczek/france-accidents
So it looks like the 'int' column has no missing values, which is convenient in this case. Let's go ahead and rename the values in the 'int' column.
int_dict = { '1': 'Out of intersection', '2': 'X intersection', '3': 'T intersection', '4': 'Y intersection', '5': 'More than 4 branches intersection', '6': 'Giratory', '7': 'Place', '8': 'Level crossing', '9': 'Other' } caracteristics['int'] = caracteristics['int'].astype(str) caracteristics['int'] = caracteristics['int'].replace(int_dict) caracteristics['int'] = pd.Categorical(caracteristics['int'], list(int_dict.values())) caracteristics.head() plt.clf() plt.figure(figsize=(10,10)) ax = sns.countplot(y = 'int', data=caracteristics) ax.set_title('Number of accidents based on the intersection type') ax.set_xlabel('Number of accidents') ax.set_ylabel('Intersection') plt.show()
_____no_output_____
CC0-1.0
Which intersection has the highest number of accidents.ipynb
danpeczek/france-accidents
Install the pymongo package
- mac - pip(3) install pymongo
- Windows - conda install -c anaconda pymongo
import pymongo, requests
_____no_output_____
MIT
database/pymongo.ipynb
Junhojuno/TIL
1. Connect to the server (create a client)
client = pymongo.MongoClient('mongodb://13.125.237.246:27017') client
_____no_output_____
MIT
database/pymongo.ipynb
Junhojuno/TIL
2. Select a database
db = client.dss db
_____no_output_____
MIT
database/pymongo.ipynb
Junhojuno/TIL
3. Check the list of collections in the database
db.collection_names()
_____no_output_____
MIT
database/pymongo.ipynb
Junhojuno/TIL
4. Select a collection
collection = db.info collection
_____no_output_____
MIT
database/pymongo.ipynb
Junhojuno/TIL
5. find
# find_one : 한 개의 document를 가져옵니다. document = collection.find_one({"subject" : "java"}) type(document), document # find : 여러 개의 documents를 가져옵니다 documents = collection.find({"subject": "java"}) documents datas = list(documents) len(datas) datas list(documents)
_____no_output_____
MIT
database/pymongo.ipynb
Junhojuno/TIL
Everything is gone: once a cursor has been consumed (for example by wrapping it in `list()`), iterating it again returns nothing.
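A minimal sketch of why the documents seem to disappear, reusing the `collection` selected above (the query values are just the ones from this notebook):

```python
# A PyMongo cursor is consumed as you iterate it, so a second pass yields nothing.
docs = collection.find({"subject": "java"})
first_pass = list(docs)    # iterates the cursor and consumes it
second_pass = list(docs)   # the same cursor is now exhausted -> []
print(len(first_pass), len(second_pass))

# To read the documents again, issue a fresh query (or call docs.rewind()).
fresh = list(collection.find({"subject": "java"}))
```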
# count - documents의 갯수를 가져오는 함수 documents = collection.find() documents.count() # sort - 정렬 documents = collection.find({"level":{"$lte":3}}).sort("level", pymongo.DESCENDING) list(documents)
_____no_output_____
MIT
database/pymongo.ipynb
Junhojuno/TIL
6. insert
# insert_one data = {"subject":"css", "level":1, "comments":[{"name":"peter", "msg":"easy"}]} result = collection.insert_one(data) result result.inserted_id # insert_many datas = [ {"subject":"webpack", "level":2, "comments":[{"name":"peter", "msg":"easy"}]}, {"subject":"gulp", "level":3, "comments":[{"name":"peter", "msg":"easy"}]}, {"subject":"bower", "level":4, "comments":[{"name":"peter", "msg":"easy"}]} ] result = collection.insert_many(datas) result result.inserted_ids
_____no_output_____
MIT
database/pymongo.ipynb
Junhojuno/TIL
Crawl Zigbang (직방) listing data and store it
url = "https://api.zigbang.com/v3/items?detail=true&item_ids=[12258942,12217921,12251354,12042761,12270198,12263778,12149733,12263079,12046500,12227516,12245261,12258364,11741210,11947081,12081429,12248641,12039772,12148952,12271001,12201879,12269163,12268373,12268568,12204018,12247416,12241201,12174611,12254380,12233724,12139836,11869595,12178704,12262681,12261598,12106912,12248115,12154374,12240537,12245412,12155533,12198385,12203883,12251810,12239779,12013638,12218505,12249844,12184761,12258707,12096937,12191641,12256520,12163720,12241556,12245758,12272387,12256200,12260120,12195600,12263256]" response = requests.get(url) response # parsing - [{},{},{},{},{},..........] zigbang_dict_list = response.json().get("items") # 최상단 items를 벗겨냄 len(zigbang_dict_list) items = [item["item"] for item in zigbang_dict_list] len(items) items[:2] collection = client.crawling.zigbang result_zigbang = collection.insert_many(items) result_zigbang
_____no_output_____
MIT
database/pymongo.ipynb
Junhojuno/TIL
Extract listings with rent of 50 or less
query = {"rent":{"$lte":50}} documents = collection.find(query) documents datas = list(documents) len(datas) # pandas로 만들어보자 df = pd.DataFrame(datas) df.tail() filtered_df = df[['rent','options','size','deposit']] filtered_df.tail() query = {"rent":{"$lte":50}} documents = collection.find(query, {"_id":False,"deposit":True, "rent":True, "options":True,"size":True}) documents df = pd.DataFrame(list(documents)) df.tail()
_____no_output_____
MIT
database/pymongo.ipynb
Junhojuno/TIL
delete - database
client.drop_database("crawling")
_____no_output_____
MIT
database/pymongo.ipynb
Junhojuno/TIL
delete - collection
client.crawling.drop_collection("zigbang")
_____no_output_____
MIT
database/pymongo.ipynb
Junhojuno/TIL
Synthetic data: Categorical variables
import numpy as np import pandas as pd from sklearn.model_selection import train_test_split from sklearn.model_selection import KFold from sklearn.utils import shuffle from sklearn.metrics import accuracy_score from synthesize_data import synthesize_data import expectation_reflection as ER from sklearn.linear_model import LogisticRegression from sklearn.naive_bayes import GaussianNB from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier import matplotlib.pyplot as plt %matplotlib inline np.random.seed(1) def inference(X_train,y_train,X_test,y_test,method='expectation_reflection'): if method == 'expectation_reflection': h0,w = ER.fit(X_train,y_train,niter_max=100,regu=0.001) y_pred = ER.predict(X_test,h0,w) else: if method == 'logistic_regression': model = LogisticRegression(solver='liblinear') if method == 'naive_bayes': model = GaussianNB() if method == 'random_forest': model = RandomForestClassifier(criterion = "gini", random_state = 1, max_depth=3, min_samples_leaf=5,n_estimators=100) if method == 'decision_tree': model = DecisionTreeClassifier() model.fit(X_train, y_train) y_pred = model.predict(X_test) accuracy = accuracy_score(y_test,y_pred) return accuracy def compare_inference(X,y,train_size): npred = 100 accuracy = np.zeros((len(list_methods),npred)) for ipred in range(npred): X, y = shuffle(X, y) X_train0,X_test,y_train0,y_test = train_test_split(X,y,test_size=0.2,random_state = ipred) idx_train = np.random.choice(len(y_train0),size=int(train_size*len(y)),replace=False) X_train,y_train = X_train0[idx_train],y_train0[idx_train] for i,method in enumerate(list_methods): accuracy[i,ipred] = inference(X_train,y_train,X_test,y_test,method) return accuracy.mean(axis=1),accuracy.std(axis=1) l = 10000 ; n = 40 ; g = 4. X,y = synthesize_data(l,n,g,data_type='categorical') np.unique(y,return_counts=True) list_train_size = [0.8,0.6,0.4,0.2,0.1] list_methods=['logistic_regression','naive_bayes','random_forest','decision_tree','expectation_reflection'] acc = np.zeros((len(list_train_size),len(list_methods))) acc_std = np.zeros((len(list_train_size),len(list_methods))) for i,train_size in enumerate(list_train_size): acc[i,:],acc_std[i,:] = compare_inference(X,y,train_size) print(train_size,acc[i,:]) acc_std df = pd.DataFrame(acc,columns = list_methods) df.insert(0, "train_size",list_train_size, True) df plt.figure(figsize=(4,3)) plt.plot(list_train_size,acc[:,0],'k--',marker='o',mfc='none',label='Logistic Regression') plt.plot(list_train_size,acc[:,1],'b--',marker='s',mfc='none',label='Naive Bayes') plt.plot(list_train_size,acc[:,2],'r--',marker='^',mfc='none',label='Random Forest') plt.plot(list_train_size,acc[:,4],'k-',marker='o',label='Expectation Reflection') plt.xlabel('train size') plt.ylabel('accuracy mean') plt.legend() plt.figure(figsize=(4,3)) plt.plot(list_train_size,acc_std[:,0],'k--',marker='o',mfc='none',label='Logistic Regression') plt.plot(list_train_size,acc_std[:,1],'b--',marker='s',mfc='none',label='Naive Bayes') plt.plot(list_train_size,acc_std[:,2],'r--',marker='^',mfc='none',label='Random Forest') plt.plot(list_train_size,acc_std[:,4],'k-',marker='o',label='Expectation Reflection') plt.xlabel('train size') plt.ylabel('accuracy standard deviation') plt.legend()
_____no_output_____
MIT
.ipynb_checkpoints/category-checkpoint.ipynb
danhtaihoang/expectation-reflection
DDPG - BipedalWalker-v2
- Xinyao Qian
- Tianhao Liu

Get familiar with the BipedalWalker-v2 environment first

We find that BipedalWalker behaves embarrassingly badly when it follows a random walking strategy.
import tensorflow as tf import numpy as np import gym # Load Environment ENV_NAME = 'BipedalWalker-v2' env = gym.make(ENV_NAME) # Reproducible environment parameters env.seed(1) s=env.reset() episode=100 steps=5000 for i in range(episode): for j in range(steps): env.render() a=env.action_space.sample() s_,r,d,_=env.step(a) if d: s=env.reset()
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype. WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.
MIT
ActorCritic/.ipynb_checkpoints/DDPG-Copy1-checkpoint.ipynb
bluemapleman/Maple-Reinforcement-Learning
Our solution

**Since the action space of BipedalWalker is continuous, value-based models such as Q-Learning or DQN are not applicable.** Value-based models try to fit a value function that tells us how good it is to be at a certain state s (V(s)) or to take action a at state s (Q(s,a)), and we then still need to choose a specific action with an exploration strategy (e.g. $\epsilon$-greedy). Obviously, that cannot work when the actions are continuous and therefore uncountable.

So we consider **policy-based models**, for example REINFORCE. However, REINFORCE can only update its parameters (learn) when an episode ends, which slows down convergence. This leads us to the family of models called **Actor-Critic, which combines the advantages of value-based and policy-based models and lets the policy update itself at every step**. Specifically, we simultaneously train a policy-gradient network and a Q-learning network. The policy network acts as the actor: it takes in observations and outputs the action to take. The value network acts as the critic: it takes in observations and tells the actor how 'good' the current state is, so the actor can judge how good its last action was and update its parameters from this feedback, while the critic updates its own parameters the way Q-learning does. **In a sense, the actor and the critic supervise each other so that both keep improving.**

![](https://morvanzhou.github.io/static/results/ML-intro/AC3.png)
> https://morvanzhou.github.io/static/results/ML-intro/AC3.png

Environment preparation & Definition of Classes: Actor, Critic, Memory
import gym import os import tensorflow as tf import numpy as np import shutil np.random.seed(1) tf.set_random_seed(1) # Load Environment ENV_NAME = 'BipedalWalker-v2' env = gym.make(ENV_NAME) # Repeoducible environment parameters env.seed(1) STATE_DIM = env.observation_space.shape[0] # 24 environment variables ACTION_DIM = env.action_space.shape[0] # 4 consecutive actions ACTION_BOUND = env.action_space.high # [1, 1, 1, 1] # all placeholder for tf with tf.name_scope('S'): S = tf.placeholder(tf.float32, shape=[None, STATE_DIM], name='s') with tf.name_scope('R'): R = tf.placeholder(tf.float32, [None, 1], name='r') with tf.name_scope('S_'): S_ = tf.placeholder(tf.float32, shape=[None, STATE_DIM], name='s_') ############################### Actor #################################### class Actor(object): def __init__(self, sess, action_dim, action_bound, learning_rate, t_replace_iter): self.sess = sess self.a_dim = action_dim self.action_bound = action_bound self.lr = learning_rate self.t_replace_iter = t_replace_iter self.t_replace_counter = 0 with tf.variable_scope('Actor'): # input s, output a self.a = self._build_net(S, scope='eval_net', trainable=True) # input s_, output a, get a_ for critic self.a_ = self._build_net(S_, scope='target_net', trainable=False) self.e_params = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='Actor/eval_net') self.t_params = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='Actor/target_net') def _build_net(self, s, scope, trainable): with tf.variable_scope(scope): init_w = tf.random_normal_initializer(0., 0.01) init_b = tf.constant_initializer(0.01) net = tf.layers.dense(s, 500, activation=tf.nn.relu, kernel_initializer=init_w, bias_initializer=init_b, name='l1', trainable=trainable) net = tf.layers.dense(net, 200, activation=tf.nn.relu, kernel_initializer=init_w, bias_initializer=init_b, name='l2', trainable=trainable) with tf.variable_scope('a'): actions = tf.layers.dense(net, self.a_dim, activation=tf.nn.tanh, kernel_initializer=init_w, bias_initializer=init_b, name='a', trainable=trainable) scaled_a = tf.multiply(actions, self.action_bound, name='scaled_a') # Scale output to -action_bound to action_bound return scaled_a def learn(self, s): # batch update self.sess.run(self.train_op, feed_dict={S: s}) if self.t_replace_counter % self.t_replace_iter == 0: self.sess.run([tf.assign(t, e) for t, e in zip(self.t_params, self.e_params)]) self.t_replace_counter += 1 def choose_action(self, s): s = s[np.newaxis, :] # single state return self.sess.run(self.a, feed_dict={S: s})[0] # single action def add_grad_to_graph(self, a_grads): with tf.variable_scope('policy_grads'): # ys = policy; # xs = policy's parameters; # self.a_grads = the gradients of the policy to get more Q # tf.gradients will calculate dys/dxs with a initial gradients for ys, so this is dq/da * da/dparams self.policy_grads_and_vars = tf.gradients(ys=self.a, xs=self.e_params, grad_ys=a_grads) with tf.variable_scope('A_train'): opt = tf.train.RMSPropOptimizer(-self.lr) # (- learning rate) for ascent policy self.train_op = opt.apply_gradients(zip(self.policy_grads_and_vars, self.e_params), global_step=GLOBAL_STEP) ######################################## Critic ######################################### class Critic(object): def __init__(self, sess, state_dim, action_dim, learning_rate, gamma, t_replace_iter, a, a_): self.sess = sess self.s_dim = state_dim self.a_dim = action_dim self.lr = learning_rate self.gamma = gamma self.t_replace_iter = t_replace_iter self.t_replace_counter = 0 with 
tf.variable_scope('Critic'): # Input (s, a), output q self.a = a self.q = self._build_net(S, self.a, 'eval_net', trainable=True) # Input (s_, a_), output q_ for q_target self.q_ = self._build_net(S_, a_, 'target_net', trainable=False) # target_q is based on a_ from Actor's target_net self.e_params = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='Critic/eval_net') self.t_params = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='Critic/target_net') with tf.variable_scope('target_q'): self.target_q = R + self.gamma * self.q_ with tf.variable_scope('abs_TD'): self.abs_td = tf.abs(self.target_q - self.q) self.ISWeights = tf.placeholder(tf.float32, [None, 1], name='IS_weights') with tf.variable_scope('TD_error'): self.loss = tf.reduce_mean(self.ISWeights * tf.squared_difference(self.target_q, self.q)) with tf.variable_scope('C_train'): self.train_op = tf.train.AdamOptimizer(self.lr).minimize(self.loss, global_step=GLOBAL_STEP) with tf.variable_scope('a_grad'): self.a_grads = tf.gradients(self.q, a)[0] # tensor of gradients of each sample (None, a_dim) def _build_net(self, s, a, scope, trainable): with tf.variable_scope(scope): init_w = tf.random_normal_initializer(0., 0.01) init_b = tf.constant_initializer(0.01) with tf.variable_scope('l1'): n_l1 = 700 # combine the action and states together in this way w1_s = tf.get_variable('w1_s', [self.s_dim, n_l1], initializer=init_w, trainable=trainable) w1_a = tf.get_variable('w1_a', [self.a_dim, n_l1], initializer=init_w, trainable=trainable) b1 = tf.get_variable('b1', [1, n_l1], initializer=init_b, trainable=trainable) net = tf.nn.relu(tf.matmul(s, w1_s) + tf.matmul(a, w1_a) + b1) with tf.variable_scope('l2'): net = tf.layers.dense(net, 20, activation=tf.nn.relu, kernel_initializer=init_w, bias_initializer=init_b, name='l2', trainable=trainable) with tf.variable_scope('q'): q = tf.layers.dense(net, 1, kernel_initializer=init_w, bias_initializer=init_b, trainable=trainable) # Q(s,a) return q def learn(self, s, a, r, s_, ISW): _, abs_td = self.sess.run([self.train_op, self.abs_td], feed_dict={S: s, self.a: a, R: r, S_: s_, self.ISWeights: ISW}) if self.t_replace_counter % self.t_replace_iter == 0: self.sess.run([tf.assign(t, e) for t, e in zip(self.t_params, self.e_params)]) self.t_replace_counter += 1 return abs_td ######################################## Assistanting Class: SumTree and Memory ######################################### class SumTree(object): """ This SumTree code is modified version and the original code is from: https://github.com/jaara/AI-blog/blob/master/SumTree.py Story the data with it priority in tree and data frameworks. 
""" data_pointer = 0 def __init__(self, capacity): self.capacity = capacity # for all priority values self.tree = np.zeros(2 * capacity - 1)+1e-5 # [--------------Parent nodes-------------][-------leaves to recode priority-------] # size: capacity - 1 size: capacity self.data = np.zeros(capacity, dtype=object) # for all transitions # [--------------data frame-------------] # size: capacity def add_new_priority(self, p, data): leaf_idx = self.data_pointer + self.capacity - 1 self.data[self.data_pointer] = data # update data_frame self.update(leaf_idx, p) # update tree_frame self.data_pointer += 1 if self.data_pointer >= self.capacity: # replace when exceed the capacity self.data_pointer = 0 def update(self, tree_idx, p): change = p - self.tree[tree_idx] self.tree[tree_idx] = p self._propagate_change(tree_idx, change) def _propagate_change(self, tree_idx, change): """change the sum of priority value in all parent nodes""" parent_idx = (tree_idx - 1) // 2 self.tree[parent_idx] += change if parent_idx != 0: self._propagate_change(parent_idx, change) def get_leaf(self, lower_bound): leaf_idx = self._retrieve(lower_bound) # search the max leaf priority based on the lower_bound data_idx = leaf_idx - self.capacity + 1 return [leaf_idx, self.tree[leaf_idx], self.data[data_idx]] def _retrieve(self, lower_bound, parent_idx=0): """ Tree structure and array storage: Tree index: 0 -> storing priority sum / \ 1 2 / \ / \ 3 4 5 6 -> storing priority for transitions Array type for storing: [0,1,2,3,4,5,6] """ left_child_idx = 2 * parent_idx + 1 right_child_idx = left_child_idx + 1 if left_child_idx >= len(self.tree): # end search when no more child return parent_idx if self.tree[left_child_idx] == self.tree[right_child_idx]: return self._retrieve(lower_bound, np.random.choice([left_child_idx, right_child_idx])) if lower_bound <= self.tree[left_child_idx]: # downward search, always search for a higher priority node return self._retrieve(lower_bound, left_child_idx) else: return self._retrieve(lower_bound - self.tree[left_child_idx], right_child_idx) @property def root_priority(self): return self.tree[0] # the root class Memory(object): # stored as ( s, a, r, s_ ) in SumTree """ This SumTree code is modified version and the original code is from: https://github.com/jaara/AI-blog/blob/master/Seaquest-DDQN-PER.py """ epsilon = 0.001 # small amount to avoid zero priority alpha = 0.6 # [0~1] convert the importance of TD error to priority beta = 0.4 # importance-sampling, from initial value increasing to 1 beta_increment_per_sampling = 1e-5 # annealing the bias abs_err_upper = 1 # for stability refer to paper def __init__(self, capacity): self.tree = SumTree(capacity) def store(self, error, transition): p = self._get_priority(error) self.tree.add_new_priority(p, transition) def prio_sample(self, n): batch_idx, batch_memory, ISWeights = [], [], [] segment = self.tree.root_priority / n self.beta = np.min([1, self.beta + self.beta_increment_per_sampling]) # max = 1 min_prob = np.min(self.tree.tree[-self.tree.capacity:]) / self.tree.root_priority maxiwi = np.power(self.tree.capacity * min_prob, -self.beta) # for later normalizing ISWeights for i in range(n): a = segment * i b = segment * (i + 1) lower_bound = np.random.uniform(a, b) while True: idx, p, data = self.tree.get_leaf(lower_bound) if type(data) is int: i -= 1 lower_bound = np.random.uniform(segment * i, segment * (i+1)) else: break prob = p / self.tree.root_priority ISWeights.append(self.tree.capacity * prob) batch_idx.append(idx) batch_memory.append(data) 
ISWeights = np.vstack(ISWeights) ISWeights = np.power(ISWeights, -self.beta) / maxiwi # normalize return batch_idx, np.vstack(batch_memory), ISWeights def random_sample(self, n): idx = np.random.randint(0, self.tree.capacity, size=n, dtype=np.int) return np.vstack(self.tree.data[idx]) def update(self, idx, error): p = self._get_priority(error) self.tree.update(idx, p) def _get_priority(self, error): error += self.epsilon # avoid 0 clipped_error = np.clip(error, 0, self.abs_err_upper) return np.power(clipped_error, self.alpha) print('Finished!')
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype. WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype. Finished!
MIT
ActorCritic/.ipynb_checkpoints/DDPG-Copy1-checkpoint.ipynb
bluemapleman/Maple-Reinforcement-Learning
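Before the training loop, it may help to summarize (in our own wording, not the notebook's) the two updates that the `Critic` and `Actor` classes above implement. The critic minimizes the importance-weighted squared TD error against a target computed with the target networks:

$$L(\phi) = \mathbb{E}\left[\left(r + \gamma\, Q'(s', \mu'(s')) - Q_\phi(s, a)\right)^2\right]$$

while the actor follows the deterministic policy gradient, ascending the critic's action-value estimate:

$$\nabla_\theta J \approx \mathbb{E}\left[\nabla_a Q_\phi(s, a)\big|_{a=\mu_\theta(s)}\; \nabla_\theta\, \mu_\theta(s)\right]$$

which is exactly what `add_grad_to_graph` wires up via `tf.gradients(ys=self.a, xs=self.e_params, grad_ys=a_grads)`.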
Main loop for training
######################################## Hyperparameters ######################################## MAX_EPISODES = 500 LR_A = 0.000005 # learning rate for actor LR_C = 0.000005 # learning rate for mcritic GAMMA = 0.999 # reward discount REPLACE_ITER_A = 1700 REPLACE_ITER_C = 1500 MEMORY_CAPACITY = 200000 BATCH_SIZE = 32 DISPLAY_THRESHOLD = 100 # display until the running reward > 100 DATA_PATH = './data' SAVE_MODEL_ITER = 100000 RENDER = False OUTPUT_GRAPH = False GLOBAL_STEP = tf.Variable(0, trainable=False) INCREASE_GS = GLOBAL_STEP.assign(tf.add(GLOBAL_STEP, 1)) LR_A = tf.train.exponential_decay(LR_A, GLOBAL_STEP, 10000, .97, staircase=True) LR_C = tf.train.exponential_decay(LR_C, GLOBAL_STEP, 10000, .97, staircase=True) END_POINT = (200 - 10) * (14/30) # from game ################################################## LOAD_MODEL = True # Whether to load trained model# ################################################## sess = tf.Session() # Create actor and critic. actor = Actor(sess, ACTION_DIM, ACTION_BOUND, LR_A, REPLACE_ITER_A) critic = Critic(sess, STATE_DIM, ACTION_DIM, LR_C, GAMMA, REPLACE_ITER_C, actor.a, actor.a_) actor.add_grad_to_graph(critic.a_grads) M = Memory(MEMORY_CAPACITY) saver = tf.train.Saver(max_to_keep=100) # Maximum number of recent checkpoints to keep. Defaults to 5. ################################# Determine whether it's a new training or going-on training ###############3 if LOAD_MODEL: # Returns CheckpointState proto from the "checkpoint" file. all_ckpt = tf.train.get_checkpoint_state('./data', 'checkpoint').all_model_checkpoint_paths saver.restore(sess, all_ckpt[-1]) # reload trained parameters into the tf session else: if os.path.isdir(DATA_PATH): shutil.rmtree(DATA_PATH) # recursively remove all files under directory os.mkdir(DATA_PATH) sess.run(tf.global_variables_initializer()) if OUTPUT_GRAPH: tf.summary.FileWriter('logs', graph=sess.graph) var = 0.0000001 # control exploration var_min = 0.000001 ################################# Main loop for training ################################# for i_episode in range(MAX_EPISODES): s = env.reset() ep_r = 0 # the episode reward while True: if RENDER: env.render() a = actor.choose_action(s) a = np.clip(np.random.normal(a, var), -1, 1) # explore using randomness s_, r, done, _ = env.step(a) # r = total 300+ points up to the far end. If the robot falls, it gets -100. 
# when r=-100, that means BipedalWalker has falled to the groud if r == -100: r = -2 ep_r += r transition = np.hstack((s, a, [r], s_)) max_p = np.max(M.tree.tree[-M.tree.capacity:]) M.store(max_p, transition) # when the training reaches certain stage, we lessen the probability of exploration if GLOBAL_STEP.eval(sess) > MEMORY_CAPACITY/20: var = max([var*0.9999, var_min]) # decay the action randomness tree_idx, b_M, ISWeights = M.prio_sample(BATCH_SIZE) # for critic update b_s = b_M[:, :STATE_DIM] b_a = b_M[:, STATE_DIM: STATE_DIM + ACTION_DIM] b_r = b_M[:, -STATE_DIM - 1: -STATE_DIM] b_s_ = b_M[:, -STATE_DIM:] # Critic updates its parameters abs_td = critic.learn(b_s, b_a, b_r, b_s_, ISWeights) # Actor updates its parameters actor.learn(b_s) for i in range(len(tree_idx)): # update priority idx = tree_idx[i] M.update(idx, abs_td[i]) if GLOBAL_STEP.eval(sess) % SAVE_MODEL_ITER == 0: ckpt_path = os.path.join(DATA_PATH, 'DDPG.ckpt') save_path = saver.save(sess, ckpt_path, global_step=GLOBAL_STEP, write_meta_graph=False) print("\nSave Model %s\n" % save_path) if done: if "running_r" not in globals(): running_r = ep_r else: running_r = 0.95*running_r + 0.05*ep_r if running_r > DISPLAY_THRESHOLD: RENDER = True else: RENDER = False done = '| Achieve ' if env.unwrapped.hull.position[0] >= END_POINT else '| -----' print('Episode:', i_episode, done, '| Running_r: %i' % int(running_r), '| Epi_r: %.2f' % ep_r, '| Exploration: %.3f' % var, '| Pos: %.i' % int(env.unwrapped.hull.position[0]), '| LR_A: %.6f' % sess.run(LR_A), '| LR_C: %.6f' % sess.run(LR_C), ) break s = s_ sess.run(INCREASE_GS)
INFO:tensorflow:Restoring parameters from ./data/DDPG.ckpt-1200000 Episode: 0 | Achieve | Running_r: 271 | Epi_r: 271.74 | Exploration: 0.000 | Pos: 88 | LR_A: 0.000000 | LR_C: 0.000000 Episode: 1 | Achieve | Running_r: 271 | Epi_r: 269.24 | Exploration: 0.000 | Pos: 88 | LR_A: 0.000000 | LR_C: 0.000000 Episode: 2 | Achieve | Running_r: 271 | Epi_r: 273.15 | Exploration: 0.000 | Pos: 88 | LR_A: 0.000000 | LR_C: 0.000000 Episode: 3 | Achieve | Running_r: 271 | Epi_r: 271.24 | Exploration: 0.000 | Pos: 88 | LR_A: 0.000000 | LR_C: 0.000000 Episode: 4 | Achieve | Running_r: 271 | Epi_r: 269.90 | Exploration: 0.000 | Pos: 88 | LR_A: 0.000000 | LR_C: 0.000000 Episode: 5 | Achieve | Running_r: 271 | Epi_r: 268.49 | Exploration: 0.000 | Pos: 88 | LR_A: 0.000000 | LR_C: 0.000000 Episode: 6 | Achieve | Running_r: 271 | Epi_r: 271.28 | Exploration: 0.000 | Pos: 88 | LR_A: 0.000000 | LR_C: 0.000000 Episode: 7 | Achieve | Running_r: 271 | Epi_r: 269.52 | Exploration: 0.000 | Pos: 88 | LR_A: 0.000000 | LR_C: 0.000000 Episode: 8 | Achieve | Running_r: 271 | Epi_r: 270.98 | Exploration: 0.000 | Pos: 88 | LR_A: 0.000000 | LR_C: 0.000000 Episode: 9 | Achieve | Running_r: 271 | Epi_r: 270.82 | Exploration: 0.000 | Pos: 88 | LR_A: 0.000000 | LR_C: 0.000000 Episode: 10 | Achieve | Running_r: 271 | Epi_r: 268.31 | Exploration: 0.000 | Pos: 88 | LR_A: 0.000000 | LR_C: 0.000000
MIT
ActorCritic/.ipynb_checkpoints/DDPG-Copy1-checkpoint.ipynb
bluemapleman/Maple-Reinforcement-Learning
ES Module 3

Welcome to Module 3! Last time, we went over:
1. Strings and Integers
2. Arrays
3. Tables

Today we will continue working with tables, and introduce a new procedure called filtering. Before you start, run the following cell.
# Loading our libraries, i.e. tool box for our module import numpy as np from datascience import *
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Paired Programming

Today we want to introduce a new system of work called pair programming. Wikipedia defines pair programming in the following way: Pair programming is an agile software development technique in which two programmers work together at one workstation. One, the driver, writes code while the other, the observer or navigator, reviews each line of code as it is typed in. The two programmers switch roles frequently.

This methodology is well known in the computer science realm, and we want to try it and see how well it works in our little classroom. Hopefully we will all benefit from it by closing the gap between more and less experienced coders, so we can move on to more advanced topics! Additionally, there is always the benefit of having a friend when all hell breaks loose (or the code just will not work).

So after this brief introduction, please team up with a classmate, ideally someone you did not know before and who has a slightly different level of programming experience. Please start now, with one of you taking the controls and the other reviewing the code.

0. Comments

Comments are ways of making your code more human readable. It's good practice to add comments to your code so someone else reading your code can get an idea of what's going on. You can add a comment to your code by preceding it with a `#` symbol. When the computer sees any line preceded by a `#` symbol, it'll ignore it. Here's an example below:
# Calculating the total number of pets in my house. num_cats = 4 num_dogs = 10 total = num_cats + num_dogs total
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Now, write a comment in the cell below explaining what it is doing, then run the cell to see if you're correct.
animals = make_array('Cat', 'Dog', 'Bird', 'Spider') num_legs = make_array(4, 4, 2, 8) my_table = Table().with_columns('Animal', animals, 'Number of Legs', num_legs) my_table
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
1. Tables (Continued) It is time to practice tables again. We want to load the table files you have uploaded last module. This time, you do it by yourself. Load the table "inmates_by_year.csv" and "correctional_population.csv" and assign it to a variable. Remember, to load a table we use `Table.read_table()` and pass the name of the table as an argument to the function.
inmates_by_year = Table.read_table('inmates_by_year.csv') correctional_population = Table.read_table('correctional_population.csv')
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Good job! Now we have all the tables loaded. It is time to extract some information from these tables! In the next several cells, we will guide you through a quick manipulation that extracts information about the entire correctional population using both tables loaded above. In the correctional_population table, we are given the number supervised per 100,000 U.S. adult residents. That means that if we want the approximate size of the entire population under supervision, we need to multiply by 100,000.
# First, extract the column name "Number supervised per 100,000 U.S. adult residents/c" from # the correctional_population table and assign it to the variable provided. c_p = correctional_population.column('Number supervised per 100,000 U.S. adult residents/c') c_p
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
filtering

When you run the cell above, you may notice that the values in our array are actually strings (you can tell because each value has quotation marks around it). However, we can't do mathematical operations on strings, so we'll have to convert this array so it holds integers instead of strings. This is called filtering, or cleaning the data, so we can actually do some work on it. In the following cells, when you see the `# filtering` sign, know that we have yet to cover this topic. Run the following cell to clean the table. We'll go over how this works in a later section of this module. If you have any questions about how it works, feel free to ask any of us!
# filtering def string_to_int(val): return int(val.replace(',', '')) c_p = correctional_population.apply(string_to_int, 'Number supervised per 100,000 U.S. adult residents/c')
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Now, let's continue finding the real value of c_p.
# In this cell, multiply the correctional population column name "Number supervised per 100,000 U.S. adult residents/c" # by 100000 and assign it to a new variable (c_p stands for correctional population) real_c_p = c_p * 100000 real_c_p
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Next we want to assign the Total column from inmates_by_year to a variable in order to be able to operate on it.
total_inmates = inmates_by_year.column('Total') total_inmates
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Again, run the following line to convert the values in `total_inmates` to ints.
# filtering total_inmates = inmates_by_year.apply(string_to_int, 'Total') total_inmates
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Switch positions: the navigator now takes the wheel. Now that we have the variables holding all the information we want to manipulate, we can start digging into it. We want to come up with a scheme that shows the percentage of people who are incarcerated, out of the total supervised population, by year. Before we do that, though, examine your two variables, `total_inmates` and `real_c_p`, and their corresponding tables. Do you foresee any issues with directly comparing these two tables? The `correctional_population` table has a row corresponding to 2000, which `inmates_by_year` does not have. This not only means that the data from our two tables doesn't match up, but also that our arrays have two different lengths. Recall that we cannot do operations on arrays with different lengths. To fix this, run the following cell, in which we drop the value corresponding to the year 2000 from `real_c_p`. Again, if you have questions about how this works, feel free to ask us!
# filtering real_c_p = real_c_p.take(np.arange(1, real_c_p.size)) real_c_p
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
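A tiny, self-contained illustration (with made-up numbers, not the real data) of why the lengths must match and what `take` does:

```python
# Arrays of different lengths cannot be combined element-wise.
import numpy as np

a = np.array([10, 20, 30, 40])   # pretend this is real_c_p, with an extra leading year
b = np.array([1, 2, 4])          # pretend this is total_inmates
# a / b                          # would raise ValueError: the shapes (4,) and (3,) don't broadcast

a_trimmed = a.take(np.arange(1, a.size))   # drop the first element, like dropping year 2000
print(a_trimmed / b)                       # now both have length 3
```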
Now our arrays both correspond to data from the same years and we can do operations with both of them!
# Write a short piece of code that stores the percentage of people incarcerated out of the supervised population # (rel stands for relative, c_p stands for correctional population) inmates_rel_c_p = (total_inmates / real_c_p) * 100 inmates_rel_c_p
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Now, this actually gives us useful information! Why not write it down? Please write down what this information tells you about the judicial infrastructure - we are looking for a more mathematical/dry explanation (rather than an observation of how poor the situation is).
# A simple sentence will suffice, we want to see intuitive understanding. Please call a teacher when done to check! extract_information_shows = "The percentage of people, supervised by the US adult correctional system, who are incarcerated"
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
For a final touch, please sort inmates_rel_c_p in descending order in the next cell. We won't tell you how to sort this time; please check the last lab module to see how to sort a table. It is an important quality of a programmer to be able to reuse code you already have. Hint: Remember that you can only use `sort` on tables. How might you manipulate your array so that you can sort it?
# Please sort inmates_rel_c_p in descending order and print it out inmates_rel_c_p = Table().with_columns('Inmate_percentage', inmates_rel_c_p) inmates_rel_c_p.sort('Inmate_percentage',descending = True) inmates_rel_c_p
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Before starting, please switch positions.

Filtering

Right now, we can't really get much extra information from our tables other than by sorting them. In this section, we'll learn how to filter our data so we can get more useful insights from it. This is especially useful when dealing with larger data sets! For example, say we wanted insights about the total number of inmates after 2012. We can find this out using the `where` function. Check out the cell below for an example of how to use this.
inmates_by_year.where('Year', are.above(2012))
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Notice that `where` takes in two arguments: the name of the column and the condition we are filtering by. Now, try it for yourself! In the cell below, filter `correctional_population` so it only includes years after 2008 (a worked version of this exercise appears right after the predicate table). If you run the following cell, you'll find a complete description of all such conditions (which we'll call predicates) that you can pass into `where`. This information can also be found [here](https://www.inferentialthinking.com/chapters/05/2/selecting-rows.html).
functions = make_array('are.equal_to(Z)', 'are.above(x)', 'are.above_or_equal_to(x)', 'are.below(x)', 'are.below_or_equal_to(x)', 'are.between(x, y)', 'are.strictly_between(x, y)', 'are.between_or_equal_to(x, y)', 'are.containing(S)') descriptions = make_array('Equal to Z', 'Greater than x', 'Greater than or equal to x', 'Below x', 'Less than or equal to x', 'Greater than or equal to x, and less than y', 'Greater than x and less than y', 'Greater than or equal to x, and less than or equal to y', 'Contains the string S') predicates = Table().with_columns('Predicate', functions, 'Description', descriptions) predicates
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
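For reference, here is one way to do the exercise from above (filtering `correctional_population` to years after 2008) using the `are.above` predicate from the table:

```python
# Keep only the rows where Year is strictly greater than 2008.
correctional_population.where('Year', are.above(2008))
```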
Now, we'll be using filtering to gain more insights about our two tables. Before we start, be sure to run the following cell so we can ensure every column we're working with is numerical.
inmates_by_year = inmates_by_year.drop('Total').with_column('Total', total_inmates).select('Year', 'Total', 'Standard error/a') correctional_population = correctional_population.drop('Number supervised per 100,000 U.S. adult residents/c').with_column('Number supervised per 100,000 U.S. adult residents/c', c_p).select('Year', 'Number supervised per 100,000 U.S. adult residents/c', 'U.S. adult residents under correctional supervision ').relabel('U.S. adult residents under correctional supervision ', 'U.S. adult residents under correctional supervision')
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
First, find the mean of the total number of inmates. Hint: You can use the `np.mean()` function on arrays to calculate this.
avg_inmates = np.mean(inmates_by_year.column('Total')) avg_inmates
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Now, filter `inmates_by_year` to find data for the years in which the number of total inmates was under the average.
filtered_inmates = inmates_by_year.where('Total', are.below(avg_inmates)) filtered_inmates
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
What does this tell you about the total inmate population? Write your answer in the cell below.
answer = "YOUR TEXT HERE"
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Before continuing, please switch positions. Now, similarly, find the average number of adults under correctional supervision, and filter the table to find the years in which the number of adults under correctional supervision was under the average.
avg = np.mean(correctional_population.column('Number supervised per 100,000 U.S. adult residents/c')) filtered_c_p = correctional_population.where('Number supervised per 100,000 U.S. adult residents/c', are.below(avg)) filtered_c_p
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Do the years match up? Does this make sense based on the proportions you calculated above in `inmates_rel_c_p`?
answer = "YOUR TEXT HERE"
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Now, from `correctional_population`, filter the table so the value of U.S. adult residents under correctional supervision is 1 in 31. Remember, the values in this column are strings.
c_p_1_in_34 = correctional_population.where('U.S. adult residents under correctional supervision', are.containing('1 in 31')) c_p_1_in_34
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Now, we have one last challenge exercise. Before doing this, finish the challenge exercises from last module. We highly encourage you to work with your partner on this one.In the following cell, find the year with the max number of supervised adults for which the proportion of US adult residents under correctional supervision was 1 in 32.
one_in_32 = correctional_population.where('U.S. adult residents under correctional supervision', are.containing('1 in 32')) one_in_32_sorted = one_in_32.sort('Number supervised per 100,000 U.S. adult residents/c', descending = True) year = one_in_32_sorted.column('Year').item(0) year
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Reflect Tables into SQLAlchemy ORM
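The cells below also use `datetime`, `numpy`, `pandas`, and `matplotlib`, which are not imported anywhere in this excerpt; presumably the original notebook imports them in an earlier cell. A minimal, assumed set of imports:

```python
# Assumed imports for the climate analysis cells below (not shown in the excerpt).
%matplotlib inline
import datetime as dt
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```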
# Python SQL toolkit and Object Relational Mapper import sqlalchemy from sqlalchemy.ext.automap import automap_base from sqlalchemy.orm import Session from sqlalchemy import create_engine, func from sqlalchemy import create_engine, inspect engine = create_engine("sqlite:///Resources/hawaii.sqlite") # reflect an existing database into a new model Base = automap_base() # reflect the tables Base.prepare(engine, reflect=True) # We can view all of the classes that automap found Base.classes.keys() # Save references to each table Measurement = Base.classes.measurement Station = Base.classes.station # Create our session (link) from Python to the DB session = Session(engine) m_table = session.query(Measurement).first() m_table.__dict__ #measurements table rows for row in session.query(Measurement.id, Measurement.date, Measurement.tobs, Measurement.prcp, Measurement.station).limit(10).all(): print(row) s_table = session.query(Station).first() s_table.__dict__ for row in session.query(Station.id, Station.name, Station.station, Station.longitude, Station.latitude, Station.elevation).all(): print(row)
(1, 'WAIKIKI 717.2, HI US', 'USC00519397', -157.8168, 21.2716, 3.0) (2, 'KANEOHE 838.1, HI US', 'USC00513117', -157.8015, 21.4234, 14.6) (3, 'KUALOA RANCH HEADQUARTERS 886.9, HI US', 'USC00514830', -157.8374, 21.5213, 7.0) (4, 'PEARL CITY, HI US', 'USC00517948', -157.9751, 21.3934, 11.9) (5, 'UPPER WAHIAWA 874.3, HI US', 'USC00518838', -158.0111, 21.4992, 306.6) (6, 'WAIMANALO EXPERIMENTAL FARM, HI US', 'USC00519523', -157.71139, 21.33556, 19.5) (7, 'WAIHEE 837.5, HI US', 'USC00519281', -157.84888999999998, 21.45167, 32.9) (8, 'HONOLULU OBSERVATORY 702.2, HI US', 'USC00511918', -157.9992, 21.3152, 0.9) (9, 'MANOA LYON ARBO 785.2, HI US', 'USC00516128', -157.8025, 21.3331, 152.4)
MIT
climate_starter.ipynb
RShailza/sqlalchemy-challenge
OR
# Create the inspector and connect it to the engine inspector = inspect(engine) # Collect the names of tables within the database inspector.get_table_names() # Using the inspector to print the column names within the 'measurement' table and its types columns1 = inspector.get_columns('measurement') for column in columns1: print(column["name"], column["type"]) # Using the inspector to print the column names within the 'station' table and its types columns2 = inspector.get_columns('station') for column in columns2: print(column["name"], column["type"])
id INTEGER station TEXT name TEXT latitude FLOAT longitude FLOAT elevation FLOAT
MIT
climate_starter.ipynb
RShailza/sqlalchemy-challenge
Exploratory Climate Analysis -------------------------------------------------------------------------------------------------------------------------- ********************* Precipitation Analysis ********************* --------------------------------------------------------------------------------------------------------------------------
# Design a query to retrieve the last 12 months of precipitation data and plot the results #calulation the last date. session.query(Measurement.date).order_by(Measurement.date.desc()).first() # Calculate the date 1 year ago from the last data point in the database year_ago_date= dt.date(2017, 8, 23) - dt.timedelta(days=366) print('Query Date:', year_ago_date) # Perform a query to retrieve the data and precipitation scores prcp_date = session.query(Measurement.date, Measurement.prcp).\ filter(func.strftime('%Y-%m-%d',Measurement.date) > year_ago_date).order_by(Measurement.date).all() prcp_date # Save the query results as a Pandas DataFrame and set the index to the date column prcp_df = pd.DataFrame(prcp_date, columns=['date', 'prcp']) prcp_df.set_index('date', inplace = True) # Sort the dataframe by date sort_df = prcp_df.sort_values('date') sort_df prcp_df.plot(title="Precipitation Analysis", figsize=(12,8)) plt.legend(loc='upper center') #plt.savefig("Images/precipitation.png") plt.tight_layout() plt.show() # Use Pandas to calcualte the summary statistics for the precipitation data prcp_df.describe()
_____no_output_____
MIT
climate_starter.ipynb
RShailza/sqlalchemy-challenge
-------------------------------------------------------------------------------------------------------------------------- ********************* Station Analysis ********************* --------------------------------------------------------------------------------------------------------------------------
# Design a query to show how many stations are available in this dataset? number_of_stations = session.query(Station).count() number_of_stations # What are the most active stations? (i.e. what stations have the most rows)? # List the stations and the counts in descending order. active_stations = (session.query(Measurement.station, func.count(Measurement.station)) .group_by(Measurement.station) .order_by(func.count(Measurement.station).desc()).all()) active_stations # Using the station id from the previous query, calculate the lowest temperature recorded, # highest temperature recorded, and average temperature of the most active station? tobs = [Measurement.station, func.min(Measurement.tobs), func.max(Measurement.tobs),func.avg(Measurement.tobs)] activeStation = session.query(*tobs).filter(Measurement.station=='USC00519281').all() activeStation pd.DataFrame(activeStation, columns=['station', 'min_temp', 'max_temp', 'avg_temp']).set_index('station') # Choose the station with the highest number of temperature observations. # Query the last 12 months of temperature observation data for this station and plot the results as a histogram #year_high # Choose the station with the highest number of temperature observations. # Query the last 12 months of temperature observation data for this station and plot the results as a histogram year_high_temp =(session.query(Measurement.date,(Measurement.tobs)) .filter(func.strftime(Measurement.date) > year_ago_date) .filter(Measurement.station=='USC00519281') .all()) year_high_temp tobs_df = pd.DataFrame(year_high_temp, columns=['date', 'temp']) tobs_df.set_index('date', inplace = True) plt.rcParams['figure.figsize']=(10,7) plt.hist(tobs_df['temp'], bins=12, alpha=0.6 ) plt.title('Temperature Observation Aug 2016 - Aug 2017\nHonolulu, Hawaii',fontsize=20) plt.xlabel('Temperature (F)',fontsize=16) plt.ylabel('Frequency',fontsize=16) plt.xticks(fontsize=12) plt.yticks(fontsize=12) plt.ylim(0,70) plt.show()
_____no_output_____
MIT
climate_starter.ipynb
RShailza/sqlalchemy-challenge
Bonus Challenge Assignment
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d' # and return the minimum, average, and maximum temperatures for that range of dates def calc_temps(start_date, end_date): """TMIN, TAVG, and TMAX for a list of dates. Args: start_date (string): A date string in the format %Y-%m-%d end_date (string): A date string in the format %Y-%m-%d Returns: TMIN, TAVE, and TMAX """ return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\ filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all() # function usage example print(calc_temps('2012-02-28', '2012-03-05')) # Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax # for your trip using the previous year's data for those same dates. # Plot the results from your previous query as a bar chart. # Use "Trip Avg Temp" as your Title # Use the average temperature for the y value # Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr) # Calculate the total amount of rainfall per weather station for your trip dates using the previous year's matching dates. # Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation # Create a query that will calculate the daily normals # (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day) def daily_normals(date): """Daily Normals. Args: date (str): A date string in the format '%m-%d' Returns: A list of tuples containing the daily normals, tmin, tavg, and tmax """ sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)] return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all() daily_normals("01-01") # calculate the daily normals for your trip # push each tuple of calculations into a list called `normals` # Set the start and end date of the trip # Use the start and end date to create a range of dates # Stip off the year and save a list of %m-%d strings # Loop through the list of %m-%d strings and calculate the normals for each date # Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index # Plot the daily normals as an area plot with `stacked=False`
_____no_output_____
MIT
climate_starter.ipynb
RShailza/sqlalchemy-challenge
ROUGH WORK FOR APP.PY
# from flask import Flask, jsonify # def precipitation(): # # Create session (link) from Python to the DB # session = Session(engine) # # Query Measurement # results = (session.query(Measurement.date, Measurement.prcp) # .order_by(Measurement.date)) # # Create a dictionary # precipitation_date = [] # for each_row in results: # dt_dict = {} # dt_dict["date"] = each_row.date # dt_dict["prcp"] = each_row.prcp # precipitation_date.append(dt_dict) # # return jsonify(precipitation_date) # return(precipitation_date) # precipitation() # #def tobs(): # #create a session # session3 = Session(engine) # # Query measurement for latest datre # last_date = session3.query(Measurement.date).order_by(Measurement.date.desc()).first() # print(last_date) # last_12mnth = (dt.datetime.strptime(last_date[0],'%Y-%m-%d') -dt.timedelta(days=365)).date() # print(last_12mnth) # # year_ago_date= dt.date(2017, 8, 23) - dt.timedelta(days=366) # # # print('Query Date:', year_ago_date) # tobs_results = session3.query(Measurement.date, Measurement.tobs).\ # filter(Measurement.date >= last_12mnth).order_by(Measurement.date).all() # # Create a list of dicts with `date` and `tobs` as the keys and values # tobs_totals = [] # for result in tobs_results: # row = {} # row["date"] = result[0] # row["tobs"] = result[1] # tobs_totals.append(row) # tobs_totals # def start_date(start): # # print("start_date status:OK") # #convert the tsring from user to date # start_date = dt.datetime.strptime(start, '%Y-%m-%d').date() # last_date_dd = (dt.datetime.strptime(last_date[0][0], '%Y-%m-%d')).date() # first_date_dd = (dt.datetime.strptime(first_date[0][0], '%Y-%m-%d')).date() # #if fgiven start_date greater than last or lesser than first available date in dataset, print the following # if start_date > last_date_dd or start_date < first_date_dd: # return(f"Select date range between {first_date[0][0]} and {last_date[0][0]}") # else: # #Return a JSON list of the minimum temperature, the average temperature, # #and the max temperature for a given start range. # start_min_max_temp = session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs),\ # func.max(Measurement.tobs)).filter(Measurement.date >= start_date).all() # start_date_data = list(np.ravel(start_min_max_temp)) # #return jsonify(start_date_data)
_____no_output_____
MIT
climate_starter.ipynb
RShailza/sqlalchemy-challenge
Introduction to NLTK

We have seen how to do [some basic text processing in Python](https://github.com/Mashimo/datascience/blob/master/03-NLP/helloworld-nlp.ipynb); now we introduce an open-source framework for natural language processing that can further help us work with human languages: [NLTK (Natural Language ToolKit)](http://www.nltk.org/).

Tokenise a text

Let's start with a simple text in a Python string:
sampleText1 = "The Elephant's 4 legs: THE Pub! You can't believe it or can you, the believer?" sampleText2 = "Pierre Vinken, 61 years old, will join the board as a nonexecutive director Nov. 29."
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
Tokens The basic atomic parts of each text are the tokens. A token is the NLP name for a sequence of characters that we want to treat as a group. We have seen how we can extract tokens by splitting the text at the blank spaces. NLTK has a function word_tokenize() for it:
import nltk s1Tokens = nltk.word_tokenize(sampleText1) s1Tokens len(s1Tokens)
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
21 tokens were extracted, including words and punctuation. Note that the tokens are different from what a split by blank spaces would produce: NLTK considers "can't" to be TWO tokens, "ca" and "n't" (= "not"), while a tokeniser that splits the text by spaces would treat it as a single token: "can't". Let's see another example:
s2Tokens = nltk.word_tokenize(sampleText2) s2Tokens
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
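To make the difference concrete, compare a plain split on blank spaces with the NLTK tokeniser on the first sample sentence (both variables are defined above):

# Splitting on blank spaces: punctuation stays attached and "can't" remains one token
splitTokens = sampleText1.split(' ')
print(len(splitTokens), splitTokens)

# NLTK tokeniser: punctuation is separated and "can't" becomes "ca" + "n't"
print(len(s1Tokens), s1Tokens)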
And we can apply it to an entire book, "The Prince" by Machiavelli, which we used last time:
# If you would like to work with the raw text you can use 'bookRaw' with open('../datasets/ThePrince.txt', 'r') as f: bookRaw = f.read() bookTokens = nltk.word_tokenize(bookRaw) bookText = nltk.Text(bookTokens) # special format nBookTokens= len(bookTokens) # or alternatively len(bookText) print ("*** Analysing book ***") print ("The book is {} chars long".format (len(bookRaw))) print ("The book has {} tokens".format (nBookTokens))
*** Analysing book *** The book is 300814 chars long The book has 59792 tokens
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
As mentioned above, the NLTK tokeniser works in a more sophisticated way than just splitting by spaces, therefore this time we got more tokens. Sentences NLTK also has a function to tokenise a text not into words but into sentences.
text1 = "This is the first sentence. A liter of milk in the U.S. costs $0.99. Is this the third sentence? Yes, it is!" sentences = nltk.sent_tokenize(text1) len(sentences) sentences
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
As you can see, it does not split blindly after each full stop but checks whether the stop is part of an acronym (U.S.) or a number (0.99). It also correctly splits sentences after question or exclamation marks, but not after commas.
sentences = nltk.sent_tokenize(bookRaw) # extract sentences nSent = len(sentences) print ("The book has {} sentences".format (nSent)) print ("and each sentence has in average {} tokens".format (nBookTokens / nSent))
The book has 1416 sentences and each sentence has in average 42.22598870056497 tokens
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
Most common tokens What are the 20 most frequently occurring (unique) tokens in the text? What is their frequency? The NLTK FreqDist class is used to encode “frequency distributions”, which count the number of times that something occurs, for example a token. Its `most_common()` method then returns a list of tuples where each tuple is of the form `(token, frequency)`. The list is sorted in descending order of frequency.
def get_top_words(tokens): # Calculate frequency distribution fdist = nltk.FreqDist(tokens) return fdist.most_common() topBook = get_top_words(bookTokens) # Output top 20 words topBook[:20]
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
The comma is the most common token: we need to remove the punctuation. Most common alphanumeric tokens We can use `isalpha()` to check whether a token is a word and not punctuation.
topWords = [(freq, word) for (word,freq) in topBook if word.isalpha() and freq > 400] topWords
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
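A FreqDist can also be queried like a dictionary for individual tokens. Rebuilding the distribution over the raw book tokens shows, for instance, that capitalised and lower-case variants are counted separately, which is one reason to lowercase the text in the next step:

fdistBook = nltk.FreqDist(bookTokens)
print("'the' occurs {} times, 'The' occurs {} times".format(fdistBook['the'], fdistBook['The']))
print("Total tokens counted:", fdistBook.N())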
We can also remove any capital letters before tokenising:
def preprocessText(text, lowercase=True): if lowercase: tokens = nltk.word_tokenize(text.lower()) else: tokens = nltk.word_tokenize(text) return [word for word in tokens if word.isalpha()] bookWords = preprocessText(bookRaw) topBook = get_top_words(bookWords) # Output top 20 words topBook[:20] print ("*** Analysing book ***") print ("The text has now {} words (tokens)".format (len(bookWords)))
*** Analysing book *** The text has now 52202 words (tokens)
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
Now we have removed the punctuation and the capital letters, but the most common token is "the", not a significant word ... As we saw last time, these are so-called **stop words** that are very common and are normally stripped from a text when doing this kind of analysis. Meaningful most common tokens A simple approach could be to filter the tokens that have a length greater than 5 and a frequency of more than 80.
meaningfulWords = [word for (word,freq) in topBook if len(word) > 5 and freq > 80] sorted(meaningfulWords)
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
This would work, but it would also leave out tokens such as `I` and `you`, which are actually significant. The better approach - as we saw earlier - is to remove stop words using external files containing them. NLTK has a corpus of stop words in several languages:
from nltk.corpus import stopwords stopwordsEN = set(stopwords.words('english')) # english language betterWords = [w for w in bookWords if w not in stopwordsEN] topBook = get_top_words(betterWords) # Output top 20 words topBook[:20]
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
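A practical note: the stopwords corpus, like the tokeniser models and the resources used later in this notebook, is not bundled with the nltk package itself and has to be downloaded once per environment. These are the usual resource names:

# One-off downloads of the NLTK resources used in this notebook
nltk.download('punkt')        # models used by word_tokenize / sent_tokenize
nltk.download('stopwords')    # stop word lists for several languages
nltk.download('wordnet')      # needed by the WordNet lemmatiser below
nltk.download('averaged_perceptron_tagger')  # needed by pos_tag below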
Now we have excluded words such as `the`, but we can improve the list further by looking at semantically similar words, such as the plural and singular versions of the same word.
'princes' in betterWords betterWords.count("prince") + betterWords.count("princes")
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
Stemming Above, in the list of words, we have both `prince` and `princes`, which are respectively the singular and plural versions of the same word (they share the same **stem**). The same happens with verb conjugation (`love` and `loving` are considered different words but are actually *inflections* of the same verb). A **stemmer** is a tool that reduces such inflectional forms to their stem, base or root form, and NLTK has several of them (each with a different heuristic algorithm).
input1 = "List listed lists listing listings" words1 = input1.lower().split(' ') words1
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
And now we apply one of the NLTK stemmers, the Porter stemmer:
porter = nltk.PorterStemmer() [porter.stem(t) for t in words1]
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
As you can see, all 5 different words have been reduced to the same stem and would now be the same lexical token.
stemmedWords = [porter.stem(w) for w in betterWords] topBook = get_top_words(stemmedWords) topBook[:20] # Output top 20 words
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
Now the word `princ` is counted 281 times, exactly the sum of prince and princes. A note here: stemming usually refers to a crude heuristic process that chops off the ends of words in the hope of getting it right most of the time, and it often includes the removal of derivational affixes; `Prince` and `princes` both become `princ`. A different flavour is **lemmatisation**, which we will see in a second, but first a note about stemming in languages other than English. Stemming in other languages **`Snowball`** is an improvement created by Porter: a language for creating stemmers, with rules for many more languages than English. For example Italian:
from nltk.stem.snowball import SnowballStemmer stemmerIT = SnowballStemmer("italian") inputIT = "Io ho tre mele gialle, tu hai una mela gialla e due pere verdi" wordsIT = inputIT.split(' ') [stemmerIT.stem(w) for w in wordsIT]
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
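The full list of languages supported by the Snowball implementation can be inspected on the class itself, and the English flavour is used exactly like the Porter stemmer:

# Languages available in the NLTK Snowball implementation
print(SnowballStemmer.languages)

# The English Snowball stemmer applied to the earlier example words
stemmerEN = SnowballStemmer("english")
print([stemmerEN.stem(w) for w in words1])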
Lemma Lemmatization usually refers to doing things properly, with the use of a vocabulary and morphological analysis of words, normally aiming to remove inflectional endings only and to return the **base or dictionary form of a word, which is known as the lemma**. While a stemmer operates on a single word without knowledge of the context, a lemmatiser can take the context into consideration. NLTK also has a built-in lemmatiser, so let's see it in action:
from nltk.stem import WordNetLemmatizer lemmatizer = WordNetLemmatizer() words1 [lemmatizer.lemmatize(w, 'n') for w in words1] # n = nouns
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
We tell the lemmatiser that the words are nouns. In this case it maps words such as list (singular noun) and lists (plural noun) to the same lemma, but leaves the other words as they are.
[lemmatizer.lemmatize(w, 'v') for w in words1] # v = verbs
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
We get a different result if we say that the words are verbs: they all have the same lemma, since they could all be different inflections or conjugations of the same verb. The word types that can be used are: 'n' = noun, 'v' = verb, 'a' = adjective, 'r' = adverb.
words2 = ['good', 'better'] [porter.stem(w) for w in words2] [lemmatizer.lemmatize(w, 'a') for w in words2]
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
It works with different adjectives; it doesn't look only at prefixes and suffixes. You might wonder why stemmers are used at all, instead of always using lemmatisers: stemmers are much simpler, smaller and faster, and for many applications they are good enough. Now we lemmatise the book:
lemmatisedWords = [lemmatizer.lemmatize(w, 'n') for w in betterWords] topBook = get_top_words(lemmatisedWords) topBook[:20] # Output top 20 words
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
Yes, the lemma is now `prince`. But note that we treated all words in the book as nouns, while the proper way would be to apply the correct type to each single word. Part of speech (PoS) In traditional grammar, a part of speech (abbreviated PoS or POS) is a category of words which have similar grammatical properties. For example, an adjective (red, big, quiet, ...) describes properties, while a verb (throw, walk, have) describes actions or states. Commonly listed parts of speech are noun, verb, adjective, adverb, pronoun, preposition, conjunction, and interjection.
text1 = "Children shouldn't drink a sugary drink before bed." tokensT1 = nltk.word_tokenize(text1) nltk.pos_tag(tokensT1)
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
The NLTK function `pos_tag()` will tag each token with its estimated PoS, using the Penn Treebank tag set (a few dozen categories). You can check what each tag acronym means using the NLTK help function:
nltk.help.upenn_tagset('RB')
RB: adverb occasionally unabatingly maddeningly adventurously professedly stirringly prominently technologically magisterially predominately swiftly fiscally pitilessly ...
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
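The PoS tags also give a way to address the earlier caveat about lemmatising every word as a noun: each Penn Treebank tag can be mapped to the corresponding WordNet category before calling the lemmatiser. The mapping below is a common simplification, not something provided by NLTK itself:

def penn_to_wordnet(tag):
    # Map the first letter of a Penn Treebank tag to a WordNet PoS ('n' by default)
    if tag.startswith('J'):
        return 'a'   # adjective
    if tag.startswith('V'):
        return 'v'   # verb
    if tag.startswith('R'):
        return 'r'   # adverb
    return 'n'       # noun (default)

taggedT1 = nltk.pos_tag(tokensT1)
print([lemmatizer.lemmatize(word, penn_to_wordnet(tag)) for (word, tag) in taggedT1])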
What are the most common PoS tags in The Prince?
tokensAndPos = nltk.pos_tag(bookTokens) posList = [thePOS for (word, thePOS) in tokensAndPos] fdistPos = nltk.FreqDist(posList) fdistPos.most_common(5) nltk.help.upenn_tagset('IN')
IN: preposition or conjunction, subordinating astride among uppon whether out inside pro despite on by throughout below within for towards near behind atop around if like until below next into if beside ...
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
It's not nouns (NN) but prepositions and subordinating conjunctions (IN). Extra note: Parsing the grammar structure Words can be ambiguous, and sometimes it is not easy to tell which PoS a word is: for example, in the sentence "visiting aunts can be a nuisance", is visiting a verb or an adjective? Tagging a PoS depends on the context, which can be ambiguous. Making sense of a sentence is easier if it follows a well-defined grammatical structure, such as: subject + verb + object. NLTK allows us to define a formal grammar which can then be used to parse a text. The NLTK ChartParser is a procedure for finding one or more trees (sentences have an internal organisation that can be represented using a tree) corresponding to a grammatically well-formed sentence.
# Parsing sentence structure text2 = nltk.word_tokenize("Alice loves Bob") grammar = nltk.CFG.fromstring(""" S -> NP VP VP -> V NP NP -> 'Alice' | 'Bob' V -> 'loves' """) parser = nltk.ChartParser(grammar) trees = parser.parse_all(text2) for tree in trees: print(tree)
(S (NP Alice) (VP (V loves) (NP Bob)))
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
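A grammar can also expose ambiguity: when more than one tree matches, the parser returns all of them. The classic prepositional-phrase attachment example from the NLTK book illustrates this:

# "I shot an elephant in my pajamas" has two readings: the PP attaches to the NP or to the VP
groucho_grammar = nltk.CFG.fromstring("""
S -> NP VP
PP -> P NP
NP -> Det N | Det N PP | 'I'
VP -> V NP | VP PP
Det -> 'an' | 'my'
N -> 'elephant' | 'pajamas'
V -> 'shot'
P -> 'in'
""")
sent = nltk.word_tokenize("I shot an elephant in my pajamas")
for tree in nltk.ChartParser(groucho_grammar).parse_all(sent):
    print(tree)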
Optimizing Code: Holiday Gifts In the last example, you learned that using vectorized operations and more efficient data structures can optimize your code. Let's use these tips for one more example. Say your online gift store has one million users who each listed a gift on a wish list. You have the prices for each of these gifts stored in `gift_costs.txt`. For the holidays, you're going to give each customer their wish list gift for free if it is under 25 dollars. Now, you want to calculate the total cost of all gifts under 25 dollars to see how much you'd spend on free gifts. Here's one way you could've done it.
import time import numpy as np with open('gift_costs.txt') as f: gift_costs = f.read().split('\n') gift_costs = np.array(gift_costs).astype(int) # convert string to int start = time.time() total_price = 0 for cost in gift_costs: if cost < 25: total_price += cost * 1.08 # add cost after tax print(total_price) print('Duration: {} seconds'.format(time.time() - start))
32765421.24 Duration: 6.560739994049072 seconds
MIT
udacity_ml/software_engineering/holiday_gifts/optimizing_code_holiday_gifts.ipynb
issagaliyeva/machine_learning
Here you iterate through each cost in the list and check if it's less than 25. If so, you add the cost to the total price after tax. This works, but there is a much faster way to do it. Can you refactor this to run in under half a second? Refactor Code **Hint:** Using numpy makes it very easy to select all the elements in an array that meet a certain condition and then perform operations on them all at once. You can then find the sum of what those values end up being.
start = time.time() total_price = np.sum(gift_costs[gift_costs < 25] * 1.08) # compute the total price print(total_price) print('Duration: {} seconds'.format(time.time() - start))
32765421.24 Duration: 0.09631609916687012 seconds
MIT
udacity_ml/software_engineering/holiday_gifts/optimizing_code_holiday_gifts.ipynb
issagaliyeva/machine_learning
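The speed-up comes from the boolean mask `gift_costs < 25`, which selects every qualifying element in one vectorized step instead of a Python-level loop. A tiny illustration on a made-up array:

costs = np.array([10, 30, 5, 24, 99])
mask = costs < 25
print(mask)                        # [ True False  True  True False]
print(costs[mask])                 # [10  5 24]
print((costs[mask] * 1.08).sum())  # total after tax for the selected gifts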
**Import Libraries and modules**
# https://keras.io/ !pip install -q keras import keras import numpy as np from keras.models import Sequential from keras.layers import Dense, Dropout, Activation, Flatten, Add from keras.layers import Convolution2D, MaxPooling2D from keras.utils import np_utils from keras.datasets import mnist
Using TensorFlow backend.
MIT
1st_DNN.ipynb
joyjeni/-Learn-Artificial-Intelligence-with-TensorFlow
Load pre-shuffled MNIST data into train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data() print (X_train.shape) from matplotlib import pyplot as plt %matplotlib inline plt.imshow(X_train[0]) X_train = X_train.reshape(X_train.shape[0], 28, 28,1) X_test = X_test.reshape(X_test.shape[0], 28, 28,1) X_train = X_train.astype('float32') X_test = X_test.astype('float32') X_train /= 255 X_test /= 255 y_train[:10] # Convert 1-dimensional class arrays to 10-dimensional class matrices Y_train = np_utils.to_categorical(y_train, 10) Y_test = np_utils.to_categorical(y_test, 10) Y_train[:10] from keras.layers import Activation model = Sequential() model.add(Convolution2D(32, 3, 3, activation='relu', input_shape=(28,28,1))) model.add(Convolution2D(10, 1, activation='relu')) model.add(Convolution2D(10, 26)) model.add(Flatten()) model.add(Activation('softmax')) model.summary() model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit(X_train, Y_train, batch_size=32, nb_epoch=10, verbose=1) score = model.evaluate(X_test, Y_test, verbose=0) print(score) y_pred = model.predict(X_test) print(y_pred[:9]) print(y_test[:9]) layer_dict = dict([(layer.name, layer) for layer in model.layers]) import numpy as np from matplotlib import pyplot as plt from keras import backend as K %matplotlib inline # util function to convert a tensor into a valid image def deprocess_image(x): # normalize tensor: center on 0., ensure std is 0.1 x -= x.mean() x /= (x.std() + 1e-5) x *= 0.1 # clip to [0, 1] x += 0.5 x = np.clip(x, 0, 1) # convert to RGB array x *= 255 #x = x.transpose((1, 2, 0)) x = np.clip(x, 0, 255).astype('uint8') return x def vis_img_in_filter(img = np.array(X_train[2]).reshape((1, 28, 28, 1)).astype(np.float64), layer_name = 'conv2d_14'): layer_output = layer_dict[layer_name].output img_ascs = list() for filter_index in range(layer_output.shape[3]): # build a loss function that maximizes the activation # of the nth filter of the layer considered loss = K.mean(layer_output[:, :, :, filter_index]) # compute the gradient of the input picture wrt this loss grads = K.gradients(loss, model.input)[0] # normalization trick: we normalize the gradient grads /= (K.sqrt(K.mean(K.square(grads))) + 1e-5) # this function returns the loss and grads given the input picture iterate = K.function([model.input], [loss, grads]) # step size for gradient ascent step = 5. img_asc = np.array(img) # run gradient ascent for 20 steps for i in range(20): loss_value, grads_value = iterate([img_asc]) img_asc += grads_value * step img_asc = img_asc[0] img_ascs.append(deprocess_image(img_asc).reshape((28, 28))) if layer_output.shape[3] >= 35: plot_x, plot_y = 6, 6 elif layer_output.shape[3] >= 23: plot_x, plot_y = 4, 6 elif layer_output.shape[3] >= 11: plot_x, plot_y = 2, 6 else: plot_x, plot_y = 1, 2 fig, ax = plt.subplots(plot_x, plot_y, figsize = (12, 12)) ax[0, 0].imshow(img.reshape((28, 28)), cmap = 'gray') ax[0, 0].set_title('Input image') fig.suptitle('Input image and %s filters' % (layer_name,)) fig.tight_layout(pad = 0.3, rect = [0, 0, 0.9, 0.9]) for (x, y) in [(i, j) for i in range(plot_x) for j in range(plot_y)]: if x == 0 and y == 0: continue ax[x, y].imshow(img_ascs[x * plot_y + y - 1], cmap = 'gray') ax[x, y].set_title('filter %d' % (x * plot_y + y - 1)) vis_img_in_filter()
_____no_output_____
MIT
1st_DNN.ipynb
joyjeni/-Learn-Artificial-Intelligence-with-TensorFlow
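Note that the model above uses the legacy Keras 1 call signatures (`Convolution2D(32, 3, 3, ...)`, `nb_epoch=`). Under the Keras 2 API the same architecture would look roughly like the sketch below (a hedged equivalent, assuming a Keras 2 environment is available):

from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Activation

model2 = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    Conv2D(10, (1, 1), activation='relu'),
    Conv2D(10, (26, 26)),
    Flatten(),
    Activation('softmax'),
])
model2.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# model2.fit(X_train, Y_train, batch_size=32, epochs=10, verbose=1)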
Result 1 Replicating results from Boise Pre-K Program Evaluation 2017 Page 7
print("Fall LSF No Vista Pre-k = ", getMean(nonprek.copy(), 'Fall_LSF')) print("Fall LSF Vista Pre-k = ", getMean(prekst.copy(), 'Fall_LSF')) print("Fall LNF No Vista Pre-k = ", getMean(nonprek.copy(), 'Fall_LNF')) print("Fall LNF Vista Pre-k = ", getMean(prekst.copy(), 'Fall_LNF')) print("Winter LSF No Vista Pre-k = ", getMean(nonprek.copy(), 'Winter_LSF')) print("Winter LSF Vista Pre-k = ", getMean(prekst.copy(), 'Winter_LSF')) print("Winter LNF No Vista Pre-k = ", getMean(nonprek.copy(), 'Winter_LNF')) print("Winter LNF Vista Pre-k = ", getMean(prekst.copy(), 'Winter_LNF')) print("Spring LSF No Vista Pre-k = ", getMean(nonprek.copy(), 'Spring_LSF')) print("Spring LSF Vista Pre-k = ", getMean(prekst.copy(), 'Spring_LSF')) print("Spring LNF No Vista Pre-k = ", getMean(nonprek.copy(), 'Spring_LNF')) print("Spring LNF Vista Pre-k = ", getMean(prekst.copy(), 'Spring_LNF'))
Spring LSF No Vista Pre-k = 40.38297872340426 Spring LSF Vista Pre-k = 45.0 Spring LNF No Vista Pre-k = 41.07446808510638 Spring LNF Vista Pre-k = 46.26086956521739
MIT
project.ipynb
gcaracas/ds_project
Result 2 Replicating results from Boise Pre-K Program Evaluation 2017 Page 9
# We need to arrange our StudentID from float to int, and then to string def convertToInt(df_t): strTbl=[] for a in df_t: strTbl.append(int(a)) return strTbl def getListValues(dataset, firstSelector, secondSelector): tbl = dataset.reset_index() data = tbl.groupby([firstSelector])[[secondSelector]].count() data = data.reset_index() data['firstSelector']=convertToInt(data[firstSelector].values) return list(data[secondSelector]) def getBelowAverages(dataset): tbl = dataset.reset_index() data = tbl.groupby(['Fall_GRTR_Level'])[['Student_ID']].count() data = data.reset_index() data['Fall_GRTR_Level']=convertToInt(data['Fall_GRTR_Level'].values) fall = (list(data['Student_ID']))[0] data = tbl.groupby(['Winter_GRTR_Level'])[['Student_ID']].count() data = data.reset_index() data['Winter_GRTR_Level']=convertToInt(data['Winter_GRTR_Level'].values) winter = (list(data['Student_ID']))[0] data = tbl.groupby(['Spring_GRTR_Level'])[['Student_ID']].count() data = data.reset_index() data['Spring_GRTR_Level']=convertToInt(data['Spring_GRTR_Level'].values) spring = (list(data['Student_ID']))[0] return [fall, winter, spring] def getAverages(dataset): tbl = dataset.reset_index() data = tbl.groupby(['Fall_GRTR_Level'])[['Student_ID']].count() data = data.reset_index() data['Fall_GRTR_Level']=convertToInt(data['Fall_GRTR_Level'].values) fall = (list(data['Student_ID']))[1] data = tbl.groupby(['Winter_GRTR_Level'])[['Student_ID']].count() data = data.reset_index() data['Winter_GRTR_Level']=convertToInt(data['Winter_GRTR_Level'].values) winter = (list(data['Student_ID']))[1] data = tbl.groupby(['Spring_GRTR_Level'])[['Student_ID']].count() data = data.reset_index() data['Spring_GRTR_Level']=convertToInt(data['Spring_GRTR_Level'].values) spring = (list(data['Student_ID']))[1] return [fall, winter, spring] def getAboveAverages(dataset): tbl = dataset.reset_index() data = tbl.groupby(['Fall_GRTR_Level'])[['Student_ID']].count() data = data.reset_index() data['Fall_GRTR_Level']=convertToInt(data['Fall_GRTR_Level'].values) fall = (list(data['Student_ID']))[2] data = tbl.groupby(['Winter_GRTR_Level'])[['Student_ID']].count() data = data.reset_index() data['Winter_GRTR_Level']=convertToInt(data['Winter_GRTR_Level'].values) winter = (list(data['Student_ID']))[2] data = tbl.groupby(['Spring_GRTR_Level'])[['Student_ID']].count() data = data.reset_index() data['Spring_GRTR_Level']=convertToInt(data['Spring_GRTR_Level'].values) spring = (list(data['Student_ID']))[2] return [fall, winter, spring] fig, ax = plt.subplots(figsize=(12, 7)) N = 3 #Number of groups width = 0.40 # the width of the bars ind = np.arange(N) # the x locations for the groups x=[0, 2, 4] fall = getListValues(prekst.copy(), 'Fall_GRTR_Level','Student_ID') ba = getBelowAverages(prekst.copy()) av = getAverages(prekst.copy()) aa = getAboveAverages(prekst.copy()) #ax = plt.figure().gca() ax.yaxis.set_major_locator(MaxNLocator(integer=True)) p1 = ax.bar(x,ba, width, color='deepskyblue', bottom=0) p2 = ax.bar(ind*2 + width , av, width,color='orangered') p3 = ax.bar(ind*2 + width*2, aa, width,color='royalblue') ax.set_title('Cohort 1 Get Ready To Read Levels (2015-2016)') ax.set_xticks((ind*2) + width/2) ax.set_xticklabels(('Fall', 'Winter', 'Spring')) ax.set_ylabel('number of students') ax.grid(True) ax.legend((p1[0], p2[0], p3[0]), ('below average', 'average', 'above average'),loc='upper left') ax.autoscale_view() plt.show() prekst.columns
_____no_output_____
MIT
project.ipynb
gcaracas/ds_project
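The three helpers above (getBelowAverages, getAverages, getAboveAverages) differ only in which position of the grouped counts they read, so they could be collapsed into one parameterised function. A sketch that assumes the same column layout as the cohort data above:

def get_grtr_level_counts(dataset, position):
    """Return [fall, winter, spring] counts of students at the GRTR level found at
    `position` in the grouped counts (0 = below average, 1 = average, 2 = above average)."""
    tbl = dataset.reset_index()
    counts = []
    for col in ['Fall_GRTR_Level', 'Winter_GRTR_Level', 'Spring_GRTR_Level']:
        data = tbl.groupby([col])[['Student_ID']].count().reset_index()
        counts.append(list(data['Student_ID'])[position])
    return counts

# Equivalent to the original helpers, e.g. getBelowAverages(df) == get_grtr_level_counts(df, 0)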
Trending improvement in Get Ready To Read score Question: Do students who start the pre-k program show improvement from fall to spring on their Get Ready To Read scores?
sns.pairplot(prekst, x_vars="Fall_GRTR_Score", y_vars="Spring_GRTR_Score",kind="reg")
_____no_output_____
MIT
project.ipynb
gcaracas/ds_project
Rate of improvement for pre-k and non-pre-k students together Question: In kindergarten, do we see any difference in improvement rate between kids with and without pre-k? Preliminary observation: Here we use the slope of our regression to measure that, and we do see that kids with pre-k have a higher rate of improvement (higher slope) on both LNF and LSF.
print("LNF Scores for pre-k Students") p1=sns.pairplot(prekst, x_vars=["Fall_LNF"],y_vars="Spring_LNF", kind='reg') axes = p1.axes axes[0,0].set_ylim(0,100) print("LNF Scores for pre-k Students") p2=sns.pairplot(prekst, x_vars=["Fall_LSF"],y_vars="Spring_LSF", kind='reg') axes = p2.axes axes[0,0].set_ylim(0,100) print("LNF Scores for no pre-k Students") p1=sns.pairplot(nonprek, x_vars=["Fall_LNF"],y_vars="Spring_LNF", kind='reg') axes = p1.axes axes[0,0].set_ylim(0,100) print("LSF Scores for no pre-k Students") p2=sns.pairplot(nonprek, x_vars=["Fall_LSF"],y_vars="Spring_LSF", kind='reg') axes = p2.axes axes[0,0].set_ylim(0,100)
LNF Scores for no pre-k Students LSF Scores for no pre-k Students
MIT
project.ipynb
gcaracas/ds_project
Now let's get the actual numbers for the rate of learning (the slope m from the regression).
# Import scikit-learn's linear regression and train/test split
# (train_test_split lives in sklearn.model_selection; sklearn.cross_validation is deprecated)
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

def getSlope(X, y):
    # Assign variables to capture train test split output
    #X_train, X_test, y_train, y_test = train_test_split(X, y)

    # Instantiate and fit on the full data, then return the slope coefficient
    linreg = LinearRegression()
    linreg.fit(X, y)
    return linreg.coef_[0]
_____no_output_____
MIT
project.ipynb
gcaracas/ds_project
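An alternative way to get the same slope (plus the correlation and p-value for free) is scipy.stats.linregress, assuming scipy is available; unlike the scikit-learn version it expects plain 1-D arrays:

from scipy import stats

def get_slope_scipy(x, y):
    # x and y are 1-D arrays of scores (fill or drop NaNs first, as done above)
    slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
    return slope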
Question: What is the quantitative learning rate in LNF for students with and without pre-k?
toEval = nonprek.copy() X = toEval['Fall_LNF'] y = toEval['Winter_LNF'] X[0]=0 # Fix an issue because the first sample is Nan thus ffill is ineffective for first sample X=X.fillna(method='ffill') y=y.fillna(method='ffill') X=X.values.reshape(-1,1) y=y.values.reshape(-1,1) print("LNF Learning rate for studenst non pre-K From Fall to Winter =",getSlope(X,y)) toEval = prekst.copy() X = toEval['Fall_LNF'] y = toEval['Winter_LNF'] X[0]=0 # Fix an issue because the first sample is Nan thus ffill is ineffective for first sample X=X.fillna(method='ffill') y=y.fillna(method='ffill') X=X.values.reshape(-1,1) y=y.values.reshape(-1,1) print("LNF Learning rate for studenst pre-K From Fall to Winter =",getSlope(X,y))
LNF Learning rate for studenst pre-K From Fall to Winter = [1.01872332]
MIT
project.ipynb
gcaracas/ds_project
Question: What is the quantitative learning rate in LSF for students with and without pre-k?
toEval = nonprek.copy() X = toEval['Fall_LSF'] y = toEval['Winter_LSF'] X[0]=0 # Fix an issue because the first sample is Nan thus ffill is ineffective for first sample X=X.fillna(method='ffill') y=y.fillna(method='ffill') X=X.values.reshape(-1,1) y=y.values.reshape(-1,1) print("LSF Learning rate for studenst non pre-K From Fall to Winter =",getSlope(X,y)) toEval = prekst.copy() X = toEval['Fall_LSF'] y = toEval['Winter_LSF'] X[0]=0 # Fix an issue because the first sample is Nan thus ffill is ineffective for first sample X=X.fillna(method='ffill') y=y.fillna(method='ffill') X=X.values.reshape(-1,1) y=y.values.reshape(-1,1) print("LSF Learning rate for studenst pre-K From Fall to Winter =",getSlope(X,y))
LSF Learning rate for studenst pre-K From Fall to Winter = [1.194067]
MIT
project.ipynb
gcaracas/ds_project
Question: Is there a difference in learning rate between high performers from both groups? Observation: The following plots have the same scale
pkhp = prekst[prekst['Fall_Level'] == 3]
npkhp = nonprek[nonprek['Fall_Level'] == 3]

print("LNF Scores for pre-k Students")
p1=sns.pairplot(pkhp, x_vars=["Fall_LNF"],y_vars="Spring_LNF", kind='reg')
axes = p1.axes
axes[0,0].set_ylim(0,100)

print("LNF Scores for no pre-k Students")
p2=sns.pairplot(npkhp, x_vars=["Fall_LNF"],y_vars="Spring_LNF", kind='reg')
axes = p2.axes
axes[0,0].set_ylim(0,100)
LSF Scores for no pre-k Students
MIT
project.ipynb
gcaracas/ds_project
2.5 Expressions and statements **An expression** is a combination of values, variables, and operators. A value all by itself is considered an expression, and so is a variable, so the following are all legal expressions (assuming that the variable x has been assigned a value): 17, x, x + 17. **A statement** is a unit of code that the Python interpreter can execute. We have seen two kinds of statement: print and assignment. Technically an expression is also a statement, but it is probably simpler to think of them as different things. The important difference is that an expression has a value; a statement does not. A value is an expression, so it gets printed out in interpreter mode:
5
_____no_output_____
MIT
think_python/.ipynb_checkpoints/ch_2_variables-checkpoint.ipynb
Lawrence-Krukrubo/Effective_Python
An assignment is a statement, so technically it does not get printed out by the Python shell in interpreter mode:
x = 5
_____no_output_____
MIT
think_python/.ipynb_checkpoints/ch_2_variables-checkpoint.ipynb
Lawrence-Krukrubo/Effective_Python
The value of an expression gets printed out by the Python shell in interpreter mode:
x + 1
_____no_output_____
MIT
think_python/.ipynb_checkpoints/ch_2_variables-checkpoint.ipynb
Lawrence-Krukrubo/Effective_Python
2.7 Order of operations When more than one operator appears in an expression, the order of evaluation depends on the rules of precedence. For mathematical operators, Python follows mathematical convention. The acronym **PEMDAS** is a useful way to remember the rules:

• **Parentheses** have the highest precedence and can be used to force an expression to evaluate in the order you want. Since expressions in parentheses are evaluated first, 2 * (3-1) is 4, and (1+1)**(5-2) is 8. You can also use parentheses to make an expression easier to read, as in (minute * 100) / 60, even if it doesn’t change the result.

• **Exponentiation** has the next highest precedence, so 2**1+1 is 3, not 4, and 3*1**3 is 3, not 27.

• **Multiplication and Division** have the same precedence, which is higher than **Addition and Subtraction**, which also have the same precedence. So 2*3-1 is 5, not 4, and 6+4/2 is 8, not 5.

• Operators with the same precedence are evaluated from left to right (except exponentiation). So in the expression degrees / 2 * pi, the division happens first and the result is multiplied by pi. To divide by 2π, you can use parentheses or write degrees / 2 / pi.

I don’t work very hard to remember rules of precedence for other operators. If I can’t tell by looking at the expression, I use parentheses to make it obvious.

2.9 Comments As programs get bigger and more complicated, they get more difficult to read. Formal languages are dense, and it is often difficult to look at a piece of code and figure out what it is doing, or why. Comments are most useful when they document non-obvious features of the code. It is reasonable to assume that the reader can figure out what the code does; it is much more useful to explain why. This comment is redundant with the code and useless: `v = 5` # assign 5 to v. This comment contains useful information that is not in the code: `v = 5` # velocity in meters/second. Good variable names can reduce the need for comments, but long names can make complex expressions hard to read, so there is a tradeoff.

2.11 Glossary
1. **value:** One of the basic units of data, like a number or string, that a program manipulates.
2. **type:** A category of values. The types we have seen so far are integers (type int), floating-point numbers (type float), and strings (type str).
3. **integer:** A type that represents whole numbers.
4. **floating-point:** A type that represents numbers with fractional parts.
5. **string:** A type that represents sequences of characters.
6. **variable:** A name that refers to a value.
7. **statement:** A section of code that represents a command or action. So far, the statements we have seen are assignments and print statements.
8. **assignment:** A statement that assigns a value to a variable.
9. **state diagram:** A graphical representation of a set of variables and the values they refer to.
10. **keyword:** A reserved word that is used by the compiler to parse a program; you cannot use keywords like if, def, and while as variable names.
11. **operator:** A special symbol that represents a simple computation like addition, multiplication, or string concatenation.
12. **operand:** One of the values on which an operator operates.
13. **floor division:** The operation that divides two numbers and chops off the fraction part.
14. **expression:** A combination of variables, operators, and values that represents a single result value.
15. **evaluate:** To simplify an expression by performing the operations in order to yield a single value.
16.
**rules of precedence:** The set of rules governing the order in which expressions involving multiple operators and operands are evaluated.
17. **concatenate:** To join two operands end-to-end.
18. **comment:** Information in a program that is meant for other programmers (or anyone reading the source code) and has no effect on the execution of the program.

2.12 Exercises **Exercise 2.2.** Assume that we execute the following assignment statements:
width = 17 height = 12.0 delimiter = '.'
_____no_output_____
MIT
think_python/.ipynb_checkpoints/ch_2_variables-checkpoint.ipynb
Lawrence-Krukrubo/Effective_Python
For each of the following expressions, write the value of the expression and the type (of the value of the expression).
1. width/2
2. width/2.0
3. height/3
4. 1 + 2 * 5
5. delimiter * 5
width / 2        # Type of value of expression is float

width / 2.0      # Type of value of expression is float

height / 3       # Type of value of expression is float

1 + 2 * 5        # Type of value of expression is int, value is 11

delimiter * 5    # Type of value of expression is string
_____no_output_____
MIT
think_python/.ipynb_checkpoints/ch_2_variables-checkpoint.ipynb
Lawrence-Krukrubo/Effective_Python
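A quick check of the precedence rules from section 2.7; keep in mind that Python writes exponentiation as `**`, not `^` (which is bitwise XOR in Python):

print(2 * (3 - 1))         # parentheses first -> 4
print((1 + 1) ** (5 - 2))  # -> 8
print(2 ** 1 + 1)          # exponentiation before addition -> 3
print(3 * 1 ** 3)          # exponentiation before multiplication -> 3
print(2 * 3 - 1)           # multiplication before subtraction -> 5
print(6 + 4 / 2)           # division before addition -> 8.0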
**Exercise 2.3.** Practice using the Python interpreter as a calculator:
1. The volume of a sphere with radius r is ${4\over3}\pi r^3$. What is the volume of a sphere with radius 5? Hint: 392.7 is wrong!
2. Suppose the cover price of a book is $$24.95, but bookstores get a 40% discount. Shipping costs $$3 for the first copy and 75 cents for each additional copy. What is the total wholesale cost for 60 copies?
3. If I leave my house at 6:52 am and run 1 mile at an easy pace (8:15 per mile), then 3 miles at tempo (7:12 per mile) and 1 mile at easy pace again, what time do I get home for breakfast?
import math
_____no_output_____
MIT
think_python/.ipynb_checkpoints/ch_2_variables-checkpoint.ipynb
Lawrence-Krukrubo/Effective_Python
**Quest 1.**
radius = 5 volume = (4/3*math.pi)*radius**3 volume
_____no_output_____
MIT
think_python/.ipynb_checkpoints/ch_2_variables-checkpoint.ipynb
Lawrence-Krukrubo/Effective_Python
**Quest 2.**
cover_price = 24.95 book_stores_discount = 0.4
_____no_output_____
MIT
think_python/.ipynb_checkpoints/ch_2_variables-checkpoint.ipynb
Lawrence-Krukrubo/Effective_Python
The total wholesale cost for each book will be the cover_price less the discount, plus shipping. The first copy has shipping of $$3 and each additional copy ships for $$0.75. So add it up for 60 copies.
net_cover_price = cover_price - (cover_price * book_stores_discount) net_cover_price First_shipping_cost = 3 subsequent_shipping_cost = 0.75 first_book_cost = net_cover_price + First_shipping_cost fifty_nine_books_cost = (net_cover_price + subsequent_shipping_cost) * 59 total_wholesale_cost = first_book_cost + fifty_nine_books_cost total_wholesale_cost
_____no_output_____
MIT
think_python/.ipynb_checkpoints/ch_2_variables-checkpoint.ipynb
Lawrence-Krukrubo/Effective_Python
**Quest 3.**
min_sec = 60 hours_sec = 3600 start_time_secs = 6 * hours_sec + 52 * min_sec start_time_secs easy_pace_per_mile = 8 * min_sec + 15 tempo_space_per_mile = 7 * min_sec + 12
_____no_output_____
MIT
think_python/.ipynb_checkpoints/ch_2_variables-checkpoint.ipynb
Lawrence-Krukrubo/Effective_Python
Now add 2 * easy-pace + 3 * tempo-pace to start-time
finish_time_secs = start_time_secs + (2 * easy_pace_per_mile) + (3 * tempo_space_per_mile) finish_time_secs
_____no_output_____
MIT
think_python/.ipynb_checkpoints/ch_2_variables-checkpoint.ipynb
Lawrence-Krukrubo/Effective_Python
Now convert finish-time-secs to hours and minutes
import time def convert(seconds): return time.strftime("%H:%M:%S", time.gmtime(seconds)) # Now call it on the start_time to check, start-time is 06.52 convert(start_time_secs) # Now call it on the end_time to get the answer convert(finish_time_secs)
_____no_output_____
MIT
think_python/.ipynb_checkpoints/ch_2_variables-checkpoint.ipynb
Lawrence-Krukrubo/Effective_Python
Starbucks Capstone Challenge IntroductionThis data set contains simulated data that mimics customer behavior on the Starbucks rewards mobile app. Once every few days, Starbucks sends out an offer to users of the mobile app. An offer can be merely an advertisement for a drink or an actual offer such as a discount or BOGO (buy one get one free). Some users might not receive any offer during certain weeks. Not all users receive the same offer, and that is the challenge to solve with this data set.Your task is to combine transaction, demographic and offer data to determine which demographic groups respond best to which offer type. This data set is a simplified version of the real Starbucks app because the underlying simulator only has one product whereas Starbucks actually sells dozens of products.Every offer has a validity period before the offer expires. As an example, a BOGO offer might be valid for only 5 days. You'll see in the data set that informational offers have a validity period even though these ads are merely providing information about a product; for example, if an informational offer has 7 days of validity, you can assume the customer is feeling the influence of the offer for 7 days after receiving the advertisement.You'll be given transactional data showing user purchases made on the app including the timestamp of purchase and the amount of money spent on a purchase. This transactional data also has a record for each offer that a user receives as well as a record for when a user actually views the offer. There are also records for when a user completes an offer. Keep in mind as well that someone using the app might make a purchase through the app without having received an offer or seen an offer. ExampleTo give an example, a user could receive a discount offer buy 10 dollars get 2 off on Monday. The offer is valid for 10 days from receipt. If the customer accumulates at least 10 dollars in purchases during the validity period, the customer completes the offer.However, there are a few things to watch out for in this data set. Customers do not opt into the offers that they receive; in other words, a user can receive an offer, never actually view the offer, and still complete the offer. For example, a user might receive the "buy 10 dollars get 2 dollars off offer", but the user never opens the offer during the 10 day validity period. The customer spends 15 dollars during those ten days. There will be an offer completion record in the data set; however, the customer was not influenced by the offer because the customer never viewed the offer. CleaningThis makes data cleaning especially important and tricky.You'll also want to take into account that some demographic groups will make purchases even if they don't receive an offer. From a business perspective, if a customer is going to make a 10 dollar purchase without an offer anyway, you wouldn't want to send a buy 10 dollars get 2 dollars off offer. You'll want to try to assess what a certain demographic group will buy when not receiving any offers. Final AdviceBecause this is a capstone project, you are free to analyze the data any way you see fit. For example, you could build a machine learning model that predicts how much someone will spend based on demographics and offer type. Or you could build a model that predicts whether or not someone will respond to an offer. Or, you don't need to build a machine learning model at all. 
You could develop a set of heuristics that determine what offer you should send to each customer (i.e., 75 percent of women customers who were 35 years old responded to offer A vs 40 percent from the same demographic to offer B, so send offer A). Data Sets The data is contained in three files:
* portfolio.json - containing offer ids and meta data about each offer (duration, type, etc.)
* profile.json - demographic data for each customer
* transcript.json - records for transactions, offers received, offers viewed, and offers completed

Here is the schema and explanation of each variable in the files:

**portfolio.json**
* id (string) - offer id
* offer_type (string) - type of offer ie BOGO, discount, informational
* difficulty (int) - minimum required spend to complete an offer
* reward (int) - reward given for completing an offer
* duration (int) - time for offer to be open, in days
* channels (list of strings)

**profile.json**
* age (int) - age of the customer
* became_member_on (int) - date when customer created an app account
* gender (str) - gender of the customer (note some entries contain 'O' for other rather than M or F)
* id (str) - customer id
* income (float) - customer's income

**transcript.json**
* event (str) - record description (ie transaction, offer received, offer viewed, etc.)
* person (str) - customer id
* time (int) - time in hours since start of test. The data begins at time t=0
* value - (dict of strings) - either an offer id or transaction amount depending on the record
# Import required libraries from datetime import datetime import json import math import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns from sklearn.ensemble import RandomForestClassifier from sklearn.linear_model import LogisticRegression from sklearn.metrics import fbeta_score, accuracy_score from sklearn.model_selection import cross_val_score from sklearn.model_selection import train_test_split from sklearn.neighbors import KNeighborsClassifier from sklearn.preprocessing import MinMaxScaler from sklearn.tree import DecisionTreeClassifier import warnings warnings.filterwarnings('ignore') % matplotlib inline
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone
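A typical first step after the imports would be to load the three files into pandas DataFrames. A hedged sketch: the `data/` folder and the line-delimited JSON layout are assumptions about how the files are provided.

# read in the json files (paths and JSON layout are assumptions)
portfolio = pd.read_json('data/portfolio.json', orient='records', lines=True)
profile = pd.read_json('data/profile.json', orient='records', lines=True)
transcript = pd.read_json('data/transcript.json', orient='records', lines=True)

portfolio.head()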