Dataset columns: markdown (string, 0–1.02M chars), code (string, 0–832k chars), output (string, 0–1.02M chars), license (string, 3–36 chars), path (string, 6–265 chars), repo_name (string, 6–127 chars)
Pre-processing: Feature selection/extraction. Let's look at the day of the week people get the loan:
df['dayofweek'] = df['effective_date'].dt.dayofweek
bins = np.linspace(df.dayofweek.min(), df.dayofweek.max(), 10)
g = sns.FacetGrid(df, col="Gender", hue="loan_status", palette="Set1", col_wrap=2)
g.map(plt.hist, 'dayofweek', bins=bins, ec="k")
g.axes[-1].legend()
plt.show()
_____no_output_____
MIT
Course 8: Machine Learning with Python/The best classifier.ipynb
jonathanyeh0723/Coursera_IBM-Data-Science
We see that people who get the loan at the end of the week don't pay it off, so let's use feature binarization with a threshold at day 4: days of the week greater than 3 are flagged as weekend.
df['weekend'] = df['dayofweek'].apply(lambda x: 1 if (x>3) else 0) df.head()
_____no_output_____
MIT
Course 8: Machine Learning with Python/The best classifier.ipynb
jonathanyeh0723/Coursera_IBM-Data-Science
Convert categorical features to numerical values. Let's look at gender:
df.groupby(['Gender'])['loan_status'].value_counts(normalize=True)
_____no_output_____
MIT
Course 8: Machine Learning with Python/The best classifier.ipynb
jonathanyeh0723/Coursera_IBM-Data-Science
86% of females pay their loans, while only 73% of males pay theirs. Let's convert male to 0 and female to 1:
df['Gender'].replace(to_replace=['male','female'], value=[0,1],inplace=True) df.head()
_____no_output_____
MIT
Course 8: Machine Learning with Python/The best classifier.ipynb
jonathanyeh0723/Coursera_IBM-Data-Science
One Hot Encoding How about education?
df.groupby(['education'])['loan_status'].value_counts(normalize=True)
_____no_output_____
MIT
Course 8: Machine Learning with Python/The best classifier.ipynb
jonathanyeh0723/Coursera_IBM-Data-Science
Features before One Hot Encoding
df[['Principal','terms','age','Gender','education']].head()
_____no_output_____
MIT
Course 8: Machine Learning with Python/The best classifier.ipynb
jonathanyeh0723/Coursera_IBM-Data-Science
Use the one hot encoding technique to convert categorical variables to binary variables and append them to the feature DataFrame.
Feature = df[['Principal','terms','age','Gender','weekend']]
Feature = pd.concat([Feature, pd.get_dummies(df['education'])], axis=1)
Feature.drop(['Master or Above'], axis=1, inplace=True)
Feature.head()
_____no_output_____
MIT
Course 8: Machine Learning with Python/The best classifier.ipynb
jonathanyeh0723/Coursera_IBM-Data-Science
Feature selection. Let's define the feature set, X:
X = Feature X[0:5]
_____no_output_____
MIT
Course 8: Machine Learning with Python/The best classifier.ipynb
jonathanyeh0723/Coursera_IBM-Data-Science
What are our labels?
y = df['loan_status'].values y[0:5]
_____no_output_____
MIT
Course 8: Machine Learning with Python/The best classifier.ipynb
jonathanyeh0723/Coursera_IBM-Data-Science
Normalize Data. Data standardization gives the data zero mean and unit variance (technically, the scaler should be fit after the train/test split, on the training data only).
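As a brief aside on that parenthetical, a minimal sketch of the leakage-free variant fits the scaler on the training portion only and reuses it for the test portion (the variable names `X_tr`/`X_te` here are just for illustration; the cell below follows the course material and scales all of `X` before splitting):

```python
# Illustrative sketch only: scaling without test-set leakage.
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X_tr, X_te, y_tr, y_te = train_test_split(Feature, y, test_size=0.2, random_state=4)

scaler = StandardScaler().fit(X_tr)   # statistics learned from the training data only
X_tr_scaled = scaler.transform(X_tr)
X_te_scaled = scaler.transform(X_te)  # test data transformed with the same statistics
```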
X= preprocessing.StandardScaler().fit(X).transform(X) X[0:5]
_____no_output_____
MIT
Course 8: Machine Learning with Python/The best classifier.ipynb
jonathanyeh0723/Coursera_IBM-Data-Science
Classification. Now it is your turn: use the training set to build an accurate model, then use the test set to report the accuracy of the model. You should use the following algorithms:

- K Nearest Neighbor (KNN)
- Decision Tree
- Support Vector Machine
- Logistic Regression

__Notice:__
- You can go back and change the pre-processing, feature selection, feature extraction, and so on, to make a better model.
- You should use either scikit-learn, SciPy or NumPy libraries for developing the classification algorithms.
- You should include the code of the algorithm in the following cells.

K Nearest Neighbor (KNN). Notice: you should find the best k to build the model with the best accuracy. **Warning:** you should not use the __loan_test.csv__ for finding the best k; however, you can split your train_loan.csv into train and test to find the best __k__.
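One hedged alternative for choosing k, not used in the cells below (which rely on a single train/test split), is k-fold cross-validation on the training data:

```python
# Sketch: pick k by 5-fold cross-validation on the training data only.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

cv_scores = []
for k in range(1, 10):
    knn = KNeighborsClassifier(n_neighbors=k)
    cv_scores.append(cross_val_score(knn, X, y, cv=5).mean())  # mean CV accuracy for this k

best_k = int(np.argmax(cv_scores)) + 1
print("Best k by cross-validation:", best_k)
```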
# Importing library for KNN
from sklearn.neighbors import KNeighborsClassifier
# Train/test split
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=4)
print('Train set:', X_train.shape, y_train.shape)
print('Test set:', X_test.shape, y_test.shape)

# Set an initial k for prediction
k = 4
# Train model and predict
neigh = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
neigh
yhat_KNN = neigh.predict(X_test)
yhat_KNN

from sklearn import metrics

# Check which k gives the best test-set accuracy
Ks = 10
mean_acc = np.zeros((Ks-1))
std_acc = np.zeros((Ks-1))
for n in range(1, Ks):
    # Train model and predict
    neigh = KNeighborsClassifier(n_neighbors=n).fit(X_train, y_train)
    yhat_KNN = neigh.predict(X_test)
    mean_acc[n-1] = metrics.accuracy_score(y_test, yhat_KNN)
    std_acc[n-1] = np.std(yhat_KNN == y_test) / np.sqrt(yhat_KNN.shape[0])
mean_acc

# Print the best accuracy and its k value
print("The best accuracy was with", mean_acc.max(), "with k=", mean_acc.argmax()+1)
# Note: neigh/yhat_KNN below refer to the last model fitted in the loop (k = Ks-1)
print("Train set Accuracy: ", metrics.accuracy_score(y_train, neigh.predict(X_train)))
print("Test set Accuracy: ", metrics.accuracy_score(y_test, yhat_KNN))
The best accuracy was with 0.7857142857142857 with k= 7 Train set Accuracy: 0.7898550724637681 Test set Accuracy: 0.7571428571428571
MIT
Course 8: Machine Learning with Python/The best classifier.ipynb
jonathanyeh0723/Coursera_IBM-Data-Science
Decision Tree
# Import DecisionTreeClassifier
from sklearn.tree import DecisionTreeClassifier

# Modeling
drugTree = DecisionTreeClassifier(criterion="entropy", max_depth=5)
drugTree  # shows the default parameters
drugTree.fit(X_train, y_train)

# Prediction
predTree = drugTree.predict(X_test)
print(predTree[0:10])
print(y[0:10])  # note: y here is the full label array, not y_test
['COLLECTION' 'COLLECTION' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'COLLECTION' 'COLLECTION' 'PAIDOFF' 'PAIDOFF'] ['PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF']
MIT
Course 8: Machine Learning with Python/The best classifier.ipynb
jonathanyeh0723/Coursera_IBM-Data-Science
Support Vector Machine
# Importing library for SVM
from sklearn import svm

# Modeling
clf = svm.SVC(kernel='rbf')
clf.fit(X_train, y_train)

# Prediction
yhat_SVM = clf.predict(X_test)
yhat_SVM[0:5]
_____no_output_____
MIT
Course 8: Machine Learning with Python/The best classifier.ipynb
jonathanyeh0723/Coursera_IBM-Data-Science
Logistic Regression
# Importing libraries for Logistic Regression
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Modeling
LR = LogisticRegression(C=0.01, solver='liblinear').fit(X_train, y_train)
LR

# Prediction
yhat_L = LR.predict(X_test)
yhat_prob = LR.predict_proba(X_test)
yhat_prob
_____no_output_____
MIT
Course 8: Machine Learning with Python/The best classifier.ipynb
jonathanyeh0723/Coursera_IBM-Data-Science
Model Evaluation using Test set
from sklearn.metrics import jaccard_similarity_score  # removed in newer scikit-learn versions; jaccard_score is the replacement
from sklearn.metrics import f1_score
from sklearn.metrics import log_loss
_____no_output_____
MIT
Course 8: Machine Learning with Python/The best classifier.ipynb
jonathanyeh0723/Coursera_IBM-Data-Science
First, download and load the test set:
#!wget -O loan_test.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/loan_test.csv
url = 'https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/loan_test.csv'
_____no_output_____
MIT
Course 8: Machine Learning with Python/The best classifier.ipynb
jonathanyeh0723/Coursera_IBM-Data-Science
Load Test set for evaluation
test_df = pd.read_csv(url)
test_df.head()

# Test set labels
y_for_test = test_df[['loan_status']]

# Feature engineering on the test set (same steps as the training set)
test_df['due_date'] = pd.to_datetime(test_df['due_date'])
test_df['effective_date'] = pd.to_datetime(test_df['effective_date'])
test_df['dayofweek'] = test_df['effective_date'].dt.dayofweek
test_df['weekend'] = test_df['dayofweek'].apply(lambda x: 1 if (x>3) else 0)
test_df['Gender'].replace(to_replace=['male','female'], value=[0,1], inplace=True)
test_df.shape

Feature_T = test_df[['Principal','terms','age','Gender','weekend']]
#pd_T = pd.get_dummies(test_df['education'])
Feature_T = pd.concat([Feature_T, pd.get_dummies(test_df['education'])], axis=1)
Feature_T.drop(['Master or Above'], axis=1, inplace=True)
Feature_T.head()
Feature_T = preprocessing.StandardScaler().fit(Feature_T).transform(Feature_T)

# KNN prediction using k=8 (note: the search above suggested k=7)
neigh = KNeighborsClassifier(n_neighbors=8).fit(X_train, y_train)
yhat_KNN = neigh.predict(Feature_T)

# Decision tree prediction on the test set
T_predTree = drugTree.predict(Feature_T)

# SVM prediction on the test set
yhat_SVM = clf.predict(Feature_T)

# Logistic regression prediction, plus probabilities for log loss
yhat_L = LR.predict(Feature_T)
yhat_prob = LR.predict_proba(Feature_T)

# Evaluation: Jaccard index
J_KNN = jaccard_similarity_score(y_for_test, yhat_KNN)
J_Tree = jaccard_similarity_score(y_for_test, T_predTree)
J_SVM = jaccard_similarity_score(y_for_test, yhat_SVM)
J_L = jaccard_similarity_score(y_for_test, yhat_L)

# Evaluation: F1-score
f1_KNN = f1_score(y_for_test, yhat_KNN, average='weighted')
f1_Tree = f1_score(y_for_test, T_predTree, average='weighted')
f1_SVM = f1_score(y_for_test, yhat_SVM, average='weighted')
f1_L = f1_score(y_for_test, yhat_L, average='weighted')

# Evaluation: log loss (only defined for probabilistic output)
loglossL = log_loss(y_for_test, yhat_prob)

row_index = ['KNN', 'Decision Tree', 'SVM', 'LogisticRegression']
column_index = ['Jaccard', 'F1-score', 'LogLoss']
values = [[J_KNN, f1_KNN, 'NA'], [J_Tree, f1_Tree, 'NA'], [J_SVM, f1_SVM, 'NA'], [J_L, f1_L, loglossL]]
df = pd.DataFrame(values, index=row_index, columns=column_index)
df
_____no_output_____
MIT
Course 8: Machine Learning with Python/The best classifier.ipynb
jonathanyeh0723/Coursera_IBM-Data-Science
Clustering

In contrast to *supervised* machine learning, *unsupervised* learning is used when there is no "ground truth" from which to train and validate label predictions. The most common form of unsupervised learning is *clustering*, which is similar conceptually to *classification*, except that the training data does not include known values for the class label to be predicted. Clustering works by separating the training cases based on similarities that can be determined from their feature values. Think of it this way: the numeric features of a given entity can be thought of as vector coordinates that define the entity's position in n-dimensional space. What a clustering model seeks to do is to identify groups, or *clusters*, of entities that are close to one another while being separated from other clusters.

For example, let's take a look at a dataset that contains measurements of different species of wheat seed.

> **Citation**: The seeds dataset used in this exercise was originally published by the Institute of Agrophysics of the Polish Academy of Sciences in Lublin, and can be downloaded from the UCI dataset repository (Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science).
import pandas as pd

# load the training dataset
data = pd.read_csv('data/seeds.csv')

# Display a random sample of 10 observations (just the features)
features = data[data.columns[0:6]]
features.sample(10)
_____no_output_____
MIT
04 - Clustering.ipynb
jschevers/ml-basics
As you can see, the dataset contains six data points (or *features*) for each instance (*observation*) of a seed. So you could interpret these as coordinates that describe each instance's location in six-dimensional space.

Now, of course, six-dimensional space is difficult to visualise in a three-dimensional world, or on a two-dimensional plot; so we'll take advantage of a mathematical technique called *Principal Component Analysis* (PCA) to analyze the relationships between the features and summarize each observation as coordinates for two principal components - in other words, we'll translate the six-dimensional feature values into two-dimensional coordinates.
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA

# Normalize the numeric features so they're on the same scale
scaled_features = MinMaxScaler().fit_transform(features[data.columns[0:6]])

# Get two principal components
pca = PCA(n_components=2).fit(scaled_features)
features_2d = pca.transform(scaled_features)
features_2d[0:10]

# js: complete decomposition
pca = PCA().fit(scaled_features)

# js: explained variance per component
print(pca.explained_variance_)
print(pca.explained_variance_ratio_)

# js: cumulative explained variance ratio
pca.explained_variance_ratio_.cumsum()

# js: how many components do I need if I want at least 95% of the variance to be explained?
pca = PCA(.95)
pca.fit(scaled_features)
pca.n_components_
_____no_output_____
MIT
04 - Clustering.ipynb
jschevers/ml-basics
Now that we have the data points translated to two dimensions, we can visualize them in a plot:
import matplotlib.pyplot as plt
%matplotlib inline

plt.scatter(features_2d[:,0], features_2d[:,1])
plt.xlabel('Dimension 1')
plt.ylabel('Dimension 2')
plt.title('Data')
plt.show()
_____no_output_____
MIT
04 - Clustering.ipynb
jschevers/ml-basics
Hopefully you can see at least two, arguably three, reasonably distinct groups of data points; but here lies one of the fundamental problems with clustering - without known class labels, how do you know how many clusters to separate your data into?One way we can try to find out is to use a data sample to create a series of clustering models with an incrementing number of clusters, and measure how tightly the data points are grouped within each cluster. A metric often used to measure this tightness is the *within cluster sum of squares* (WCSS), with lower values meaning that the data points are closer. You can then plot the WCSS for each model.
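For reference, the quantity collected below (scikit-learn exposes it as the fitted model's `inertia_` attribute) is, with $\mu_k$ denoting the centroid of cluster $C_k$:

$$\mathrm{WCSS} = \sum_{k=1}^{K} \; \sum_{x_i \in C_k} \lVert x_i - \mu_k \rVert^2$$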
# importing the libraries
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
%matplotlib inline

# Create 10 models with 1 to 10 clusters
wcss = []
for i in range(1, 11):
    kmeans = KMeans(n_clusters=i)
    # Fit the data points
    kmeans.fit(features.values)
    # kmeans.fit(scaled_features)  # why not on these?
    # Get the WCSS (inertia) value
    wcss.append(kmeans.inertia_)

# Plot the WCSS values onto a line graph
plt.plot(range(1, 11), wcss)
plt.title('WCSS by Clusters')
plt.xlabel('Number of clusters')
plt.ylabel('WCSS')
plt.show()
_____no_output_____
MIT
04 - Clustering.ipynb
jschevers/ml-basics
The plot shows a large reduction in WCSS (so greater *tightness*) as the number of clusters increases from one to two, and a further noticeable reduction from two to three clusters. After that, the reduction is less pronounced, resulting in an "elbow" in the chart at around three clusters. This is a good indication that there are two to three reasonably well separated clusters of data points.

K-Means Clustering. The algorithm we used to create our test clusters is *K-Means*. This is a commonly used clustering algorithm that separates a dataset into *K* clusters of equal variance. The number of clusters, *K*, is user defined. The basic algorithm has the following steps:

1. A set of K centroids is randomly chosen.
2. Clusters are formed by assigning the data points to their closest centroid.
3. The mean of each cluster is computed and the centroid is moved to the mean.
4. Steps 2 and 3 are repeated until a stopping criterion is met. Typically, the algorithm terminates when each new iteration results in negligible movement of centroids and the clusters become static.
5. When the clusters stop changing, the algorithm has *converged*, defining the locations of the clusters. Note that the random starting point for the centroids means that re-running the algorithm could result in slightly different clusters, so training usually involves multiple runs, reinitializing the centroids each time, and the model with the best WCSS is selected.

Let's try using K-Means on our seeds data with a K value of 3.
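Before handing this to scikit-learn, here is a minimal NumPy sketch of steps 2 and 3 above (assignment and centroid update) on the 2-D PCA coordinates; it is illustrative only and omits the re-initialization, convergence checks, and empty-cluster handling that scikit-learn provides. The scikit-learn version follows.

```python
import numpy as np

rng = np.random.default_rng(0)
points = features_2d                  # 2-D coordinates computed earlier
K = 3

# Step 1: choose K random data points as the initial centroids
centroids = points[rng.choice(len(points), K, replace=False)]

for _ in range(10):                   # a few rounds of steps 2-4
    # Step 2: assign each point to its closest centroid
    distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    labels = distances.argmin(axis=1)
    # Step 3: move each centroid to the mean of its assigned points
    # (note: empty clusters are not handled in this sketch)
    centroids = np.array([points[labels == k].mean(axis=0) for k in range(K)])

print(centroids)
```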
from sklearn.cluster import KMeans

# Create a model based on 3 centroids
model = KMeans(n_clusters=3, init='k-means++', n_init=100, max_iter=1000)

# Fit to the data and predict the cluster assignments for each data point
km_clusters = model.fit_predict(features.values)

# View the cluster assignments
km_clusters
_____no_output_____
MIT
04 - Clustering.ipynb
jschevers/ml-basics
Let's see those cluster assignments with the two-dimensional data points.
def plot_clusters(samples, clusters):
    col_dic = {0:'blue', 1:'green', 2:'orange'}
    mrk_dic = {0:'*', 1:'x', 2:'+'}
    colors = [col_dic[x] for x in clusters]
    markers = [mrk_dic[x] for x in clusters]
    for sample in range(len(clusters)):
        plt.scatter(samples[sample][0], samples[sample][1],
                    color=colors[sample], marker=markers[sample], s=100)
    plt.xlabel('Dimension 1')
    plt.ylabel('Dimension 2')
    plt.title('Assignments')
    plt.show()

plot_clusters(features_2d, km_clusters)
_____no_output_____
MIT
04 - Clustering.ipynb
jschevers/ml-basics
Hopefully, the data has been separated into three distinct clusters.

So what's the practical use of clustering? In some cases, you may have data that you need to group into distinct clusters without knowing how many clusters there are or what they indicate. For example, a marketing organization might want to separate customers into distinct segments, and then investigate how those segments exhibit different purchasing behaviors.

Sometimes, clustering is used as an initial step towards creating a classification model. You start by identifying distinct groups of data points, and then assign class labels to those clusters. You can then use this labelled data to train a classification model.

In the case of the seeds data, the different species of seed are already known and encoded as 0 (*Kama*), 1 (*Rosa*), or 2 (*Canadian*), so we can use these identifiers to compare the species classifications to the clusters identified by our unsupervised algorithm.
seed_species = data[data.columns[7]] plot_clusters(features_2d, seed_species.values)
_____no_output_____
MIT
04 - Clustering.ipynb
jschevers/ml-basics
There may be some differences between the cluster assignments and class labels, but the K-Means model should have done a reasonable job of clustering the observations so that seeds of the same species are generally in the same cluster.

Hierarchical Clustering. Hierarchical clustering methods make fewer distributional assumptions when compared to K-means methods. However, K-means methods are generally more scalable, sometimes very much so.

Hierarchical clustering creates clusters by either a *divisive* method or an *agglomerative* method. The divisive method is a "top down" approach starting with the entire dataset and then finding partitions in a stepwise manner. Agglomerative clustering is a "bottom up" approach. In this lab you will work with agglomerative clustering, which roughly works as follows:

1. The linkage distances between each of the data points are computed.
2. Points are clustered pairwise with their nearest neighbor.
3. Linkage distances between the clusters are computed.
4. Clusters are combined pairwise into larger clusters.
5. Steps 3 and 4 are repeated until all data points are in a single cluster.

The linkage function can be computed in a number of ways:

- Ward linkage measures the increase in variance for the clusters being linked.
- Average linkage uses the mean pairwise distance between the members of the two clusters.
- Complete or maximal linkage uses the maximum distance between the members of the two clusters.

Several different distance metrics are used to compute linkage functions:

- Euclidean or l2 distance is the most widely used, and is the only choice for the Ward linkage method.
- Manhattan or l1 distance is robust to outliers and has other interesting properties.
- Cosine similarity is the dot product between the location vectors divided by the magnitudes of the vectors. Notice that this metric is a measure of similarity, whereas the other two metrics are measures of difference. Similarity can be quite useful when working with data such as images or text documents.

Agglomerative Clustering. Let's see an example of clustering the seeds data using an agglomerative clustering algorithm.
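As an aside (not part of the original lab), SciPy can compute the linkage matrix described above directly and plot the merge hierarchy as a dendrogram; this sketch assumes the `scaled_features` array from the PCA cell is still in scope and uses Ward linkage with Euclidean distance, matching the scikit-learn defaults used below.

```python
from scipy.cluster.hierarchy import linkage, dendrogram
import matplotlib.pyplot as plt

Z = linkage(scaled_features, method='ward')   # pairwise merges, bottom up

plt.figure(figsize=(10, 4))
dendrogram(Z, no_labels=True)
plt.title('Agglomerative merge hierarchy (Ward linkage)')
plt.ylabel('Linkage distance')
plt.show()
```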
from sklearn.cluster import AgglomerativeClustering agg_model = AgglomerativeClustering(n_clusters=3) agg_clusters = agg_model.fit_predict(features.values) agg_clusters
_____no_output_____
MIT
04 - Clustering.ipynb
jschevers/ml-basics
So what do the agglomerative cluster assignments look like?
import matplotlib.pyplot as plt
%matplotlib inline

def plot_clusters(samples, clusters):
    col_dic = {0:'blue', 1:'green', 2:'orange'}
    mrk_dic = {0:'*', 1:'x', 2:'+'}
    colors = [col_dic[x] for x in clusters]
    markers = [mrk_dic[x] for x in clusters]
    for sample in range(len(clusters)):
        plt.scatter(samples[sample][0], samples[sample][1],
                    color=colors[sample], marker=markers[sample], s=100)
    plt.xlabel('Dimension 1')
    plt.ylabel('Dimension 2')
    plt.title('Assignments')
    plt.show()

plot_clusters(features_2d, agg_clusters)
_____no_output_____
MIT
04 - Clustering.ipynb
jschevers/ml-basics
These are a couple of functions I've always kept handy because I end up using them more often than I even expect. They both work off of an iterable, so I'll define one here to use as an example:
my_iter = [x for x in range(20)]
_____no_output_____
MIT
Notebooks/Batch Process and Sliding Windows.ipynb
agbs2k8/python_FAQ
Batch Processing. This is a function that will yield small subsets of an iterable, allowing you to work with smaller parts at once. I've used this when I have too much data to process all of it at once, so I could process it in chunks.
def batch(iterable, n: int = 1):
    """
    Return a dataset in batches (no overlap)
    :param iterable: the item to be returned in segments
    :param n: length of the segments
    :return: generator of portions of the original data
    """
    for ndx in range(0, len(iterable), n):
        yield iterable[ndx:min(ndx + n, len(iterable))]


for this_batch in batch(my_iter, 3):
    print(this_batch)
[0, 1, 2] [3, 4, 5] [6, 7, 8] [9, 10, 11] [12, 13, 14] [15, 16, 17] [18, 19]
MIT
Notebooks/Batch Process and Sliding Windows.ipynb
agbs2k8/python_FAQ
You can see that it just split my iterable up into smaller parts. It still gave me back all of it, and did not repeat any portions.

Sliding Window. Different from the batch, this gives me overlapping sections of the iterable. You define a window size, and it will give you back each window of that size, in order.
from itertools import islice

def window(sequence, n: int = 5):
    """
    Returns a sliding window of width n over the iterable sequence
    :param sequence: iterable to yield segments from
    :param n: number of items in the window
    :return: generator of windows
    """
    _it = iter(sequence)
    result = tuple(islice(_it, n))
    if len(result) == n:
        yield result
    for element in _it:
        result = result[1:] + (element,)
        yield result


for this_window in window(my_iter, 4):
    print(this_window)
(0, 1, 2, 3) (1, 2, 3, 4) (2, 3, 4, 5) (3, 4, 5, 6) (4, 5, 6, 7) (5, 6, 7, 8) (6, 7, 8, 9) (7, 8, 9, 10) (8, 9, 10, 11) (9, 10, 11, 12) (10, 11, 12, 13) (11, 12, 13, 14) (12, 13, 14, 15) (13, 14, 15, 16) (14, 15, 16, 17) (15, 16, 17, 18) (16, 17, 18, 19)
MIT
Notebooks/Batch Process and Sliding Windows.ipynb
agbs2k8/python_FAQ
QUBO and basic calculation. Here we check how to create a QUBO matrix and solve it. First we import the Wildqat SDK and create an instance.
import wildqat as wq a = wq.opt()
_____no_output_____
Apache-2.0
examples_en/tutorial001_qubo_en.ipynb
mori388/annealing
Next we define a QUBO matrix, like the one below.
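For context, a QUBO instance asks for binary variables $x_i \in \{0, 1\}$ that minimize the quadratic form defined by this (upper-triangular) matrix:

$$E(x) = \sum_{i \le j} Q_{ij}\, x_i x_j = x^{\mathsf T} Q\, x, \qquad x_i \in \{0, 1\}$$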
a.qubo = [[4,-4,-4],[0,4,-4],[0,0,4]]
_____no_output_____
Apache-2.0
examples_en/tutorial001_qubo_en.ipynb
mori388/annealing
Then we solve it with an algorithm. This time we use simulated annealing (SA) to solve the matrix.
a.sa()
1.412949800491333
Apache-2.0
examples_en/tutorial001_qubo_en.ipynb
mori388/annealing
And we have the result. The value above shows the time taken to obtain it, and the array is the resulting list of bits. The QUBO matrix is internally converted to an Ising model, the so-called Jij matrix. If we want to check it, we can easily obtain it by printing a.J.
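For reference, one common way to derive the Ising couplings by hand is the substitution $x_i = (1 + s_i)/2$ with $s_i \in \{-1, +1\}$; sign and scaling conventions differ between libraries, so treat this as an illustrative sketch rather than Wildqat's internal code. For this particular Q the off-diagonal couplings come out as -1, in line with the a.J printed below.

```python
import numpy as np

# Sketch: QUBO -> Ising via x_i = (1 + s_i) / 2 (conventions vary by library).
Q = np.array([[4, -4, -4],
              [0,  4, -4],
              [0,  0,  4]], dtype=float)

n = Q.shape[0]
J = np.zeros((n, n))   # couplings between spins i < j
h = np.zeros(n)        # local fields
for i in range(n):
    h[i] += Q[i, i] / 2.0
    for j in range(i + 1, n):
        J[i, j] = Q[i, j] / 4.0
        h[i] += Q[i, j] / 4.0
        h[j] += Q[i, j] / 4.0

print(J)
print(h)
```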
print(a.J)
[[0, -1, -1], [0, 0, -1], [0, 0, 0]]
Apache-2.0
examples_en/tutorial001_qubo_en.ipynb
mori388/annealing
Component Graphs

EvalML component graphs represent and describe the flow of data in a collection of related components. A component graph is comprised of nodes representing components, and edges between pairs of nodes representing where the inputs and outputs of each component should go. It is the backbone of the features offered by the EvalML [pipeline](pipelines.ipynb), but is also a powerful data structure on its own. EvalML currently supports component graphs as linear and [directed acyclic graphs (DAG)](https://en.wikipedia.org/wiki/Directed_acyclic_graph).

Defining a Component Graph

Component graphs can be defined by specifying the dictionary of components and edges that describe the graph. In this dictionary, each key is a reference name for a component. Each corresponding value is a list, where the first element is the component itself, and the remaining elements are the input edges that should be connected to that component. The component as listed in the value can either be the component object itself or its string name. This structure is very similar to that of [Dask computation graphs](https://docs.dask.org/en/latest/spec.html).

For example, in the code example below, we have a simple component graph made up of two components: an Imputer and a Random Forest Classifier. The names used to reference these two components are given by the keys, "My Imputer" and "RF Classifier" respectively. Each value in the dictionary is a list where the first element is the component corresponding to the component name, and the remaining elements are the inputs, e.g. "My Imputer" represents an Imputer component which has inputs "X" (the original features matrix) and "y" (the original target).

Feature edges are specified as `"X"` or `"{component_name}.x"`. For example, `{"My Component": [MyComponent, "Imputer.x", ...]}` indicates that we should use the feature output of the `Imputer` as part of the feature input for MyComponent. Similarly, target edges are specified as `"y"` or `"{component_name}.y"`. `{"My Component": [MyComponent, "Target Imputer.y", ...]}` indicates that we should use the target output of the `Target Imputer` as a target input for MyComponent.

Each component can have a number of feature inputs, but can only have one target input. All input edges must be explicitly defined.

Using a real example, we define a simple component graph consisting of three nodes: an Imputer ("My Imputer"), a One-Hot Encoder ("OHE"), and a Random Forest Classifier ("RF Classifier").

- "My Imputer" takes the original X as a features input, and the original y as the target input.
- "OHE" also takes the original X as a features input, and the original y as the target input.
- "RF Classifier" takes the concatenated feature outputs from "My Imputer" and "OHE" as a features input, and the original y as the target input.
from evalml.pipelines import ComponentGraph

component_dict = {
    'My Imputer': ['Imputer', 'X', 'y'],
    'OHE': ['One Hot Encoder', 'X', 'y'],
    'RF Classifier': ['Random Forest Classifier', 'My Imputer.x', 'OHE.x', 'y']  # takes in multiple feature inputs
}
cg_simple = ComponentGraph(component_dict)
_____no_output_____
BSD-3-Clause
docs/source/user_guide/component_graphs.ipynb
peterataylor/evalml
All component graphs must end with one final or terminus node. This can be either a transformer or an estimator. Below, the component graph is invalid because it has two terminus nodes: the "RF Classifier" and the "EN Classifier".
# Can't instantiate a component graph with more than one terminus node (here: RF Classifier, EN Classifier)
component_dict = {
    'My Imputer': ['Imputer', 'X', 'y'],
    'RF Classifier': ['Random Forest Classifier', 'My Imputer.x', 'y'],
    'EN Classifier': ['Elastic Net Classifier', 'My Imputer.x', 'y']
}
_____no_output_____
BSD-3-Clause
docs/source/user_guide/component_graphs.ipynb
peterataylor/evalml
Once we have defined a component graph, we can instantiate the graph with specific parameter values for each component using `.instantiate(parameters)`. All components in a component graph must be instantiated before fitting, transforming, or predicting.Below, we instantiate our graph and set the value of our Imputer's `numeric_impute_strategy` to "most_frequent".
cg_simple.instantiate({'My Imputer': {'numeric_impute_strategy': 'most_frequent'}})
_____no_output_____
BSD-3-Clause
docs/source/user_guide/component_graphs.ipynb
peterataylor/evalml
Components in the Component Graph. You can use `.get_component(name)` and provide the unique component name to access any component in the component graph. Below, we can grab our Imputer component and confirm that `numeric_impute_strategy` has indeed been set to "most_frequent".
cg_simple.get_component('My Imputer')
_____no_output_____
BSD-3-Clause
docs/source/user_guide/component_graphs.ipynb
peterataylor/evalml
You can also call `.get_inputs(name)` with the unique component name to retrieve all inputs for that component. Below, we can grab our 'RF Classifier' component and confirm that we use `"My Imputer.x"` as our features input and `"y"` as target input.
cg_simple.get_inputs('RF Classifier')
_____no_output_____
BSD-3-Clause
docs/source/user_guide/component_graphs.ipynb
peterataylor/evalml
Component Graph Computation Order. Upon initialization, each component graph will generate a topological order. We can access this generated order by calling the `.compute_order` attribute. This attribute is used to determine the order in which components should be evaluated during calls to `fit` and `transform`.
cg_simple.compute_order
_____no_output_____
BSD-3-Clause
docs/source/user_guide/component_graphs.ipynb
peterataylor/evalml
Visualizing Component Graphs We can get more information about an instantiated component graph by calling `.describe()`. This method will pretty-print each of the components in the graph and its parameters.
# Using a more involved component graph with more complex edges
component_dict = {
    "Imputer": ["Imputer", "X", "y"],
    "Target Imputer": ["Target Imputer", "X", "y"],
    "OneHot_RandomForest": ["One Hot Encoder", "Imputer.x", "Target Imputer.y"],
    "OneHot_ElasticNet": ["One Hot Encoder", "Imputer.x", "y"],
    "Random Forest": ["Random Forest Classifier", "OneHot_RandomForest.x", "y"],
    "Elastic Net": ["Elastic Net Classifier", "OneHot_ElasticNet.x", "Target Imputer.y"],
    "Logistic Regression": [
        "Logistic Regression Classifier",
        "Random Forest.x",
        "Elastic Net.x",
        "y",
    ],
}
cg_with_estimators = ComponentGraph(component_dict)
cg_with_estimators.instantiate({})
cg_with_estimators.describe()
_____no_output_____
BSD-3-Clause
docs/source/user_guide/component_graphs.ipynb
peterataylor/evalml
We can also visualize a component graph by calling `.graph()`.
cg_with_estimators.graph()
_____no_output_____
BSD-3-Clause
docs/source/user_guide/component_graphs.ipynb
peterataylor/evalml
Component graph methods. Similar to the pipeline structure, we can call `fit`, `transform` or `predict`. We can also call `fit_features`, which will fit all but the final component, and `compute_final_component_features`, which will transform all but the final component. These two methods may be useful in cases where you want to understand what transformed features are being passed into the last component.
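A hedged sketch of those two calls on the graph defined above (the argument signatures are assumed to mirror `fit` and `transform`, and return types may vary between EvalML versions):

```python
# Illustrative only: inspect the features that reach the final component.
from evalml.demos import load_breast_cancer

X, y = load_breast_cancer()

cg_with_estimators.fit_features(X, y)                                  # fit all but the final component
partial = cg_with_estimators.compute_final_component_features(X, y)    # transform all but the final component
partial  # the transformed features passed to the Logistic Regression component
```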
from evalml.demos import load_breast_cancer

X, y = load_breast_cancer()

component_dict = {
    'My Imputer': ['Imputer', 'X', 'y'],
    'OHE': ['One Hot Encoder', 'My Imputer.x', 'y']
}
cg_with_final_transformer = ComponentGraph(component_dict)
cg_with_final_transformer.instantiate({})
cg_with_final_transformer.fit(X, y)

# We can call `transform` for ComponentGraphs with a final transformer
cg_with_final_transformer.transform(X, y)

cg_with_estimators.fit(X, y)
# We can call `predict` for ComponentGraphs with a final estimator
cg_with_estimators.predict(X)
_____no_output_____
BSD-3-Clause
docs/source/user_guide/component_graphs.ipynb
peterataylor/evalml
![alt text](https://github.com/callysto/callysto-sample-notebooks/blob/master/notebooks/images/Callysto_Notebook-Banner_Top_06.06.18.jpg?raw=true)

Basics of Python

This notebook will provide the basics of Python and an introduction to DataFrames. To enter code in Colab we are going to use **Code Cells**. Click on `+Code` in the top left corner (or in between cells) to create a new Code cell.
# Anything in a code cell after a pound sign is a comment!
# You can type anything here and it will not be executed

# Variables are defined with an equals sign (=)
my_variable = 10                  # You cannot put spaces in variable names.
other_variable = "some text"      # variables need not be numbers!

# print() will output our variables below the cell
print(my_variable, other_variable)

# Variables are also shared between cells. You can also print words and sentences directly.
print(my_variable, other_variable, "We can print text directly in quotes")

# You can do mathematical operations in Python
x = 5
y = 10

add = x + y
subtract = x - y
multiply = x * y
divide = x / y

print(add, subtract, multiply, divide)
_____no_output_____
CC-BY-4.0
Prep_materials/Python and pandas basics solutions.ipynb
lgfunderburk/hackathon
---
Exercise 1

1. In the cell below, assign variable **z** to your name and run the cell.
2. In the cell below, write a comment on the same line where you define z. Run the cell to make sure the comment is not changing anything.
---
## Enter your code in the line below z = "your name here" ## print(z, "is loving Python!")
_____no_output_____
CC-BY-4.0
Prep_materials/Python and pandas basics solutions.ipynb
lgfunderburk/hackathon
Basics of DataFrames and pandas

A **DataFrame** is a two-dimensional data structure, similar to a table or a spreadsheet. In Python there is a library of pre-defined functions to work with DataFrames: **pandas**.
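For a first taste of what a DataFrame looks like, here is a tiny hand-made one (the column names and values are just for illustration):

```python
import pandas as pd

# A small DataFrame built from a dictionary: keys become columns, lists become rows
people = pd.DataFrame({
    "name": ["Ada", "Grace", "Alan"],
    "age": [36, 45, 41]
})
people
```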
#load "pandas" library under short name "pd" import pandas as pd
_____no_output_____
CC-BY-4.0
Prep_materials/Python and pandas basics solutions.ipynb
lgfunderburk/hackathon
To read a file in CSV format, the **read_csv()** function is used; it can read a local file or a file from a URL.
# we have a csv file stored in the cloud - it is 10 rows of data related to Titanic
url = "https://swift-yeg.cloud.cybera.ca:8080/v1/AUTH_d22d1e3f28be45209ba8f660295c84cf/hackaton/titanic_short.csv"

# read the csv file from the url and save it as a dataframe
titanic = pd.read_csv(url)

# shape shows the number of rows and the number of columns
titanic.shape
_____no_output_____
CC-BY-4.0
Prep_materials/Python and pandas basics solutions.ipynb
lgfunderburk/hackathon
Basic operations with DataFrames: select rows/columns by name and index
# Getting column names
titanic.columns

# Selecting one column
titanic[['survived']]

# Selecting multiple columns
titanic[['survived','age']]

# Selecting the first 5 rows
# try changing to head(10) or head(2)
titanic.head()

# Getting the index (row names) - note that row names start at 0
titanic.index.tolist()

# Selecting one row
titanic.iloc[[2]]  # (it's row 3, remember row numbers start at zero)

# Selecting multiple rows (rows 2 and 5):
titanic.iloc[[2,5]]

# Selecting rows and columns:
titanic[['survived','age']].iloc[[2,5]]
_____no_output_____
CC-BY-4.0
Prep_materials/Python and pandas basics solutions.ipynb
lgfunderburk/hackathon
---
Exercise 2

1. In the cell below, uncomment the code.
2. Change "column1", "column2", and "column3" to "fare", "age", and "class" to get these 3 columns.
---
#titanic[["column1","column2","column3"]]
_____no_output_____
CC-BY-4.0
Prep_materials/Python and pandas basics solutions.ipynb
lgfunderburk/hackathon
Add a new column using an existing one
#creating new column - age in months by multiplying "age" column by 12 titanic['age_months'] = titanic['age']*12 #look at the very last column 'age_months' titanic
_____no_output_____
CC-BY-4.0
Prep_materials/Python and pandas basics solutions.ipynb
lgfunderburk/hackathon
Select specific rows by condition
# create a condition: for example sex equals female
condition = titanic['sex'] == "female"  # note: == checks rather than assigns
condition  # it shows True for rows where sex is equal to female

# select only rows where the condition is True (all female)
titanic[condition]

# other examples of conditions:

# Not equal
condition1 = titanic['sex'] != "female"

# equal to one value in a list
condition2 = titanic['class'].isin(["First", "Second"])

# Multiple conditions - "and" (sex is "female" and class is "First")
condition3 = (titanic['sex'] == "female") & (titanic['class'] == "First")

# Multiple conditions - "or" (sex is "female" or class is "First")
condition4 = (titanic['sex'] == "female") | (titanic['class'] == "First")
_____no_output_____
CC-BY-4.0
Prep_materials/Python and pandas basics solutions.ipynb
lgfunderburk/hackathon
---
Exercise 3

1. Change the cell below to subset the data where sex is "female" and embark_town is "Cherbourg".
---
condition5 = (titanic['sex']=="female") | (titanic['class']=="First") #change the condition here titanic[condition5]
_____no_output_____
CC-BY-4.0
Prep_materials/Python and pandas basics solutions.ipynb
lgfunderburk/hackathon
Sorting
# sorting by pclass - note the ascending parameter, try changing it to False
titanic.sort_values("pclass", ascending=True)

# sort by two columns - first by pclass and then by age
titanic.sort_values(["pclass", "age"])
_____no_output_____
CC-BY-4.0
Prep_materials/Python and pandas basics solutions.ipynb
lgfunderburk/hackathon
Grouping and calculating summaries on groups
# split the data into groups based on all unique values in the "survived" column -
# the first group is survived (1), the second group is not survived (0).
# calculate the average (mean) of every column for both groups
titanic.groupby("survived").mean()

# other operations you can do on groups are
# min(), max(), sum()

# you can do multiple operations at once using the agg() function
titanic.groupby("survived").agg(["mean", "max"])
_____no_output_____
CC-BY-4.0
Prep_materials/Python and pandas basics solutions.ipynb
lgfunderburk/hackathon
---
Exercise 4

1. Modify the cell below to calculate **max()** for every column grouped by "class".
---
titanic.groupby("survived").mean() ##modify this cell
_____no_output_____
CC-BY-4.0
Prep_materials/Python and pandas basics solutions.ipynb
lgfunderburk/hackathon
Calculating number of rows by group
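(Equivalently, `value_counts()` gives the same per-group counts in a single step; the cell below takes the `groupby(...).size()` route.)

```python
# Same counts via value_counts()
titanic["sex"].value_counts()
```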
# using the size() function to calculate the number of rows per group
row_counts = titanic.groupby("sex").size()

# create a new column "count" to store the row counts
row_counts = row_counts.reset_index(name="count")
row_counts
_____no_output_____
CC-BY-4.0
Prep_materials/Python and pandas basics solutions.ipynb
lgfunderburk/hackathon
Univariate Process Kalman Filtering Example

In this Jupyter notebook we implement the example given on pages 11-15 of [An Introduction to the Kalman Filter](http://www.cs.unc.edu/~welch/kalman/kalmanIntro.html) by Greg Welch and Gary Bishop. It is written in the spirit of Andrew Straw's [SciPy cookbook](http://scipy-cookbook.readthedocs.io/items/KalmanFiltering.html). Our aim is to show how one can use the higher-level TSA routines in this simple setting.
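For reference, the scalar (univariate) Kalman recursions from the Welch and Bishop tutorial that this example reproduces are, with process variance $Q$ and measurement variance $R$:

Time update: $\hat{x}^-_k = \hat{x}_{k-1}$, $\quad P^-_k = P_{k-1} + Q$

Measurement update: $K_k = \dfrac{P^-_k}{P^-_k + R}$, $\quad \hat{x}_k = \hat{x}^-_k + K_k\,(z_k - \hat{x}^-_k)$, $\quad P_k = (1 - K_k)\,P^-_k$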
import os, sys
sys.path.append(os.path.abspath('../../main/python'))

import numpy as np
import matplotlib.pyplot as plt

from thalesians.tsa.distrs import NormalDistr as N
import thalesians.tsa.filtering as filtering
import thalesians.tsa.filtering.kalman as kalman
import thalesians.tsa.processes as proc
import thalesians.tsa.pypes as pypes
import thalesians.tsa.random as rnd
import thalesians.tsa.simulation as sim

%matplotlib inline
plt.rcParams['figure.figsize'] = (16, 10)

rnd.random_state(np.random.RandomState(seed=42), force=True)

pype = pypes.Pype(pypes.Direction.OUTGOING, name='FILTER', port=5758)

n = 100                               # number of iterations
x = -0.37727                          # true value
z = np.random.normal(x, .1, size=n)   # observations (normal about x, sd=0.1)

posteriors = np.zeros(n)              # a posteriori estimate of x
P = np.zeros(n)                       # a posteriori error estimate
priors = np.zeros(n)                  # a priori estimate of x
Pminus = np.zeros(n)                  # a priori error estimate
P[0] = 1.0

Q = 1e-5                              # process variance
R = 0.1**2                            # estimate of measurement variance (change to see the effect)

# Instantiate the process
W = proc.WienerProcess(mean=0, vol=Q)

# Instantiate the filter
kf = kalman.KalmanFilter(time=0, state_distr=N(0., 1.), process=W, pype=pype)
observable = kf.create_named_observable('noisy observation', kalman.KalmanFilterObsModel.create(1.), W)

for k in range(0, n):
    prior = observable.predict(k)
    P[k] = prior.distr.cov[0][0]
    obs = observable.observe(time=k, obs=N(z[k], R), true_value=x)
    posterior = observable.predict(k)
    posteriors[k] = posterior.distr.mean[0][0]

plt.figure()
plt.plot(z, 'k+', label='noisy measurements')
plt.plot(posteriors, 'b-', label='a posteri estimate')
plt.axhline(x, color='g', label='truth value')
plt.legend()
plt.title('Estimate vs. iteration step', fontweight='bold')
plt.xlabel('Iteration')
plt.ylabel('Voltage')

plt.figure()
plt.plot(P[1:], label='a priori error estimate')
plt.title('Estimated $\it{\mathbf{a \ priori}}$ error vs. iteration step', fontweight='bold')
plt.xlabel('Iteration')
plt.ylabel('$(Voltage)^2$')
plt.ylim([0, 0.015])
plt.show()

pype.close()
_____no_output_____
BSD-3-Clause
tsa/src/jupyter/python/welchbishopkalmanfilteringtutorial.ipynb
mikimaus78/ml_monorepo
- Combine all data
import pandas as pd
from os import listdir

path = '../data/'
files = listdir('../data/')

df = pd.DataFrame(columns=["url", "query", "text"])
for f in files:
    temp = pd.read_csv(path + f)
    if 'article-name' in temp.columns:
        temp.rename(columns={'article-name':'name', 'article-url':'url', 'content':'text', 'keyword':'query'}, inplace=True)
    if len(temp) < 1:
        continue
    df = df.append(temp)

df.drop(['Unnamed: 0', 'name'], inplace=True, axis=1)
_____no_output_____
MIT
doc2vec/Word2Vec to cyber security data.ipynb
sagar1993/NLP_cyber_security
- Data preprocessing:
  1. stop word removal
  2. lower case letters
  3. non-ASCII character removal
from nltk.corpus import stopwords
import re

stop = stopwords.words('english')

def normalize_text(text):
    norm_text = text.lower()
    # Replace breaks with spaces
    norm_text = norm_text.replace('<br />', ' ')
    # Pad punctuation with spaces on both sides
    norm_text = re.sub(r"([\.\",\(\)!\?;:])", " \\1 ", norm_text)
    return norm_text

def remove_stop_words(text):
    return " ".join([item.lower() for item in text.split() if item not in stop])

def remove_non_ascii(text):
    return ''.join(["" if ord(i) < 32 or ord(i) > 126 else i for i in text])

df['text'] = df['text'].apply(remove_non_ascii)
df['text'] = df['text'].apply(normalize_text)
df['text'] = df['text'].apply(remove_stop_words)
df["text"] = df['text'].str.replace('[^\w\s]', '')
_____no_output_____
MIT
doc2vec/Word2Vec to cyber security data.ipynb
sagar1993/NLP_cyber_security
- A simple Word2Vec model. In this section we apply a simple Word2Vec model to the tokenized data.
from gensim.models import Word2Vec
from nltk import word_tokenize

df['tokenized_text'] = df.apply(lambda row: word_tokenize(row['text']), axis=1)

model = Word2Vec(df['tokenized_text'], size=100)

for num in [1, 3, 5, 10, 12, 16, 17, 18, 19, 28, 29, 30, 32, 33, 34, 37, 38]:
    term = "apt%s" % str(num)
    if term in model.wv.vocab:
        print("Most similar words for %s" % term)
        for t in model.most_similar(term):
            print(t)
        print('\n')
Most similar words for apt1 ('mandiant', 0.9992831349372864) ('according', 0.9988211989402771) ('china', 0.9986724257469177) ('defense', 0.9986507892608643) ('kaspersky', 0.9986412525177002) ('iranian', 0.9985784888267517) ('military', 0.9983772039413452) ('lab', 0.9978839159011841) ('detected', 0.997614860534668) ('published', 0.997364342212677) Most similar words for apt3 ('strontium', 0.9977763891220093) ('cozy', 0.9963721036911011) ('tracked', 0.9958826899528503) ('team', 0.994817852973938) ('also', 0.9941498041152954) ('menupass', 0.9935141205787659) ('linked', 0.9934953451156616) ('axiom', 0.9930843114852905) ('chinalinked', 0.9929003715515137) ('behind', 0.9923593997955322) Most similar words for apt10 ('apt37', 0.9996817111968994) ('sophisticated', 0.9994451403617859) ('naikon', 0.9994421601295471) ('overlap', 0.999294638633728) ('entities', 0.9992740154266357) ('micro', 0.9989956021308899) ('noticed', 0.9988883137702942) ('tracks', 0.9988324642181396) ('primarily', 0.9988023042678833) ('associated', 0.9987926483154297) Most similar words for apt17 ('vietnamese', 0.9984132051467896) ('hellsing', 0.9982680082321167) ('netherlands', 0.9982122182846069) ('turla', 0.9981800317764282) ('aligns', 0.99793940782547) ('region', 0.997829258441925) ('continues', 0.9977688193321228) ('operating', 0.9977645874023438) ('variety', 0.9977619647979736) ('aware', 0.9976860284805298) Most similar words for apt28 ('sofacy', 0.9984127283096313) ('bear', 0.9978348612785339) ('known', 0.9976195096969604) ('fancy', 0.9963506460189819) ('storm', 0.9960793256759644) ('apt', 0.995140790939331) ('pawn', 0.9940293431282043) ('sednit', 0.9939311742782593) ('tsar', 0.9931427240371704) ('actor', 0.9903273582458496) Most similar words for apt29 ('sandworm', 0.9979566335678101) ('2010', 0.9978185892105103) ('including', 0.9976153373718262) ('observed', 0.9976032972335815) ('overview', 0.9973697662353516) ('spotted', 0.9972324371337891) ('aimed', 0.9965631365776062) ('2007', 0.9963749647140503) ('buckeye', 0.9962424039840698) ('aka', 0.9962256550788879) Most similar words for apt30 ('companies', 0.998908281326294) ('prolific', 0.9988271594047546) ('variety', 0.9987081289291382) ('expanded', 0.9986468553543091) ('focuses', 0.9986134767532349) ('continues', 0.998511552810669) ('connected', 0.9984531402587891) ('detailed', 0.9984067678451538) ('interests', 0.9984041452407837) ('actively', 0.9984041452407837) Most similar words for apt32 ('continues', 0.9995431900024414) ('region', 0.9994964003562927) ('ties', 0.9994940757751465) ('destructive', 0.999233067035675) ('interests', 0.9991957545280457) ('europe', 0.9991946220397949) ('dukes', 0.9991874098777771) ('mainly', 0.9991647005081177) ('countries', 0.9991510510444641) ('apt38', 0.9991440176963806) Most similar words for apt33 ('multiple', 0.9996379613876343) ('japanese', 0.9994475841522217) ('revealed', 0.9994279146194458) ('involved', 0.9992635250091553) ('south', 0.9992367029190063) ('2009', 0.998937726020813) ('responsible', 0.9989287257194519) ('evidence', 0.9987417459487915) ('associated', 0.9987338781356812) ('determined', 0.9987262487411499) Most similar words for apt34 ('shift', 0.9994713068008423) ('particularly', 0.9993870258331299) ('continue', 0.9993187785148621) ('indicate', 0.9992826581001282) ('crew', 0.9991933703422546) ('consistent', 0.999139666557312) ('palo', 0.999091625213623) ('august', 0.9990721344947815) ('added', 0.9990265369415283) ('provided', 0.9990137815475464) Most similar words for apt37 ('apt10', 0.9996817111968994) ('sophisticated', 
0.9993605017662048) ('entities', 0.9991942048072815) ('overlap', 0.9991032481193542) ('naikon', 0.9991011619567871) ('micro', 0.9990009069442749) ('primarily', 0.9989291429519653) ('associated', 0.9988642930984497) ('highly', 0.9987080097198486) ('noticed', 0.9986851811408997) Most similar words for apt38 ('continues', 0.9994156956672668) ('individuals', 0.9993045330047607) ('early', 0.9992733001708984) ('turla', 0.9992636442184448) ('stone', 0.9992102980613708) ('experts', 0.9991610050201416) ('europe', 0.9991508722305298) ('apt32', 0.9991441965103149) ('kitten', 0.9991305470466614) ('region', 0.9991227388381958)
MIT
doc2vec/Word2Vec to cyber security data.ipynb
sagar1993/NLP_cyber_security
Here we got one interesting result for apt17 as apt28, but for all the other Word2Vec results we observe that we are getting words like malware, attackers, groups, and backdoor among the most similar items. It might be the case that the names of attacker groups are omitted because they are phrases rather than simple words.

- Word2Vec with bigram phrases. Here we try to find bigram phrases in the dataset and apply the Word2Vec model to them.
from gensim.models import Phrases
from collections import Counter

bigram = Phrases()
bigram.add_vocab(df['tokenized_text'])

bigram_counter = Counter()
for key in bigram.vocab.keys():
    if len(key.split("_")) > 1:
        bigram_counter[key] += bigram.vocab[key]

for key, counts in bigram_counter.most_common(20):
    print('{0: <20} {1}'.format(key.encode("utf-8"), counts))

bigram_model = Word2Vec(bigram[df['tokenized_text']], size=100)

for num in [1, 3, 5, 10, 12, 16, 17, 18, 19, 28, 29, 30, 32, 33, 34, 37, 38]:
    term = "apt%s" % str(num)
    if term in bigram_model.wv.vocab:
        print("Most similar words for %s" % term)
        for t in bigram_model.most_similar(term):
            print(t)
        print('\n')
Most similar words for apt1 (u'different', 0.99991774559021) (u'likely', 0.9999154806137085) (u'well', 0.9999152421951294) (u'says', 0.9999047517776489) (u'multiple', 0.9999043941497803) (u'threat_actors', 0.9998949766159058) (u'network', 0.9998934268951416) (u'according', 0.9998912811279297) (u'compromised', 0.9998894929885864) (u'related', 0.999876856803894) Most similar words for apt3 (u'actor', 0.9998462796211243) (u'described', 0.9998243451118469) (u'also_known', 0.9998069405555725) (u'actors', 0.9997928738594055) (u'recently', 0.9997922778129578) (u'experts', 0.999782919883728) (u'apt29', 0.9997620582580566) (u'identified', 0.9997564554214478) (u'two', 0.9997557401657104) (u'domains', 0.9997459650039673) Most similar words for apt10 (u'time', 0.999898374080658) (u'analysis', 0.9998810291290283) (u'u', 0.9998781681060791) (u'version', 0.9998765587806702) (u'based', 0.9998717308044434) (u'provided', 0.9998701810836792) (u'least', 0.9998694658279419) (u'mandiant', 0.9998666644096375) (u'governments', 0.9998637437820435) (u'apt32', 0.9998601675033569) Most similar words for apt17 (u'connections', 0.9996646642684937) (u'email', 0.9996588230133057) (u'find', 0.9996576905250549) (u'across', 0.9996559023857117) (u'order', 0.9996424913406372) (u'web', 0.9996327757835388) (u'user', 0.9996271133422852) (u'connection', 0.9996263980865479) (u'key', 0.9996225833892822) (u'shows', 0.9996156096458435) Most similar words for apt28 (u'fireeye', 0.9996447563171387) (u'using', 0.999575138092041) (u'targeted', 0.9995599985122681) (u'sofacy', 0.9995203614234924) (u'known', 0.9995172619819641) (u'tools', 0.9993760585784912) (u'spotted', 0.9993688464164734) (u'researchers', 0.9991514086723328) (u'report', 0.9991289973258972) (u'also', 0.9991098046302795) Most similar words for apt29 (u'recently', 0.9998775720596313) (u'however', 0.9998724460601807) (u'actors', 0.9998624920845032) (u'two', 0.999857485294342) (u'vulnerabilities', 0.9998537302017212) (u'identified', 0.9998456835746765) (u'first', 0.9998396635055542) (u'described', 0.9998297691345215) (u'leveraged', 0.999822735786438) (u'seen', 0.9998195767402649) Most similar words for apt30 (u'research', 0.999484658241272) (u'published', 0.9994805455207825) (u'noted', 0.9994770288467407) (u'fireeye_said', 0.9994675517082214) (u'account', 0.9994667768478394) (u'provide', 0.9994657039642334) (u'command_control', 0.9994556903839111) (u'splm', 0.9994515776634216) (u'c2', 0.9994462728500366) (u'2013', 0.9994445443153381) Most similar words for apt32 (u'techniques', 0.9999111890792847) (u'additional', 0.9999087452888489) (u'analysis', 0.9999069571495056) (u'many', 0.9999059438705444) (u'companies', 0.9998983144760132) (u'based', 0.9998965263366699) (u'part', 0.9998964071273804) (u'backdoors', 0.999894380569458) (u'mandiant', 0.9998939037322998) (u'another', 0.9998925924301147) Most similar words for apt33 (u'mandiant', 0.9999130368232727) (u'year', 0.9999092221260071) (u'techniques', 0.9998992681503296) (u'tracked', 0.999896764755249) (u'team', 0.9998966455459595) (u'last_year', 0.9998915195465088) (u'part', 0.9998914003372192) (u'military', 0.9998868703842163) (u'chinese', 0.9998816251754761) (u'threat', 0.9998784065246582) Most similar words for apt34 (u'services', 0.9997851848602295) (u'targeted_attacks', 0.9997463226318359) (u'example', 0.9997448325157166) (u'called', 0.999743640422821) (u'available', 0.9997414946556091) (u'able', 0.9997405409812927) (u'activities', 0.999738335609436) (u'2018', 0.9997329711914062) (u'make', 0.9997280836105347) (u'details', 
0.9997265934944153) Most similar words for apt37 (u'flaw', 0.999801754951477) (u'2014', 0.9997944831848145) (u'2013', 0.9997936487197876) (u'efforts', 0.999792754650116) (u'made', 0.9997915625572205) (u'designed', 0.9997785091400146) (u'list', 0.9997777938842773) (u'media', 0.9997776746749878) (u'make', 0.9997761845588684) (u'attribution', 0.9997747540473938) Most similar words for apt38 (u'command_control', 0.99981290102005) (u'attribution', 0.9997984170913696) (u'media', 0.9997962117195129) (u'activities', 0.9997954368591309) (u'2014', 0.9997861385345459) (u'software', 0.9997845888137817) (u'see', 0.9997791051864624) (u'research', 0.999776303768158) (u'designed', 0.9997758865356445) (u'even', 0.9997751712799072)
MIT
doc2vec/Word2Vec to cyber security data.ipynb
sagar1993/NLP_cyber_security
After applying bigram phrases we still do not see the desired results.

Word2Vec model topic by topic using bigram phrases
df_doc = df[['query', 'text']]
df_doc

df_doc = df_doc.groupby(['query'], as_index=False).first()
df_doc

from nltk.corpus import stopwords
import re

stop = stopwords.words('english') + ['fireeye', 'crowdstrike', 'symantec', 'rapid7', 'securityweek', 'kaspersky']

def normalize_text(text):
    norm_text = text.lower()
    # Replace breaks with spaces
    norm_text = norm_text.replace('<br />', ' ')
    # Pad punctuation with spaces on both sides
    norm_text = re.sub(r"([\.\",\(\)!\?;:])", " \\1 ", norm_text)
    return norm_text

def remove_stop_words(text):
    return " ".join([item.lower() for item in text.split() if item not in stop])

def remove_non_ascii(text):
    return ''.join(["" if ord(i) < 32 or ord(i) > 126 else i for i in text])

df_doc['text'] = df_doc['text'].apply(remove_non_ascii)
df_doc['text'] = df_doc['text'].apply(normalize_text)
df_doc['text'] = df_doc['text'].apply(remove_stop_words)
df_doc["text"] = df_doc['text'].str.replace('[^\w\s]', '')
df_doc

df_doc['tokenized_text'] = df_doc.apply(lambda row: word_tokenize(row['text']), axis=1)
df_doc

from gensim.models import Phrases
from collections import Counter

for num in ['APT1', 'APT10', 'APT12', 'APT15', 'APT16', 'APT17', 'APT18', 'APT27', 'APT28', 'APT29',
            'APT3', 'APT30', 'APT32', 'APT33', 'APT34', 'APT35', 'APT37', 'APT38']:
    temp = df_doc[df_doc['query'] == num]
    print(temp.shape)
    if temp.shape[0] == 0:
        continue
    bigram = Phrases()
    bigram.add_vocab(temp['tokenized_text'])
    bigram_model = Word2Vec(bigram[temp['tokenized_text']], size=100)
    term = num.lower()
    if term in bigram_model.wv.vocab:
        print("Most similar words for %s" % term)
        for t in bigram_model.most_similar(term, topn=20):
            print(t)
        print('\n')

num = 38
temp = df_doc[df_doc['query'] == 'APT%s' % num]
bigram = Phrases()
bigram.add_vocab(temp['tokenized_text'])
bigram_model = Word2Vec(bigram[temp['tokenized_text']], size=100)
term = 'apt%s' % num
if term in bigram_model.wv.vocab:
    print("Most similar words for %s" % term)
    for t in bigram_model.most_similar(term, topn=20):
        print(t)
    print('\n')

temp.shape
_____no_output_____
MIT
doc2vec/Word2Vec to cyber security data.ipynb
sagar1993/NLP_cyber_security
Logistic Regression Project

In this project we will be working with a fake advertising data set, indicating whether or not a particular internet user clicked on an advertisement. We will try to create a model that will predict whether or not they will click on an ad based off the features of that user.

This data set contains the following features:

* 'Daily Time Spent on Site': consumer time on site in minutes
* 'Age': customer age in years
* 'Area Income': Avg. Income of geographical area of consumer
* 'Daily Internet Usage': Avg. minutes a day consumer is on the internet
* 'Ad Topic Line': Headline of the advertisement
* 'City': City of consumer
* 'Male': Whether or not consumer was male
* 'Country': Country of consumer
* 'Timestamp': Time at which consumer clicked on Ad or closed window
* 'Clicked on Ad': 0 or 1 indicated clicking on Ad

Import Libraries

**Import a few libraries you think you'll need (Or just import them as you go along!)**
import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline import warnings warnings.filterwarnings("ignore")
_____no_output_____
MIT
Logistic Regression/17.4 Logistic Regression Project.ipynb
CommunityOfCoders/ML_Workshop_Teachers
Get the Data**Read in the advertising.csv file and set it to a data frame called ad_data.**
ad_data = pd.read_csv("advertising.csv")
_____no_output_____
MIT
Logistic Regression/17.4 Logistic Regression Project.ipynb
CommunityOfCoders/ML_Workshop_Teachers
**Check the head of ad_data**
ad_data.head()
_____no_output_____
MIT
Logistic Regression/17.4 Logistic Regression Project.ipynb
CommunityOfCoders/ML_Workshop_Teachers
**Use `info()` and `describe()` on ad_data.**
ad_data.info() ad_data.describe()
_____no_output_____
MIT
Logistic Regression/17.4 Logistic Regression Project.ipynb
CommunityOfCoders/ML_Workshop_Teachers
Exploratory Data Analysis. Let's use seaborn to explore the data! Try recreating the plots shown below. **Create a histogram of the Age.**
sns.set_style('whitegrid') ad_data['Age'].hist(bins=30) plt.xlabel("Age")
_____no_output_____
MIT
Logistic Regression/17.4 Logistic Regression Project.ipynb
CommunityOfCoders/ML_Workshop_Teachers
**Create a jointplot showing Area Income versus Age.**
sns.jointplot(x='Age',y='Area Income',data=ad_data)
_____no_output_____
MIT
Logistic Regression/17.4 Logistic Regression Project.ipynb
CommunityOfCoders/ML_Workshop_Teachers
**Create a jointplot showing the kde distributions of Daily Time spent on site vs. Age.**
sns.jointplot(x='Age',y='Daily Time Spent on Site',data=ad_data,color='red',kind='kde')
_____no_output_____
MIT
Logistic Regression/17.4 Logistic Regression Project.ipynb
CommunityOfCoders/ML_Workshop_Teachers
**Create a jointplot of 'Daily Time Spent on Site' vs. 'Daily Internet Usage'.**
sns.jointplot(x="Daily Time Spent on Site",y="Daily Internet Usage",data=ad_data,color='green')
_____no_output_____
MIT
Logistic Regression/17.4 Logistic Regression Project.ipynb
CommunityOfCoders/ML_Workshop_Teachers
**Finally, create a pairplot with the hue defined by the 'Clicked on Ad' column feature.**
sns.pairplot(ad_data,hue='Clicked on Ad',palette='bwr')
_____no_output_____
MIT
Logistic Regression/17.4 Logistic Regression Project.ipynb
CommunityOfCoders/ML_Workshop_Teachers
Logistic Regression. Now it's time to do a train test split, and train our model! You'll have the freedom here to choose columns that you want to train on! **Split the data into a training set and a testing set using train_test_split.**
from sklearn.model_selection import train_test_split X = ad_data[['Daily Time Spent on Site', 'Age', 'Area Income','Daily Internet Usage', 'Male']] y = ad_data['Clicked on Ad'] X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.33,random_state = 42)
_____no_output_____
MIT
Logistic Regression/17.4 Logistic Regression Project.ipynb
CommunityOfCoders/ML_Workshop_Teachers
**Train and fit a logistic regression model on the training set.**
from sklearn.linear_model import LogisticRegression logmodel = LogisticRegression() logmodel.fit(X_train,y_train)
_____no_output_____
MIT
Logistic Regression/17.4 Logistic Regression Project.ipynb
CommunityOfCoders/ML_Workshop_Teachers
Predictions and Evaluations. **Now predict values for the testing data.**
predictions = logmodel.predict(X_test)
_____no_output_____
MIT
Logistic Regression/17.4 Logistic Regression Project.ipynb
CommunityOfCoders/ML_Workshop_Teachers
**Create a classification report for the model.**
from sklearn.metrics import classification_report print(classification_report(y_test,predictions)) from sklearn.metrics import confusion_matrix confusion_matrix(y_test,predictions)
_____no_output_____
MIT
Logistic Regression/17.4 Logistic Regression Project.ipynb
CommunityOfCoders/ML_Workshop_Teachers
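As an optional sanity check (not part of the original project), the headline metrics reported by `classification_report` can be recomputed by hand from a confusion matrix; the matrix below uses made-up counts rather than the actual model output:

```
import numpy as np

cm = np.array([[156, 6],      # row 0: true class 0 (did not click)
               [10, 158]])    # row 1: true class 1 (clicked)
tn, fp, fn, tp = cm.ravel()

accuracy = (tp + tn) / cm.sum()
precision = tp / (tp + fp)
recall = tp / (tp + fn)
print('accuracy={:.3f} precision={:.3f} recall={:.3f}'.format(accuracy, precision, recall))
```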
Forecasting - Facebook Prophet
import pandas as pd
import numpy as np

import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
mpl.rcParams['figure.figsize'] = (16, 10)
pd.set_option('display.max_rows', 500)

import plotly.graph_objects as go

from fbprophet import Prophet  # needed for the forecasting cells below

plt.style.use('fivethirtyeight')

# a MAPE helper is defined further down in the notebook
#def mean_absolute_percentage_error(y_true, y_pred):
#    y_true, y_pred = np.array(y_true), np.array(y_pred)
#    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100
_____no_output_____
MIT
notebooks/Modelling_Forecast.ipynb
MayuriKalokhe/Data_Science_Covid-19
Trivial Forecast (rolling mean)
df = pd.DataFrame({'X': np.arange(0, 10)})  # generate an input df
df['y'] = df.rolling(3).mean()              # trivial forecast: 3-point rolling mean
df

# trying it on a small data set first
df_new = pd.read_csv('../data/processed/COVID_small_flat_table.csv', sep=';')
df = df_new[['date', 'India']]
df = df.rename(columns={'date': 'ds', 'India': 'y'})

ax = df.set_index('ds').plot(figsize=(12, 8), logy=True)
ax.set_ylabel('Daily number of confirmed cases')
ax.set_xlabel('Date')
plt.show()

my_model = Prophet(interval_width=0.95)  # changing the uncertainty interval to 95%
#my_model = Prophet(growth='logistic')
#df['cap']=1000000
my_model.fit(df)

# adding more date vectors (to predict) to the existing dataframe
future_dates = my_model.make_future_dataframe(periods=7, freq='D')
#future_dates['cap']=1000000. # only mandatory for the logistic model
future_dates.tail()

forecast = my_model.predict(future_dates)

my_model.plot(forecast, uncertainty=True);

# plotting the same in plotly to overcome the fbprophet rendering drawback
import plotly.offline as py
from fbprophet.plot import plot_plotly

fig = plot_plotly(my_model, forecast)
fig.update_layout(
    width=1024,
    height=900,
    xaxis_title="Time",
    yaxis_title="Confirmed infected people (source johns hopkins csse, log-scale)",
)
fig.update_yaxes(type="log", range=[1.1, 5.5])
py.iplot(fig)

forecast.sort_values(by='ds').head()

# checking what information we get from this prediction model
my_model.plot_components(forecast);  # decomposing the prediction into trend and seasonal patterns

# to get a better visualization of the trend, plot it directly from the forecast dataframe
forecast[['ds', 'trend']].set_index('ds').plot(figsize=(12, 8), logy=True)
_____no_output_____
MIT
notebooks/Modelling_Forecast.ipynb
MayuriKalokhe/Data_Science_Covid-19
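The commented-out lines above hint at Prophet's logistic-growth mode. A minimal sketch of how that variant would be wired up is shown below; the cap of 1,000,000 is just the notebook's commented placeholder, kept here as an assumed saturation level:

```
# logistic growth requires a 'cap' column in both the training frame and the future frame
df['cap'] = 1000000.
logistic_model = Prophet(growth='logistic', interval_width=0.95)
logistic_model.fit(df)

future = logistic_model.make_future_dataframe(periods=7, freq='D')
future['cap'] = 1000000.
logistic_forecast = logistic_model.predict(future)
logistic_model.plot(logistic_forecast, uncertainty=True);
```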
Cross Validation
from fbprophet.diagnostics import cross_validation df_cv = cross_validation(my_model, initial='40 days', # we take the first 40 days for training period='1 days', # every days a new prediction run horizon = '7 days') #we predict 7days into the future df_cv.sort_values(by=['cutoff','ds'])[0:14] df_cv.head() from fbprophet.diagnostics import performance_metrics df_p = performance_metrics(df_cv) #to understand the error between actual and predicted value df_p from fbprophet.plot import plot_cross_validation_metric fig = plot_cross_validation_metric(df_cv, metric='mape',)
_____no_output_____
MIT
notebooks/Modelling_Forecast.ipynb
MayuriKalokhe/Data_Science_Covid-19
Diagonal Plot
## to understand comparison/under and over estimation wrt. actual values horizon='7 days' df_cv['horizon']=df_cv.ds-df_cv.cutoff date_vec=df_cv[df_cv['horizon']==horizon]['ds'] y_hat=df_cv[df_cv['horizon']==horizon]['yhat'] y=df_cv[df_cv['horizon']==horizon]['y'] df_cv_7=df_cv[df_cv['horizon']==horizon] df_cv_7.tail() type(df_cv['horizon'][0]) fig, ax = plt.subplots(1, 1) ax.plot(np.arange(max(y)),np.arange(max(y)),'--',label='diagonal') ax.plot(y,y_hat,'-',label=horizon) # horizon is a np.timedelta objct ax.set_title('Diagonal Plot') ax.set_ylim(10, max(y)) ax.set_xlabel('truth: y') ax.set_ylabel('prediciton: y_hat') ax.set_yscale('log') ax.set_xlim(10, max(y)) ax.set_xscale('log') ax.legend(loc='best', prop={'size': 16});
_____no_output_____
MIT
notebooks/Modelling_Forecast.ipynb
MayuriKalokhe/Data_Science_Covid-19
Trivial Forecast
def mean_absolute_percentage_error(y_true, y_pred): ''' MAPE calculation ''' y_true, y_pred = np.array(y_true), np.array(y_pred) return np.mean(np.abs((y_true - y_pred) / y_true)) * 100 parse_dates=['date'] df_all = pd.read_csv('../data/processed/COVID_small_flat_table.csv',sep=';',parse_dates=parse_dates) df_trivial=df_all[['date','Germany']] df_trivial=df_trivial.rename(columns={'date': 'ds', 'Germany': 'y'}) df_trivial['y_mean_r3']=df_trivial.y.rolling(3).mean() df_trivial['cutoff']=df_trivial['ds'].shift(7) df_trivial['y_hat']=df_trivial['y_mean_r3'].shift(7) df_trivial['horizon']=df_trivial['ds']-df_trivial['cutoff'] print('MAPE: '+str(mean_absolute_percentage_error(df_trivial['y_hat'].iloc[12:,], df_trivial['y'].iloc[12:,]))) df_trivial
MAPE: 134.06143093647987
MIT
notebooks/Modelling_Forecast.ipynb
MayuriKalokhe/Data_Science_Covid-19
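For reference, the metric computed above is the mean absolute percentage error, $\mathrm{MAPE} = \frac{100}{n}\sum_{i=1}^{n}\left|\frac{y_i - \hat{y}_i}{y_i}\right|$. A quick check on two made-up points (errors of 10% and 20% should average to 15%):

```
print(mean_absolute_percentage_error([100, 200], [110, 160]))   # -> 15.0
```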
Causes of Death in the United States (2011)

To visualize the causes of death in the United States, we look at data from the [Centers for Disease Control](http://www.cdc.gov/). To get the most recent information, we start with [data from 2015 (PDF)](http://www.cdc.gov/nchs/data/hus/hus15.pdf019). Table 7 includes data on the number of legal abortions (699,000 in 2012, the latest year with data available from the CDC). CDC abortion data for 2012 is also given here: [Web](http://www.cdc.gov/reproductivehealth/data_stats/abortion.htm) and [Excel](http://www.cdc.gov/reproductivehealth/data_stats/excel/abortions_2012.xls).

The Guttmacher Institute also provides and analyzes abortion data. According to the institute's webpage:

>The Guttmacher Institute is a primary source for research and policy analysis on abortion in the United States. In many cases, Guttmacher’s data are more comprehensive than state and federal government sources. The Institute’s work examines the incidence of abortion, access to care and barriers to obtaining services, factors underlying women’s decisions to terminate a pregnancy, characteristics of women who have abortions and the conditions under which women obtain them."[[1]](https://www.guttmacher.org/united-states/abortion)

The Guttmacher Institute [September 2016 Fact Sheet](https://www.guttmacher.org/fact-sheet/induced-abortion-united-states?gclid=CjwKEAjw1qHABRDU9qaXs4rtiS0SJADNzJisAvv0VPSG-35GEPayoftb1RwZuF8heovbdZz0u1ns-xoC0cjw_wcB7) indicates that **1.06 million** abortions were performed in 2011. This number is also reported by the CDC in the [2015 document cited above](http://www.cdc.gov/nchs/data/hus/hus15.pdf019). The goal is to compare abortion with other leading causes of death in the United States. We will use the figure of 1.06 million abortions in 2011 reported by the Guttmacher Institute. Data for other causes of death in 2011 is available from the CDC here: [Deaths: Final Data for 2011 (PDF)](http://www.cdc.gov/nchs/data/nvsr/nvsr63/nvsr63_03.pdf)
%matplotlib inline
import pandas as pd
from altair import *

df = pd.read_excel('2011_causes_of_death.xlsx')
df.head()

df = df.iloc[1:]  # .ix is deprecated; positional indexing does the same job here

df.plot(kind="bar", x="Cause", title="United States Causes of Death", legend=False)

c = Chart(df).mark_bar().encode(
    x=X('Number:Q', axis=Axis(title='2011 Deaths')),
    y=Y('Cause:O', sort=SortField(field='Number', order='descending', op='sum'))
)
c
_____no_output_____
MIT
us-deaths.ipynb
mkudija/US-Causes-of-Death
Monte Carlo Methods. In this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore BlackjackEnv. We begin by importing the necessary packages.
import sys import gym import numpy as np from collections import defaultdict from plot_utils import plot_blackjack_values, plot_policy
_____no_output_____
MIT
monte-carlo/Monte_Carlo.ipynb
dmitrylosev/deep-reinforcement-learning
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
env = gym.make('Blackjack-v0')
_____no_output_____
MIT
monte-carlo/Monte_Carlo.ipynb
dmitrylosev/deep-reinforcement-learning
Each state is a 3-tuple of:
- the player's current sum $\in \{0, 1, \ldots, 31\}$,
- the dealer's face up card $\in \{1, \ldots, 10\}$, and
- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).

The agent has two potential actions:

```
    STICK = 0
    HIT = 1
```

Verify this by running the code cell below.
print(env.observation_space) print(env.action_space)
Tuple(Discrete(32), Discrete(11), Discrete(2)) Discrete(2)
MIT
monte-carlo/Monte_Carlo.ipynb
dmitrylosev/deep-reinforcement-learning
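To make the state encoding concrete, a sampled observation can be unpacked as follows (a small illustrative addition, not a cell from the original notebook):

```
player_sum, dealer_card, usable_ace = env.reset()
print('player sum:', player_sum)        # value in 0..31
print('dealer shows:', dealer_card)     # value in 1..10
print('usable ace:', bool(usable_ace))  # True if an ace can currently count as 11

STICK, HIT = 0, 1  # the two legal actions
```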
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
for i_episode in range(3): state = env.reset() while True: print(state) action = env.action_space.sample() state, reward, done, info = env.step(action) if done: print('End game! Reward: ', reward) print('You won :)\n') if reward > 0 else print('You lost :(\n') break
(17, 2, False) End game! Reward: -1 You lost :( (20, 5, False) End game! Reward: -1 You lost :( (20, 7, False) End game! Reward: 1.0 You won :)
MIT
monte-carlo/Monte_Carlo.ipynb
dmitrylosev/deep-reinforcement-learning
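Before learning anything, it can be useful to have a rough baseline. The snippet below (an optional addition, not part of the original notebook) estimates the random policy's win rate by playing many episodes:

```
n_games = 10000
wins = 0
for _ in range(n_games):
    state = env.reset()
    done = False
    while not done:
        state, reward, done, info = env.step(env.action_space.sample())
    if reward > 0:
        wins += 1
print('Random policy win rate: {:.1%}'.format(wins / n_games))
```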
Part 1: MC Prediction. In this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy. The function accepts as **input**:
- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.

It returns as **output**:
- `episode`: This is a list of (state, action, reward) tuples and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
def generate_episode_from_limit_stochastic(bj_env): episode = [] state = bj_env.reset() while True: probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8] action = np.random.choice(np.arange(2), p=probs) next_state, reward, done, info = bj_env.step(action) episode.append((state, action, reward)) state = next_state if done: break return episode
_____no_output_____
MIT
monte-carlo/Monte_Carlo.ipynb
dmitrylosev/deep-reinforcement-learning
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
for i in range(3): print(generate_episode_from_limit_stochastic(env))
[((17, 2, True), 0, 1.0)] [((17, 5, False), 1, 0), ((18, 5, False), 1, 0), ((20, 5, False), 1, -1)] [((10, 9, False), 0, -1.0)]
MIT
monte-carlo/Monte_Carlo.ipynb
dmitrylosev/deep-reinforcement-learning
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.

Your algorithm has four arguments:
- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.
- `generate_episode`: This is a function that returns an episode of interaction.
- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).

The algorithm returns as output:
- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0): # initialize empty dictionaries of arrays returns_sum = defaultdict(lambda: np.zeros(env.action_space.n)) N = defaultdict(lambda: np.zeros(env.action_space.n)) Q = defaultdict(lambda: np.zeros(env.action_space.n)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 1000 == 0: print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="") sys.stdout.flush() episode = generate_episode(env) states, actions, rewards = zip(*episode) discounts = np.array([gamma**i for i in range(len(episode))]) for i, (state, action, reward) in enumerate(episode): returns_sum[state][action] += sum(rewards[i:]*discounts[:len(rewards)-i]) N[state][action] += 1 Q[state][action] = returns_sum[state][action]/N[state][action] return Q
_____no_output_____
MIT
monte-carlo/Monte_Carlo.ipynb
dmitrylosev/deep-reinforcement-learning
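The `mc_prediction_q` implementation above updates the estimate on every visit to a (state, action) pair. For comparison, here is a minimal sketch of the first-visit variant under the same episode format (an illustrative alternative, not the notebook's reference solution):

```
# defaultdict and np are already imported at the top of the notebook
def mc_prediction_q_first_visit(env, num_episodes, generate_episode, gamma=1.0):
    returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
    N = defaultdict(lambda: np.zeros(env.action_space.n))
    Q = defaultdict(lambda: np.zeros(env.action_space.n))
    for _ in range(num_episodes):
        episode = generate_episode(env)
        states, actions, rewards = zip(*episode)
        discounts = np.array([gamma**i for i in range(len(episode))])
        seen = set()
        for i, (state, action, reward) in enumerate(episode):
            if (state, action) in seen:
                continue  # only the first visit to this (state, action) pair counts
            seen.add((state, action))
            returns_sum[state][action] += sum(rewards[i:] * discounts[:len(rewards) - i])
            N[state][action] += 1
            Q[state][action] = returns_sum[state][action] / N[state][action]
    return Q
```

For Blackjack the two variants give essentially the same estimates, since a (state, action) pair is rarely revisited within a single episode.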
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function. To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
# obtain the action-value function Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic) # obtain the corresponding state-value function V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \ for k, v in Q.items()) # plot the state-value function plot_blackjack_values(V_to_plot)
Episode 500000/500000.
MIT
monte-carlo/Monte_Carlo.ipynb
dmitrylosev/deep-reinforcement-learning
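The dict comprehension above implements the standard relation between the state-value and action-value functions under the given stochastic policy, $V_\pi(s) = \sum_a \pi(a|s)\,Q_\pi(s,a)$: for a state whose player sum exceeds 18 this reduces to $V(s) = 0.8\,Q(s,\text{STICK}) + 0.2\,Q(s,\text{HIT})$, and otherwise the weights are swapped, which is exactly what the two `np.dot` terms compute.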
Part 2: MC Control. In this section, you will write your own implementation of constant-$\alpha$ MC control.

Your algorithm has four arguments:
- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.
- `alpha`: This is the step-size parameter for the update step.
- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).

The algorithm returns as output:
- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.

(_Feel free to define additional functions to help you to organize your code._)
def epsilon_soft_policy(Qs, epsilon, nA): policy = np.ones(nA)*epsilon/nA Q_arg_max = np.argmax(Qs) policy[Q_arg_max] = 1 - epsilon + epsilon/nA return policy def generate_episode_with_Q(env, Q, epsilon, nA): episode = [] state = env.reset() while True: probs = epsilon_soft_policy(Q[state], epsilon, nA) action = np.random.choice(np.arange(nA), p=probs) if state in Q else env.action_space.sample() next_state, reward, done, info = env.step(action) episode.append((state, action, reward)) state = next_state if done: break return episode def update_Q(Q, episode, alpha, gamma): states, actions, rewards = zip(*episode) discounts = np.array([gamma**i for i in range(len(episode))]) for i, (state, action, reward) in enumerate(episode): Q_prev = Q[state][action] Q[state][action] = Q_prev + alpha*(sum(rewards[i:]*discounts[:len(rewards)-i]) - Q_prev) return Q def mc_control(env, num_episodes, alpha, gamma=1.0, epsilon_start=1.0, epsilon_decay=0.99999, epsilon_min=0.05): nA = env.action_space.n # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(nA)) epsilon = epsilon_start # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 1000 == 0: print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="") sys.stdout.flush() epsilon = max(epsilon*epsilon_decay, epsilon_min) episode = generate_episode_with_Q(env, Q, epsilon, nA) Q = update_Q(Q, episode, alpha, gamma) policy = dict((key, np.argmax(value)) for key, value in Q.items()) return policy, Q
_____no_output_____
MIT
monte-carlo/Monte_Carlo.ipynb
dmitrylosev/deep-reinforcement-learning
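The core of `update_Q` above is the constant-$\alpha$ update $Q(S_t,A_t) \leftarrow Q(S_t,A_t) + \alpha\,(G_t - Q(S_t,A_t))$. A tiny worked example (with made-up numbers, purely for illustration) shows how a single observed return nudges the estimate:

```
alpha = 1/50    # the step size passed to mc_control in the next cell
Q_old = 0.2     # hypothetical current estimate for some (state, action)
G = 1.0         # hypothetical discounted return observed from that visit
Q_new = Q_old + alpha * (G - Q_old)
print(Q_new)    # 0.2 + 0.02 * 0.8 = 0.216
```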
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
# obtain the estimated optimal policy and action-value function policy, Q = mc_control(env, 500000, 1/50)
Episode 500000/500000.
MIT
monte-carlo/Monte_Carlo.ipynb
dmitrylosev/deep-reinforcement-learning
Next, we plot the corresponding state-value function.
# obtain the corresponding state-value function V = dict((k,np.max(v)) for k, v in Q.items()) # plot the state-value function plot_blackjack_values(V)
_____no_output_____
MIT
monte-carlo/Monte_Carlo.ipynb
dmitrylosev/deep-reinforcement-learning
Finally, we visualize the policy that is estimated to be optimal.
# plot the policy plot_policy(policy)
_____no_output_____
MIT
monte-carlo/Monte_Carlo.ipynb
dmitrylosev/deep-reinforcement-learning
Figure 4 - Effective variables. Create the figure panels describing the dependencies of the model's effective variables on US frequency, US amplitude and sonophore radius. Imports
import os import matplotlib.pyplot as plt from PySONIC.plt import plotEffectiveVariables from PySONIC.utils import logger from PySONIC.neurons import getPointNeuron from utils import saveFigsAsPDF
_____no_output_____
MIT
figure_04.ipynb
tjjlemaire/JNE1685
Plot parameters
figindex = 4 fs = 12 lw = 2 ps = 15 figs = {}
_____no_output_____
MIT
figure_04.ipynb
tjjlemaire/JNE1685
Simulation parameters
pneuron = getPointNeuron('RS') a = 32e-9 # m Fdrive = 500e3 # Hz Adrive = 50e3 # Pa
_____no_output_____
MIT
figure_04.ipynb
tjjlemaire/JNE1685
Panel A: dependence on acoustic amplitude
fig = plotEffectiveVariables(pneuron, a=a, f=Fdrive, cmap='Oranges', zscale='log') figs['a'] = fig
_____no_output_____
MIT
figure_04.ipynb
tjjlemaire/JNE1685
Panel B: dependence on US frequency
fig = plotEffectiveVariables(pneuron, a=a, A=Adrive, cmap='Greens', zscale='log') figs['b'] = fig
28/04/2020 22:17:14: Rounding f value (4000000.000000001) to interval upper bound (4000000.0)
MIT
figure_04.ipynb
tjjlemaire/JNE1685
Panel C: dependence on sonophore radius
fig = plotEffectiveVariables(pneuron, f=Fdrive, A=Adrive, cmap='Blues', zscale='log') figs['c'] = fig
_____no_output_____
MIT
figure_04.ipynb
tjjlemaire/JNE1685
Save figure panels. Save the figure panels as **pdf** in the *figs* sub-folder:
saveFigsAsPDF(figs, figindex)
_____no_output_____
MIT
figure_04.ipynb
tjjlemaire/JNE1685
Thresh - Binary
# 'img' is assumed to be a grayscale image loaded in an earlier cell
res, thresh = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV)  # pixels above 127 become 0, the rest become 255
plt.imshow(thresh, cmap='gray')
_____no_output_____
MIT
Notebooks/.ipynb_checkpoints/Character Segmentation Model -checkpoint.ipynb
swapnilmarathe007/Handwriting-Recognition
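The fixed threshold of 127 above is a manual choice. When lighting varies, Otsu's method can pick the threshold automatically; a minimal sketch, assuming the same grayscale `img` as above:

```
# with THRESH_OTSU, the supplied threshold (0 here) is ignored and one is derived from the image histogram
otsu_value, thresh_otsu = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
print('Otsu threshold:', otsu_value)
plt.imshow(thresh_otsu, cmap='gray')
```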