Table of Contents

1. Dimensionality Reduction
   1.1 The Problem
       1.1.1 Multi-Collinearity
   1.2 Sparsity
2. Principal Component Analysis
   2.1 Important Points
3. Singular Value Decomposition
   3.1 Measuring the Quality of the Reconstruction
   3.2 Heuristic Step for How Many Dimensions to Keep
4. GloVe
   4.1 Using Spacy word2vec embeddings
   4.2 Using GloVe
5. Clustering Text

# Dimensionality Reduction

## The Problem

There is an interesting tradeoff between model performance and a feature's dimensionality:

![http://www.visiondummy.com/2014/04/curse-dimensionality-affect-classification/](images/dimensionality_vs_performance.png)

> *If the amount of available training data is fixed, then overfitting occurs if we keep adding dimensions. On the other hand, if we keep adding dimensions, the amount of **training data needs to grow exponentially fast to maintain the same coverage** and to avoid overfitting* ([Computer Vision for Dummies](http://www.visiondummy.com/2014/04/curse-dimensionality-affect-classification/)).

![http://www.visiondummy.com/2014/04/curse-dimensionality-affect-classification/](images/curseofdimensionality.png)

### Multi-Collinearity

In many cases, there is a high degree of correlation between many of the features in a dataset. This multi-collinearity can drown out the "signal" in your dataset and amplify "outlier" noise.

## Sparsity

- High dimensionality increases the sparsity of your features (**what NLP techniques have we used that illustrate this point?**)
- The density of the training samples decreases when dimensionality increases.
- **Distance measures (Euclidean, for instance) start losing their effectiveness**, because there isn't much difference between the max and min distances in higher dimensions.
- Many models that rely upon **assumptions of Gaussian distributions** (like OLS linear regression), Gaussian mixture models, Gaussian processes, etc. become less and less effective since their distributions become flatter and "fatter tailed".

![http://www.visiondummy.com/2014/04/curse-dimensionality-affect-classification/](images/distance-asymptote.png)

What is the amount of data needed to maintain **20% coverage** of the feature space? For 1 dimension, it is **20% of the entire population's dataset**. For a dimensionality of $D$:

$$X^{D} = .20$$

$$(X^{D})^{\frac{1}{D}} = .20^{\frac{1}{D}}$$

$$X = \sqrt[D]{.20}$$

You can approximate this as

```python
def coverage_requirement(requirement, D):
    return requirement ** (1 / D)

x = []
y = []
for d in range(1, 20):
    y.append(coverage_requirement(0.20, d))
    x.append(d)

import matplotlib.pyplot as plt

plt.plot(x, y)
plt.xlabel("Number of Dimensions")
plt.ylabel("Approximate % of Population Dataset")
plt.title("% of Dataset Needed to Maintain 20% Coverage of Feature Space")
plt.show()
```
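The distance-concentration point above can be checked directly with a quick simulation. The snippet below is a small sketch of my own (not part of the original notebook): it samples random points in a unit hypercube and shows how the relative gap between the farthest and nearest distance shrinks as the number of dimensions grows.

```python
# Sketch: as dimensionality grows, the gap between the nearest and farthest neighbor
# shrinks relative to the nearest distance, so Euclidean distance carries less information.
import numpy as np

rng = np.random.default_rng(0)

for d in [2, 10, 100, 1000]:
    points = rng.random((500, d))     # 500 random points in the unit hypercube
    query = rng.random(d)             # a random query point
    dists = np.linalg.norm(points - query, axis=1)
    relative_contrast = (dists.max() - dists.min()) / dists.min()
    print(f"d={d:5d}  relative contrast (max-min)/min = {relative_contrast:.3f}")
```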
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

# note: the McDonald's reviews are loaded first but then immediately replaced
# by the Amazon toy reviews file, which is what the rest of the cell uses
reviews = pd.read_csv("mcdonalds-yelp-negative-reviews.csv", encoding='latin-1')
reviews = open("poor_amazon_toy_reviews.txt", encoding='latin-1')

#text = reviews["review"].values
text = reviews.readlines()

vectorizer = CountVectorizer(ngram_range=(3,3), min_df=0.01, max_df=0.75, max_features=200)

# tokenize and build vocab
vectorizer.fit(text)
vector = vectorizer.transform(text)
features = vector.toarray()
features_df = pd.DataFrame(features, columns=vectorizer.get_feature_names())

correlations = features_df.corr()
correlations_stacked = correlations.stack().reset_index()

# set column names
correlations_stacked.columns = ['Tri-Gram 1','Tri-Gram 2','Correlation']
correlations_stacked = correlations_stacked[correlations_stacked["Correlation"] < 1]
correlations_stacked = correlations_stacked.sort_values(by=['Correlation'], ascending=False)
correlations_stacked.head()

import numpy as np
import matplotlib.pyplot as plt

# visualize the correlations (install seaborn first)!
import seaborn as sns

# Generate a mask for the upper triangle (plain bool, since np.bool is deprecated)
mask = np.triu(np.ones_like(correlations, dtype=bool))

# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(11, 9))

# Generate a custom diverging colormap
cmap = sns.diverging_palette(220, 10, as_cmap=True)

# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(correlations, mask=mask, cmap=cmap, vmax=.3, center=0,
            square=True, linewidths=.5, cbar_kws={"shrink": .5})
_____no_output_____
MIT
week8/Dimensionality Reduction and Clustering.ipynb
yixinouyang/dso-560-nlp-and-text-analytics
# Principal Component Analysis

If you have an original matrix $Z$, you can decompose this matrix into two smaller matrices $X$ and $Q$.

## Important Points:

- Multiplying a vector by a matrix typically changes the direction of the vector. For instance: Lazy Programmer - Tutorial to PCA

However, there are eigenvalues $\lambda$ and eigenvectors $v$ such that

$$\Sigma_{X}v = \lambda v$$

Multiplying the eigenvector $v$ by the eigenvalue $\lambda$ does not change the direction of the eigenvector. Multiplying the eigenvector $v$ by the covariance matrix $\Sigma_{X}$ also does not change the direction of the eigenvector.

If our data $X$ is of shape $N \times D$, it turns out that we have $D$ eigenvalues and $D$ eigenvectors. This means we can arrange the eigenvalues $\lambda$ in decreasing order so that

$$\lambda_3 > \lambda_2 > \lambda_5$$

In this case, $\lambda_3$ is the largest eigenvalue, followed by $\lambda_2$, and then $\lambda_5$. We can rearrange the eigenvectors the same way: $v_3$ will be the first column, $v_2$ will be the second column, and $v_5$ will be the third column.

We'll end up with two matrices $V$ and $\Lambda$: Lazy Programmer - Tutorial to PCA
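As a quick check of the eigenvector property described above (a sketch of my own, not from the original notebook), we can build the covariance matrix of a small random dataset, compute its eigenvalues and eigenvectors with NumPy, sort them in decreasing order, and verify that multiplying each eigenvector by the covariance matrix only rescales it:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))           # toy data: N=200 samples, D=3 features
cov = np.cov(X, rowvar=False)           # the D x D covariance matrix

eigenvalues, eigenvectors = np.linalg.eigh(cov)   # eigh: covariance matrices are symmetric

# sort eigenvalues (and matching eigenvectors) in decreasing order
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# Sigma_X @ v equals lambda * v for each eigenpair: same direction, only rescaled
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(cov @ v, lam * v)

print("eigenvalues in decreasing order:", eigenvalues)
```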
# what is the shape of our features?
features.shape

from sklearn.decomposition import PCA

pca = PCA(n_components=4)
Z = pca.fit_transform(features)

# what is the shape of Z?
Z.shape

# what will happen if we take the correlation matrix and covariance matrix of our new reduced features?
import numpy as np
covariances = pd.DataFrame(np.cov(Z.transpose()))
plt.rcParams["figure.figsize"] = (5,5)
sns.heatmap(covariances)

# train the model to reduce the dimensions down to 2
pca = PCA(n_components=2)
Z_two_dimensions = pca.fit_transform(features)
Z_two_dimensions

import matplotlib.pyplot as plt

plt.scatter(Z_two_dimensions[:,0], Z_two_dimensions[:, 1])

reduced_features_df = pd.DataFrame(Z_two_dimensions, columns=["x1", "x2"])
reduced_features_df["text"] = text
_____no_output_____
MIT
week8/Dimensionality Reduction and Clustering.ipynb
yixinouyang/dso-560-nlp-and-text-analytics
# Singular Value Decomposition

Given an input matrix $A$, we want to try to represent it instead as three smaller matrices $U$, $\Sigma$, and $V$. Instead of **$n$ original terms**, we want to represent each document as **$r$ concepts** (otherwise referred to as **latent dimensions** or **latent factors**): Mining of Massive Datasets - Dimensionality Reduction: Singular Value Decomposition by Leskovec, Rajaraman, and Ullman (Stanford University)

Here, **$A$ is your matrix of word vectors** - you could use any of the word vectorization techniques we have learned so far, including one-hot encoding, word count, and TF-IDF.

- $\Sigma$ will be a **diagonal matrix** with values that are positive and sorted in decreasing order. Its values indicate the **variance (information encoded on that new dimension)** - therefore, the higher the value, the stronger that dimension is in capturing data from $A$, the original features. For our purposes, we can think of the rank of this $\Sigma$ matrix as the number of desired dimensions. For instance, if we want to reduce $A$ from shape $1020 \times 300$ to $1020 \times 10$, we will want to reduce the rank of $\Sigma$ from 300 to 10.
- $U^T U = I$ and $V^T V = I$

## Measuring the Quality of the Reconstruction

A popular metric used for measuring the quality of the reconstruction is the [Frobenius Norm](https://en.wikipedia.org/wiki/Matrix_norm#Frobenius_norm). When you explain your methodology for reducing dimensions, managers / stakeholders will usually want some way to compare how well each candidate dimensionality reduction technique retains information while trimming dimensions:

$$\begin{equation}
||A_{old}-A_{new}||_{F} = \sqrt{\sum_{ij}{(A^{old}_{ij}- A^{new}_{ij}})^2}
\end{equation}$$

## Heuristic Step for How Many Dimensions to Keep

1. Sum the $\Sigma$ matrix's diagonal values: $$\begin{equation}\sum_{i}^{m}\sigma_{i}\end{equation}$$
2. Define your threshold of "information" (variance) $\alpha$ to keep: usually 80% to 90%.
3. Define your cutoff point $C$: $$\begin{equation}C = \alpha \sum_{i}^{m}\sigma_{i}\end{equation}$$
4. Beginning with your largest singular value, sum your singular values $\sigma_{i}$ until the running total exceeds $C$. Retain only those dimensions.

Mining of Massive Datasets - Dimensionality Reduction: Singular Value Decomposition by Leskovec, Rajaraman, and Ullman (Stanford University)
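The cutoff heuristic above fits in a few lines of code. The sketch below is my own and assumes only a vector `s` of singular values (such as the `s` returned by `scipy.linalg.svd` in the next cell); it returns how many dimensions are needed to retain a fraction `alpha` of the total singular value mass.

```python
import numpy as np

def dimensions_to_keep(s, alpha=0.9):
    """Smallest r such that the top-r singular values exceed alpha of their total sum."""
    s = np.sort(np.asarray(s))[::-1]      # largest singular values first
    cutoff = alpha * s.sum()              # C = alpha * sum(sigma_i)
    running_total = np.cumsum(s)
    return int(np.searchsorted(running_total, cutoff) + 1)

# toy example: one dominant dimension, two weaker ones -> keep 2 dimensions at alpha = 0.9
print(dimensions_to_keep([10.0, 3.0, 0.5], alpha=0.9))
```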
# create sample data
import numpy as np
import matplotlib.pyplot as plt
from scipy.linalg import svd

x = np.linspace(1,20, 20)            # create the first dimension
x = np.concatenate((x,x))
y = x + np.random.normal(0,1, 40)    # create the second dimension
z = x + np.random.normal(0,2, 40)    # create the third dimension
a = x + np.random.normal(0,4, 40)    # create the fourth dimension

plt.scatter(x,y)                     # plot just the first two dimensions
plt.show()

# create matrix
A = np.stack([x,y,z,a]).T

# perform SVD
D = 1
U, s, V = svd(A)

print(f"s is {s}\n")
print(f"U is {U}\n")
print(f"V is {V}")

# Frobenius norm
s[D:] = 0
S = np.zeros((A.shape[0], A.shape[1]))
S[:A.shape[1], :A.shape[1]] = np.diag(s)
A_reconstructed = U.dot(S.dot(V))
np.sum((A_reconstructed - A) ** 2) ** (1/2)   # Frobenius norm

# reconstruct matrix
U.dot(S)
_____no_output_____
MIT
week8/Dimensionality Reduction and Clustering.ipynb
yixinouyang/dso-560-nlp-and-text-analytics
# GloVe

Global vectors for word representation: GloVe: Global Vectors for Word Representation
!pip3 install gensim

# import glove embeddings into a word2vec format that is consumable by Gensim
from gensim.scripts.glove2word2vec import glove2word2vec

glove_input_file = 'glove.6B.100d.txt'
word2vec_output_file = 'glove.6B.100d.txt.word2vec'
glove2word2vec(glove_input_file, word2vec_output_file)

from gensim.models import KeyedVectors

# load the Stanford GloVe model
filename = 'glove.6B.100d.txt.word2vec'
model = KeyedVectors.load_word2vec_format(filename, binary=False)

# calculate: (king - man) + woman = ?
result = model.most_similar(positive=['woman', 'king'], negative=['man'], topn=1)
print(result)

words = ["woman", "king", "man", "queen", "puppy", "kitten", "cat", "quarterback", "football", "stadium",
         "touchdown", "dog", "government", "tax", "federal", "judicial", "elections", "avocado", "tomato",
         "pear", "championship", "playoffs"]
vectors = [model.wv[word] for word in words]

import pandas as pd

vector_df = pd.DataFrame(vectors)
vector_df["word"] = words
vector_df.head()
_____no_output_____
MIT
week8/Dimensionality Reduction and Clustering.ipynb
yixinouyang/dso-560-nlp-and-text-analytics
Using Spacy word2vec embeddings
import en_core_web_md
import spacy
from scipy.spatial.distance import cosine

nlp = en_core_web_md.load()

words = ["woman", "king", "man", "queen", "puppy", "kitten", "cat", "quarterback", "football", "stadium",
         "touchdown", "dog", "government", "tax", "federal", "judicial", "elections", "avocado", "tomato",
         "pear", "championship", "playoffs"]

tokens = nlp(" ".join(words))
word2vec_vectors = [token.vector for token in tokens]
np.array(word2vec_vectors).shape

%matplotlib inline
from sklearn.manifold import TSNE
from sklearn.decomposition import PCA
from sklearn.decomposition import TruncatedSVD
import matplotlib.pyplot as plt
import matplotlib

dimension_model = PCA(n_components=2)
reduced_vectors = dimension_model.fit_transform(word2vec_vectors)
reduced_vectors.shape

matplotlib.rc('figure', figsize=(10, 10))
for i, vector in enumerate(reduced_vectors):
    x = vector[0]
    y = vector[1]
    plt.plot(x,y, 'bo')
    plt.text(x * (1 + 0.01), y * (1 + 0.01) , words[i], fontsize=12)
_____no_output_____
MIT
week8/Dimensionality Reduction and Clustering.ipynb
yixinouyang/dso-560-nlp-and-text-analytics
Using Glove
%matplotlib inline
from sklearn.manifold import TSNE
from sklearn.decomposition import PCA
from sklearn.decomposition import TruncatedSVD
import matplotlib.pyplot as plt

dimension_model = PCA(n_components=2)
reduced_vectors = dimension_model.fit_transform(vectors)

for i, vector in enumerate(reduced_vectors):
    x = vector[0]
    y = vector[1]
    plt.plot(x,y, 'bo')
    plt.text(x * (1 + 0.01), y * (1 + 0.01) , words[i], fontsize=12)
_____no_output_____
MIT
week8/Dimensionality Reduction and Clustering.ipynb
yixinouyang/dso-560-nlp-and-text-analytics
Clustering Text
from sklearn.cluster import KMeans

kmeans = KMeans(n_clusters=4)
cluster_assignments = kmeans.fit_predict(reduced_vectors)

for cluster_assignment, word in zip(cluster_assignments, words):
    print(f"{word} assigned to cluster {cluster_assignment}")

color_map = {
    0: "r",
    1: "b",
    2: "g",
    3: "y"
}

plt.rcParams["figure.figsize"] = (10,10)

for i, vector in enumerate(reduced_vectors):
    x = vector[0]
    y = vector[1]
    plt.plot(x,y, 'bo', c=color_map[cluster_assignments[i]])
    plt.text(x * (1 + 0.01), y * (1 + 0.01) , words[i], fontsize=12)
_____no_output_____
MIT
week8/Dimensionality Reduction and Clustering.ipynb
yixinouyang/dso-560-nlp-and-text-analytics
Definitions of Terms
# Hypothesis Testing

A hypothesis test is a statistical method that uses sample data to evaluate a hypothesis about a population.

1. First, we state a hypothesis about a population. Usually the hypothesis concerns the value of a population parameter.
2. Before we select a sample, we use the hypothesis to predict the characteristics that the sample should have.
3. Next, we obtain a random sample from the population.
4. Finally, we compare the obtained sample data with the prediction that was made from the hypothesis.

## Hypothesis-testing process

1. State the hypothesis.

   Null hypothesis (H0): the independent variable has no effect on the dependent variable. For example, a restaurant waiter wearing a red shirt has no effect on tips. The null hypothesis (H0) states that in the general population there is no change, no difference, or no relationship. In the context of an experiment, H0 predicts that the independent variable (treatment) has no effect on the dependent variable (scores) for the population. m = 15.8

   Alternative hypothesis (H1): the treatment variable does have an effect on the dependent variable. For example, a restaurant waiter wearing a red shirt does affect tips. The alternative hypothesis (H1) states that there is a change, a difference, or a relationship for the general population. In the context of an experiment, H1 predicts that the independent variable (treatment) does have an effect on the dependent variable. m != 15.8; in this experiment, m > 15.8, which makes it a directional hypothesis test.

2. Set the criteria for a decision.

   a. Sample means that are likely to be obtained if H0 is true; that is, sample means that are close to the null hypothesis.

   b. Sample means that are very unlikely to be obtained if H0 is true; that is, sample means that are very different from the null hypothesis.

   The Alpha Level. Common alpha levels are α = .05 (5%), α = .01 (1%), and α = .001 (0.1%). The alpha level, or the level of significance, is a probability value that is used to define the concept of "very unlikely" in a hypothesis test. The critical region is composed of the extreme sample values that are very unlikely (as defined by the alpha level) to be obtained if the null hypothesis is true. The boundaries for the critical region are determined by the alpha level. If sample data fall in the critical region, the null hypothesis is rejected.

3. Collect data and compute sample statistics.

   z = (sample mean - hypothesized population mean) / standard error between M and m

4. Make a decision.

   1. The sample data are located in the critical region. By definition, a sample value in the critical region is very unlikely to occur if the null hypothesis is true.
   2. The sample data are not in the critical region. In this case, the sample mean is reasonably close to the population mean specified in the null hypothesis (in the center of the distribution).
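As a rough illustration of step 3 (a sketch of my own, not part of the original notebook), the z statistic can be computed directly from summary values, here using the numbers that appear in problem 6 below (M = 54, μ = 50, σ = 12, n = 16):

```python
import numpy as np

def z_statistic(sample_mean, population_mean, population_sd, n):
    """z = (M - mu) / standard error, where standard error = sigma / sqrt(n)."""
    standard_error = population_sd / np.sqrt(n)
    return (sample_mean - population_mean) / standard_error

z = z_statistic(sample_mean=54, population_mean=50, population_sd=12, n=16)
print(z)              # 1.33, which lies inside the +/-1.96 boundaries at alpha = .05
print(abs(z) > 1.96)  # False, so H0 is not rejected in this example
```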
_____no_output_____
MIT
08_OSK.ipynb
seokyeongheo/slow_statistics
Problems
1. Identify the four steps of a hypothesis test as presented in this chapter.

   1) State the hypothesis: state the null and alternative hypotheses.
   2) Set the alpha level and define the critical region.
   3) Collect data and compute the sample statistics.
   4) Make a decision.

2. Define the alpha level and the critical region for a hypothesis test.

   The alpha level specifies how extreme a statistic must be, relative to what is expected under the null hypothesis about the independent and dependent variables, before we treat the result as meaningful and reject the null hypothesis. The critical region is the set of those extreme sample values.

3. Define a Type I error and a Type II error and explain the consequences of each.

   A Type I error is concluding from the hypothesis test that there is an effect when in reality there is none; a Type II error is concluding that there is no effect when in reality there is one. Both reflect problems in how the hypothesis test is set up and interpreted.

4. If the alpha level is changed from α = .05 to α = .01,

   a. What happens to the boundaries for the critical region? The critical region shrinks (its boundaries move further out into the tails).

   b. What happens to the probability of a Type I error? The probability of a Type I error becomes lower.

6. Although there is a popular belief that herbal remedies such as Ginkgo biloba and Ginseng may improve learning and memory in healthy adults, these effects are usually not supported by well-controlled research (Persson, Bringlov, Nilsson, and Nyberg, 2004). In a typical study, a researcher obtains a sample of n = 16 participants and has each person take the herbal supplements every day for 90 days. At the end of the 90 days, each person takes a standardized memory test. For the general population, scores from the test form a normal distribution with a mean of μ = 50 and a standard deviation of σ = 12. The sample of research participants had an average of M = 54.

   a. Assuming a two-tailed test, state the null hypothesis in a sentence that includes the two variables being examined.

   b. Using the standard 4-step procedure, conduct a two-tailed hypothesis test with α = .05 to evaluate the effect of the supplements.

from scipy import stats

sample_number = 16        # sample size
population_mean = 50      # population mean
standard_deviation = 12   # population standard deviation
sample_mean = 54          # sample mean

# note: stats.ttest_1samp expects the raw observations rather than a summary mean,
# so this call does not work as written; the z-test helper below is used instead
result = stats.ttest_1samp(sample_mean, 50)
result

sample_mean - population_mean

## Import
import numpy as np
from scipy import stats

sample_number = 16        # sample size
population_mean = 50      # population mean
standard_deviation = 12   # population standard deviation
sample_mean = 54          # sample mean

## function to check whether the z statistic falls outside the critical region
alpha_level05 = 1.96
alpha_level01 = 2.58

def h_test(sample_mean, population_mean, standard_deviation, sample_number, alpha_level):
    result = (sample_mean - population_mean) / (standard_deviation / np.sqrt(sample_number))
    if result > alpha_level or result < -alpha_level:
        print("The null hypothesis is rejected at the a = .05 level, so the alternative hypothesis is supported.")
    else:
        print("The null hypothesis is not rejected, so the alternative hypothesis is not supported.")
    return result

## Compute Cohen's d
def Cohen(sample_mean, population_mean, standard_deviation):
    result = (sample_mean - population_mean) / (standard_deviation)
    if result <= 0.2:
        print("small effect")
    elif result <= 0.5:
        print("medium effect")
    elif result <= 0.8:
        print("Large effect")
    return result

## check whether the statistic falls outside the critical region, then compute Cohen's d
h_test(sample_mean, population_mean, standard_deviation, sample_number, alpha_level05)
Cohen(sample_mean, population_mean, standard_deviation)

Using these functions, we can run the hypothesis test (check the critical region) and compute Cohen's d.
# ## Import the packages
# import numpy as np
# from scipy import stats

# ## Turn it into a function
# # Sample size
# sample_number = 16
# population_mean = 50             # population mean
# standard_deviation = 12          # standard deviation
# sample_mean = [54,54,58,53,52]   # sample means

# def h_test(sample_mean, population_mean, standard_deviation, sample_number):
#     # For unbiased max likelihood estimate we have to divide the var by N-1, and therefore the parameter ddof = 1
#     var_sample_mean = sample_mean.var(ddof=1)
#     var_population_mean = population_mean.var(ddof=1)
#     # std deviation
#     std_deviation = np.sqrt((var_sample_mean + var_population_mean)/2)
#     ## Calculate the t-statistics
#     t = (a.mean() - b.mean())/(s*np.sqrt(2/N))

# ## Define 2 random distributions
# N = 10
# # Gaussian distributed data with mean = 2 and var = 1
# a = np.random.randn(N) + 2
# # Gaussian distributed data with mean = 0 and var = 1
# b = np.random.randn(N)

# ## Calculate the Standard Deviation
# # Calculate the variance to get the standard deviation
# # For unbiased max likelihood estimate we have to divide the var by N-1, and therefore the parameter ddof = 1
# var_a = a.var(ddof=1)
# var_b = b.var(ddof=1)

# # std deviation
# s = np.sqrt((var_a + var_b)/2)
# s

# ## Calculate the t-statistics
# t = (a.mean() - b.mean())/(s*np.sqrt(2/N))

# ## Compare with the critical t-value
# # Degrees of freedom
# df = 2*N - 2

# # p-value after comparison with the t
# p = 1 - stats.t.cdf(t,df=df)

# print("t = " + str(t))
# print("p = " + str(2*p))

# ### You can see that after comparing the t statistic with the critical t value (computed internally) we get a good p value of 0.0005 and thus we reject the null hypothesis, which shows that the means of the two distributions are different and the difference is statistically significant.

# ## Cross Checking with the internal scipy function
# t2, p2 = stats.ttest_ind(a,b)
# print("t = " + str(t2))
# print("p = " + str(p2))
_____no_output_____
MIT
08_OSK.ipynb
seokyeongheo/slow_statistics
Source: https://qiskit.org/documentation/tutorials/circuits/01_circuit_basics.html

# Circuit Basics
import numpy as np
from qiskit import QuantumCircuit
%matplotlib inline
_____no_output_____
MIT
qiskit/circuit_basics.ipynb
jonhealy1/learning-quantum
Create a Quantum Circuit acting on a quantum register of three qubits
circ = QuantumCircuit(3)
_____no_output_____
MIT
qiskit/circuit_basics.ipynb
jonhealy1/learning-quantum
After you create the circuit with its registers, you can add gates ("operations") to manipulate the registers. As you proceed through the tutorials you will find more gates and circuits; below is an example of a quantum circuit that makes a three-qubit GHZ state

$$|\psi\rangle = (|000\rangle + |111\rangle)/\sqrt{2}$$

To create such a state, we start with a three-qubit quantum register. By default, each qubit in the register is initialized to $|0\rangle$. To make the GHZ state, we apply the following gates:

- A Hadamard gate $H$ on qubit 0, which puts it into the superposition state $(|0\rangle + |1\rangle)/\sqrt{2}$.
- A controlled-Not operation ($CX$) between qubit 0 and qubit 1.
- A controlled-Not operation between qubit 0 and qubit 2.

On an ideal quantum computer, the state produced by running this circuit would be the GHZ state above.
# Add a H gate on qubit 0, putting this qubit in superposition.
circ.h(0)
# Add a CX (CNOT) gate on control qubit 0 and target qubit 1, putting
# the qubits in a Bell state.
circ.cx(0, 1)
# Add a CX (CNOT) gate on control qubit 0 and target qubit 2, putting
# the qubits in a GHZ state.
circ.cx(0, 2)
_____no_output_____
MIT
qiskit/circuit_basics.ipynb
jonhealy1/learning-quantum
Visualize circuit
circ.draw('mpl')
_____no_output_____
MIT
qiskit/circuit_basics.ipynb
jonhealy1/learning-quantum
## Simulating Circuits

To simulate a circuit we use the quant-info module in Qiskit. This simulator returns the quantum state, which is a complex vector of dimension $2^n$, where $n$ is the number of qubits (so be careful using this as it will quickly get too large to run on your machine).

There are two stages to the simulator. The first is to set the input state and the second is to evolve the state by the quantum circuit.
from qiskit.quantum_info import Statevector

# Set the initial state of the simulator to the ground state using from_int
state = Statevector.from_int(0, 2**3)

# Evolve the state by the quantum circuit
state = state.evolve(circ)

# draw using latex
state.draw('latex')
_____no_output_____
MIT
qiskit/circuit_basics.ipynb
jonhealy1/learning-quantum
## Visualization

Below, we use the visualization function to plot the qsphere and a hinton representing the real and imaginary components of the state density matrix $\rho$.
state.draw('qsphere')
state.draw('hinton')
_____no_output_____
MIT
qiskit/circuit_basics.ipynb
jonhealy1/learning-quantum
## Unitary representation of a circuit

Qiskit's quant_info module also has an operator method which can be used to make a unitary operator for the circuit. This calculates the $2^n \times 2^n$ matrix representing the quantum circuit.
from qiskit.quantum_info import Operator

U = Operator(circ)

# Show the results
U.data
_____no_output_____
MIT
qiskit/circuit_basics.ipynb
jonhealy1/learning-quantum
## OpenQASM backend

The simulators above are useful because they provide information about the state output by the ideal circuit and the matrix representation of the circuit. However, a real experiment terminates by measuring each qubit (usually in the computational $|0\rangle, |1\rangle$ basis). Without measurement, we cannot gain information about the state. Measurements cause the quantum system to collapse into classical bits.

For example, suppose we make independent measurements on each qubit of the three-qubit GHZ state

$$|\psi\rangle = (|000\rangle + |111\rangle)/\sqrt{2},$$

and let $xyz$ denote the bitstring that results. Recall that, under the qubit labeling used by Qiskit, $x$ would correspond to the outcome on qubit 2, $y$ to the outcome on qubit 1, and $z$ to the outcome on qubit 0.

Note: This representation of the bitstring puts the most significant bit (MSB) on the left, and the least significant bit (LSB) on the right. This is the standard ordering of binary bitstrings. We order the qubits in the same way (qubit representing the MSB has index 0), which is why Qiskit uses a non-standard tensor product order.

Recall the probability of obtaining outcome $xyz$ is given by

$$\Pr(xyz) = |\langle xyz|\psi\rangle|^2,$$

and as such for the GHZ state the probabilities of obtaining 000 or 111 are both 1/2.

To simulate a circuit that includes measurement, we need to add measurements to the original circuit above, and use a different Aer backend.
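To connect the probability formula above to code, here is a small sketch of my own (it assumes the `circ` circuit and the `Statevector` workflow from the earlier cells) that computes Pr(xyz) from the ideal statevector rather than from sampled shots:

```python
from qiskit.quantum_info import Statevector

# reuse the GHZ circuit `circ` built earlier in this notebook
state = Statevector.from_int(0, 2**3).evolve(circ)

# Pr(xyz) = |<xyz|psi>|^2 for every computational basis bitstring xyz
print(state.probabilities_dict())   # expected: {'000': 0.5, '111': 0.5} up to rounding
```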
# Create a Quantum Circuit
meas = QuantumCircuit(3, 3)
meas.barrier(range(3))
# map the quantum measurement to the classical bits
meas.measure(range(3), range(3))

# The Qiskit circuit object supports composition.
# Here the meas has to be first and front=True (putting it before)
# as compose must put a smaller circuit into a larger one.
qc = meas.compose(circ, range(3), front=True)

# drawing the circuit
qc.draw('mpl')
_____no_output_____
MIT
qiskit/circuit_basics.ipynb
jonhealy1/learning-quantum
This circuit adds a classical register, and three measurements that are used to map the outcome of qubits to the classical bits.To simulate this circuit, we use the qasm_simulator in Qiskit Aer. Each run of this circuit will yield either the bitstring 000 or 111. To build up statistics about the distribution of the bitstrings (to, e.g., estimate Pr(000)), we need to repeat the circuit many times. The number of times the circuit is repeated can be specified in the execute function, via the shots keyword.
# Adding the transpiler to reduce the circuit to QASM instructions
# supported by the backend
from qiskit import transpile

# Use Aer's qasm_simulator
from qiskit.providers.aer import QasmSimulator

backend = QasmSimulator()

# First we have to transpile the quantum circuit
# to the low-level QASM instructions used by the
# backend
qc_compiled = transpile(qc, backend)

# Execute the circuit on the qasm simulator.
# We've set the number of repeats of the circuit
# to be 1024, which is the default.
job_sim = backend.run(qc_compiled, shots=1024)

# Grab the results from the job.
result_sim = job_sim.result()
_____no_output_____
MIT
qiskit/circuit_basics.ipynb
jonhealy1/learning-quantum
Once you have a result object, you can access the counts via the function get_counts(circuit). This gives you the aggregated binary outcomes of the circuit you submitted.
counts = result_sim.get_counts(qc_compiled)
print(counts)
{'000': 503, '111': 521}
MIT
qiskit/circuit_basics.ipynb
jonhealy1/learning-quantum
Approximately 50 percent of the time, the output bitstring is 000. Qiskit also provides a function plot_histogram, which allows you to view the outcomes.
from qiskit.visualization import plot_histogram

plot_histogram(counts)
_____no_output_____
MIT
qiskit/circuit_basics.ipynb
jonhealy1/learning-quantum
The estimated outcome probabilities Pr(000) and Pr(111) are computed by taking the aggregate counts and dividing by the number of shots (times the circuit was repeated). Try changing the shots keyword in the execute function and see how the estimated probabilities change.
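For instance, a small sketch of my own using the `counts` dictionary from the previous cell: the estimated probabilities are just the counts normalized by the total number of shots.

```python
shots = sum(counts.values())   # 1024 in the run above
probabilities = {bitstring: count / shots for bitstring, count in counts.items()}
print(probabilities)           # e.g. {'000': ~0.49, '111': ~0.51}
```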
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
_____no_output_____
MIT
qiskit/circuit_basics.ipynb
jonhealy1/learning-quantum
Linear algebra
import numpy as np
np.__version__
_____no_output_____
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Matrix and vector products

Q1. Predict the results of the following code.
import numpy as np

x = [1,2]
y = [[4, 1], [2, 2]]
print np.dot(x, y)
print np.dot(y, x)
print np.matmul(x, y)
print np.inner(x, y)
print np.inner(y, x)
[8 5] [6 6] [8 5] [6 6] [6 6]
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Q2. Predict the results of the following code.
x = [[1, 0], [0, 1]]
y = [[4, 1], [2, 2], [1, 1]]
print np.dot(y, x)
print np.matmul(y, x)
[[4 1] [2 2] [1 1]] [[4 1] [2 2] [1 1]]
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Q3. Predict the results of the following code.
x = np.array([[1, 4], [5, 6]])
y = np.array([[4, 1], [2, 2]])
print np.vdot(x, y)
print np.vdot(y, x)
print np.dot(x.flatten(), y.flatten())
print np.inner(x.flatten(), y.flatten())
print (x*y).sum()
30 30 30 30 30
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Q4. Predict the results of the following code.
x = np.array(['a', 'b'], dtype=object)
y = np.array([1, 2])
print np.inner(x, y)
print np.inner(y, x)
print np.outer(x, y)
print np.outer(y, x)
abb abb [['a' 'aa'] ['b' 'bb']] [['a' 'b'] ['aa' 'bb']]
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Decompositions

Q5. Get the lower-triangular `L` in the Cholesky decomposition of x and verify it.
x = np.array([[4, 12, -16], [12, 37, -43], [-16, -43, 98]], dtype=np.int32)
L = np.linalg.cholesky(x)
print L
assert np.array_equal(np.dot(L, L.T.conjugate()), x)
[[ 2. 0. 0.] [ 6. 1. 0.] [-8. 5. 3.]]
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Q6. Compute the qr factorization of x and verify it.
x = np.array([[12, -51, 4], [6, 167, -68], [-4, 24, -41]], dtype=np.float32)
q, r = np.linalg.qr(x)
print "q=\n", q, "\nr=\n", r
assert np.allclose(np.dot(q, r), x)
q= [[-0.85714287 0.39428571 0.33142856] [-0.42857143 -0.90285712 -0.03428571] [ 0.2857143 -0.17142858 0.94285715]] r= [[ -14. -21. 14.] [ 0. -175. 70.] [ 0. 0. -35.]]
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Q7. Factor x by Singular Value Decomposition and verify it.
x = np.array([[1, 0, 0, 0, 2], [0, 0, 3, 0, 0], [0, 0, 0, 0, 0], [0, 2, 0, 0, 0]], dtype=np.float32)
U, s, V = np.linalg.svd(x, full_matrices=False)
print "U=\n", U, "\ns=\n", s, "\nV=\n", V
assert np.allclose(np.dot(U, np.dot(np.diag(s), V)), x)
U= [[ 0. 1. 0. 0.] [ 1. 0. 0. 0.] [ 0. 0. 0. -1.] [ 0. 0. 1. 0.]] s= [ 3. 2.23606801 2. 0. ] V= [[ 1. 0. 0.] [ 0. 1. 0.] [ 0. 0. 1.]]
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Matrix eigenvalues

Q8. Compute the eigenvalues and right eigenvectors of x. (Name them eigenvals and eigenvecs, respectively)
x = np.diag((1, 2, 3))
eigenvals = np.linalg.eig(x)[0]
eigenvals_ = np.linalg.eigvals(x)
assert np.array_equal(eigenvals, eigenvals_)
print "eigenvalues are\n", eigenvals
eigenvecs = np.linalg.eig(x)[1]
print "eigenvectors are\n", eigenvecs
eigenvalues are [ 1. 2. 3.] eigenvectors are [[ 1. 0. 0.] [ 0. 1. 0.] [ 0. 0. 1.]]
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Q9. Predict the results of the following code.
print np.array_equal(np.dot(x, eigenvecs), eigenvals * eigenvecs)
True
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Norms and other numbers

Q10. Calculate the Frobenius norm and the condition number of x.
x = np.arange(1, 10).reshape((3, 3))
print np.linalg.norm(x, 'fro')
print np.linalg.cond(x, 'fro')
16.8819430161 4.56177073661e+17
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Q11. Calculate the determinant of x.
x = np.arange(1, 5).reshape((2, 2))
out1 = np.linalg.det(x)
out2 = x[0, 0] * x[1, 1] - x[0, 1] * x[1, 0]
assert np.allclose(out1, out2)
print out1
-2.0
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Q12. Calculate the rank of x.
x = np.eye(4)
out1 = np.linalg.matrix_rank(x)
out2 = np.linalg.svd(x)[1].size
assert out1 == out2
print out1
4
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Q13. Compute the sign and natural logarithm of the determinant of x.
x = np.arange(1, 5).reshape((2, 2))
sign, logdet = np.linalg.slogdet(x)
det = np.linalg.det(x)
assert sign == np.sign(det)
assert logdet == np.log(np.abs(det))
print sign, logdet
-1.0 0.69314718056
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Q14. Return the sum along the diagonal of x.
x = np.eye(4)
out1 = np.trace(x)
out2 = x.diagonal().sum()
assert out1 == out2
print out1
4.0
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Solving equations and inverting matrices

Q15. Compute the inverse of x.
x = np.array([[1., 2.], [3., 4.]])
out1 = np.linalg.inv(x)
assert np.allclose(np.dot(x, out1), np.eye(2))
print out1
[[-2. 1. ] [ 1.5 -0.5]]
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Hands-on Natural Language Processing: Named Entity Recognition

## Enter ModelArts

Click the following link: https://www.huaweicloud.com/product/modelarts.html to open the ModelArts home page. Click the "Use Now" button, enter your username and password to log in, and you will reach the ModelArts console.

## Create a ModelArts notebook

Next, we create a notebook development environment in ModelArts. ModelArts notebook provides a web-based Python development environment in which you can conveniently write and run code and view the results.

Step 1: On the ModelArts service main page, click "DevEnviron" and then "Create".

![create_nb_create_button](./img/create_nb_create_button.png)

Step 2: Fill in the parameters required by the notebook:

| Parameter | Description |
| --- | --- |
| Billing mode | Pay-per-use |
| Name | Name of the notebook instance |
| Work environment | Python3 |
| Resource pool | Select "Public resource pool" |
| Type | Select "GPU" |
| Flavor | Select "[Limited-time free] trial GPU flavor" |
| Storage | Select EVS with a 5 GB disk |

Step 3: After configuring the notebook parameters, click Next to preview the notebook information. After confirming that everything is correct, click "Create Now".

Step 4: Once creation is complete, return to the development environment main page. Wait until the notebook has finished being created, then open it and continue with the next step.

![modelarts_notebook_index](./img/modelarts_notebook_index.png)

## Create a development environment in ModelArts

Next, we create the actual development environment used in the following experiment steps.

Step 1: Click the "Open" button shown below to enter the notebook you just created.

![inter_dev_env](img/enter_dev_env.png)

Step 2: Create a Python3 notebook. Click "New" in the upper-right corner and create a TensorFlow 1.13.1 development environment.

Step 3: Click the file name "Untitled" at the top left and enter a name related to this experiment.

![notebook_untitled_filename](./img/notebook_untitled_filename.png)

![notebook_name_the_ipynb](./img/notebook_name_the_ipynb.png)

## Write and run code in the notebook

In the notebook, we enter a simple print statement and then click the Run button at the top to see the result of executing it:

![run_helloworld](./img/run_helloworld.png)

The development environment is ready; now we can get on with writing code!

## Prepare the source code and data

Prepare the source code and data needed for this case. The resources are already stored in OBS, and we download them locally through the ModelArts SDK.
from modelarts.session import Session
session = Session()

if session.region_name == 'cn-north-1':
    bucket_path = 'modelarts-labs/notebook/DL_nlp_ner/ner.tar.gz'
elif session.region_name == 'cn-north-4':
    bucket_path = 'modelarts-labs-bj4/notebook/DL_nlp_ner/ner.tar.gz'
else:
    print("Please switch the region to cn-north-1 or cn-north-4")

session.download_data(bucket_path=bucket_path, path='./ner.tar.gz')

!ls -la
Successfully download file modelarts-labs/notebook/DL_nlp_ner/ner.tar.gz from OBS to local ./ner.tar.gz total 375220 drwxrwsrwx 4 ma-user ma-group 4096 Sep 6 13:34 . drwsrwsr-x 22 ma-user ma-group 4096 Sep 6 13:03 .. drwxr-s--- 2 ma-user ma-group 4096 Sep 6 13:33 .ipynb_checkpoints -rw-r----- 1 ma-user ma-group 45114 Sep 6 13:33 ner.ipynb -rw-r----- 1 ma-user ma-group 384157325 Sep 6 13:35 ner.tar.gz drwx--S--- 2 ma-user ma-group 4096 Sep 6 13:03 .Trash-1000
Apache-2.0
notebook/DL_nlp_bert_ner/nlp_ner.ipynb
YMJS-Irfan/ModelArts-Lab
Extract the archive downloaded from OBS, then delete the archive after extraction.
# extract the archive
!tar xf ./ner.tar.gz

# delete the archive
!rm ./ner.tar.gz

!ls -la
total 68 drwxrwsrwx 5 ma-user ma-group 4096 Sep 6 13:35 . drwsrwsr-x 22 ma-user ma-group 4096 Sep 6 13:03 .. drwxr-s--- 2 ma-user ma-group 4096 Sep 6 13:33 .ipynb_checkpoints drwxr-s--- 8 ma-user ma-group 4096 Sep 6 00:24 ner -rw-r----- 1 ma-user ma-group 45114 Sep 6 13:33 ner.ipynb drwx--S--- 2 ma-user ma-group 4096 Sep 6 13:03 .Trash-1000
Apache-2.0
notebook/DL_nlp_bert_ner/nlp_ner.ipynb
YMJS-Irfan/ModelArts-Lab
Import Python libraries
import os
import json
import numpy as np
import tensorflow as tf
import codecs
import pickle
import collections
from ner.bert import modeling, optimization, tokenization
_____no_output_____
Apache-2.0
notebook/DL_nlp_bert_ner/nlp_ner.ipynb
YMJS-Irfan/ModelArts-Lab
Define paths and parameters
data_dir = "./ner/data"
output_dir = "./ner/output"
vocab_file = "./ner/chinese_L-12_H-768_A-12/vocab.txt"
data_config_path = "./ner/chinese_L-12_H-768_A-12/bert_config.json"
init_checkpoint = "./ner/chinese_L-12_H-768_A-12/bert_model.ckpt"
max_seq_length = 128
batch_size = 64
num_train_epochs = 5.0
_____no_output_____
Apache-2.0
notebook/DL_nlp_bert_ner/nlp_ner.ipynb
YMJS-Irfan/ModelArts-Lab
Define the processor class to load the data and print the labels
tf.logging.set_verbosity(tf.logging.INFO)

from ner.src.models import InputFeatures, InputExample, DataProcessor, NerProcessor

processors = {"ner": NerProcessor }

processor = processors["ner"](output_dir)

label_list = processor.get_labels()
print("labels:", label_list)
labels: ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'X', '[CLS]', '[SEP]']
Apache-2.0
notebook/DL_nlp_bert_ner/nlp_ner.ipynb
YMJS-Irfan/ModelArts-Lab
The labels above mean:

- O: not part of a named entity
- B-PER: first character of a person name
- I-PER: non-first character of a person name
- B-ORG: first character of an organization name
- I-ORG: non-first character of an organization name
- B-LOC: first character of a location name
- I-LOC: non-first character of a location name
- X: unknown
- [CLS]: start of a sentence
- [SEP]: end of a sentence
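The snippet below is a toy sketch of my own (not from the original notebook) showing how a character-level Chinese sentence lines up with these BIO labels:

```python
# character-level BIO tagging illustration only; not part of the training pipeline
chars  = ["马", "云", "生", "于", "杭", "州"]                 # "Ma Yun was born in Hangzhou"
labels = ["B-PER", "I-PER", "O", "O", "B-LOC", "I-LOC"]

for ch, tag in zip(chars, labels):
    print(f"{ch}\t{tag}")
```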
data_config = json.load(codecs.open(data_config_path))

train_examples = processor.get_train_examples(data_dir)
num_train_steps = int(len(train_examples) / batch_size * num_train_epochs)
num_warmup_steps = int(num_train_steps * 0.1)

data_config['num_train_steps'] = num_train_steps
data_config['num_warmup_steps'] = num_warmup_steps
data_config['num_train_size'] = len(train_examples)

print("Configuration:")
for key,value in data_config.items():
    print('{key}:{value}'.format(key = key, value = value))

bert_config = modeling.BertConfig.from_json_file(data_config_path)
tokenizer = tokenization.FullTokenizer(vocab_file=vocab_file, do_lower_case=True)

# tf.estimator run configuration
run_config = tf.estimator.RunConfig(
    model_dir=output_dir,
    save_summary_steps=1000,
    save_checkpoints_steps=1000,
    session_config=tf.ConfigProto(
        log_device_placement=False,
        inter_op_parallelism_threads=0,
        intra_op_parallelism_threads=0,
        allow_soft_placement=True
    )
)
ๆ˜พ็คบ้…็ฝฎไฟกๆฏ: attention_probs_dropout_prob:0.1 directionality:bidi hidden_act:gelu hidden_dropout_prob:0.1 hidden_size:768 initializer_range:0.02 intermediate_size:3072 max_position_embeddings:512 num_attention_heads:12 num_hidden_layers:12 pooler_fc_size:768 pooler_num_attention_heads:12 pooler_num_fc_layers:3 pooler_size_per_head:128 pooler_type:first_token_transform type_vocab_size:2 vocab_size:21128 num_train_steps:1630 num_warmup_steps:163 num_train_size:20864
Apache-2.0
notebook/DL_nlp_bert_ner/nlp_ner.ipynb
YMJS-Irfan/ModelArts-Lab
่ฏปๅ–ๆ•ฐๆฎ๏ผŒ่Žทๅ–ๅฅๅ‘้‡
def convert_single_example(ex_index, example, label_list, max_seq_length, tokenizer, output_dir, mode): label_map = {} for (i, label) in enumerate(label_list, 1): label_map[label] = i if not os.path.exists(os.path.join(output_dir, 'label2id.pkl')): with codecs.open(os.path.join(output_dir, 'label2id.pkl'), 'wb') as w: pickle.dump(label_map, w) textlist = example.text.split(' ') labellist = example.label.split(' ') tokens = [] labels = [] for i, word in enumerate(textlist): token = tokenizer.tokenize(word) tokens.extend(token) label_1 = labellist[i] for m in range(len(token)): if m == 0: labels.append(label_1) else: labels.append("X") if len(tokens) >= max_seq_length - 1: tokens = tokens[0:(max_seq_length - 2)] labels = labels[0:(max_seq_length - 2)] ntokens = [] segment_ids = [] label_ids = [] ntokens.append("[CLS]") # ๅฅๅญๅผ€ๅง‹่ฎพ็ฝฎ [CLS] ๆ ‡ๅฟ— segment_ids.append(0) label_ids.append(label_map["[CLS]"]) for i, token in enumerate(tokens): ntokens.append(token) segment_ids.append(0) label_ids.append(label_map[labels[i]]) ntokens.append("[SEP]") # ๅฅๅฐพๆทปๅŠ  [SEP] ๆ ‡ๅฟ— segment_ids.append(0) label_ids.append(label_map["[SEP]"]) input_ids = tokenizer.convert_tokens_to_ids(ntokens) input_mask = [1] * len(input_ids) while len(input_ids) < max_seq_length: input_ids.append(0) input_mask.append(0) segment_ids.append(0) label_ids.append(0) ntokens.append("**NULL**") assert len(input_ids) == max_seq_length assert len(input_mask) == max_seq_length assert len(segment_ids) == max_seq_length assert len(label_ids) == max_seq_length feature = InputFeatures( input_ids=input_ids, input_mask=input_mask, segment_ids=segment_ids, label_ids=label_ids, ) return feature def filed_based_convert_examples_to_features( examples, label_list, max_seq_length, tokenizer, output_file, mode=None): writer = tf.python_io.TFRecordWriter(output_file) for (ex_index, example) in enumerate(examples): if ex_index % 5000 == 0: tf.logging.info("Writing example %d of %d" % (ex_index, len(examples))) feature = convert_single_example(ex_index, example, label_list, max_seq_length, tokenizer, output_dir, mode) def create_int_feature(values): f = tf.train.Feature(int64_list=tf.train.Int64List(value=list(values))) return f features = collections.OrderedDict() features["input_ids"] = create_int_feature(feature.input_ids) features["input_mask"] = create_int_feature(feature.input_mask) features["segment_ids"] = create_int_feature(feature.segment_ids) features["label_ids"] = create_int_feature(feature.label_ids) tf_example = tf.train.Example(features=tf.train.Features(feature=features)) writer.write(tf_example.SerializeToString()) train_file = os.path.join(output_dir, "train.tf_record") #ๅฐ†่ฎญ็ปƒ้›†ไธญๅญ—็ฌฆ่ฝฌๅŒ–ไธบfeaturesไฝœไธบ่ฎญ็ปƒ็š„่พ“ๅ…ฅ filed_based_convert_examples_to_features( train_examples, label_list, max_seq_length, tokenizer, output_file=train_file)
INFO:tensorflow:Writing example 0 of 20864 INFO:tensorflow:Writing example 5000 of 20864 INFO:tensorflow:Writing example 10000 of 20864 INFO:tensorflow:Writing example 15000 of 20864 INFO:tensorflow:Writing example 20000 of 20864
Apache-2.0
notebook/DL_nlp_bert_ner/nlp_ner.ipynb
YMJS-Irfan/ModelArts-Lab
Add a BiLSTM+CRF layer as the downstream model
learning_rate = 5e-5 dropout_rate = 1.0 lstm_size=1 cell='lstm' num_layers=1 from ner.src.models import BLSTM_CRF from tensorflow.contrib.layers.python.layers import initializers def create_model(bert_config, is_training, input_ids, input_mask, segment_ids, labels, num_labels, use_one_hot_embeddings, dropout_rate=dropout_rate, lstm_size=1, cell='lstm', num_layers=1): model = modeling.BertModel( config=bert_config, is_training=is_training, input_ids=input_ids, input_mask=input_mask, token_type_ids=segment_ids, use_one_hot_embeddings=use_one_hot_embeddings ) embedding = model.get_sequence_output() max_seq_length = embedding.shape[1].value used = tf.sign(tf.abs(input_ids)) lengths = tf.reduce_sum(used, reduction_indices=1) blstm_crf = BLSTM_CRF(embedded_chars=embedding, hidden_unit=1, cell_type='lstm', num_layers=1, dropout_rate=dropout_rate, initializers=initializers, num_labels=num_labels, seq_length=max_seq_length, labels=labels, lengths=lengths, is_training=is_training) rst = blstm_crf.add_blstm_crf_layer(crf_only=True) return rst def model_fn_builder(bert_config, num_labels, init_checkpoint, learning_rate, num_train_steps, num_warmup_steps,use_one_hot_embeddings=False): #ๆž„ๅปบๆจกๅž‹ def model_fn(features, labels, mode, params): tf.logging.info("*** Features ***") for name in sorted(features.keys()): tf.logging.info(" name = %s, shape = %s" % (name, features[name].shape)) input_ids = features["input_ids"] input_mask = features["input_mask"] segment_ids = features["segment_ids"] label_ids = features["label_ids"] print('shape of input_ids', input_ids.shape) is_training = (mode == tf.estimator.ModeKeys.TRAIN) total_loss, logits, trans, pred_ids = create_model( bert_config, is_training, input_ids, input_mask, segment_ids, label_ids, num_labels, False, dropout_rate, lstm_size, cell, num_layers) tvars = tf.trainable_variables() if init_checkpoint: (assignment_map, initialized_variable_names) = \ modeling.get_assignment_map_from_checkpoint(tvars, init_checkpoint) tf.train.init_from_checkpoint(init_checkpoint, assignment_map) output_spec = None if mode == tf.estimator.ModeKeys.TRAIN: train_op = optimization.create_optimizer( total_loss, learning_rate, num_train_steps, num_warmup_steps, False) hook_dict = {} hook_dict['loss'] = total_loss hook_dict['global_steps'] = tf.train.get_or_create_global_step() logging_hook = tf.train.LoggingTensorHook( hook_dict, every_n_iter=100) output_spec = tf.estimator.EstimatorSpec( mode=mode, loss=total_loss, train_op=train_op, training_hooks=[logging_hook]) elif mode == tf.estimator.ModeKeys.EVAL: def metric_fn(label_ids, pred_ids): return { "eval_loss": tf.metrics.mean_squared_error(labels=label_ids, predictions=pred_ids), } eval_metrics = metric_fn(label_ids, pred_ids) output_spec = tf.estimator.EstimatorSpec( mode=mode, loss=total_loss, eval_metric_ops=eval_metrics ) else: output_spec = tf.estimator.EstimatorSpec( mode=mode, predictions=pred_ids ) return output_spec return model_fn
_____no_output_____
Apache-2.0
notebook/DL_nlp_bert_ner/nlp_ner.ipynb
YMJS-Irfan/ModelArts-Lab
Create the model and start training
model_fn = model_fn_builder( bert_config=bert_config, num_labels=len(label_list) + 1, init_checkpoint=init_checkpoint, learning_rate=learning_rate, num_train_steps=num_train_steps, num_warmup_steps=num_warmup_steps, use_one_hot_embeddings=False) def file_based_input_fn_builder(input_file, seq_length, is_training, drop_remainder): name_to_features = { "input_ids": tf.FixedLenFeature([seq_length], tf.int64), "input_mask": tf.FixedLenFeature([seq_length], tf.int64), "segment_ids": tf.FixedLenFeature([seq_length], tf.int64), "label_ids": tf.FixedLenFeature([seq_length], tf.int64), } def _decode_record(record, name_to_features): example = tf.parse_single_example(record, name_to_features) for name in list(example.keys()): t = example[name] if t.dtype == tf.int64: t = tf.to_int32(t) example[name] = t return example def input_fn(params): params["batch_size"] = 32 batch_size = params["batch_size"] d = tf.data.TFRecordDataset(input_file) if is_training: d = d.repeat() d = d.shuffle(buffer_size=300) d = d.apply(tf.contrib.data.map_and_batch( lambda record: _decode_record(record, name_to_features), batch_size=batch_size, drop_remainder=drop_remainder )) return d return input_fn #่ฎญ็ปƒ่พ“ๅ…ฅ train_input_fn = file_based_input_fn_builder( input_file=train_file, seq_length=max_seq_length, is_training=True, drop_remainder=True) num_train_size = len(train_examples) tf.logging.info("***** Running training *****") tf.logging.info(" Num examples = %d", num_train_size) tf.logging.info(" Batch size = %d", batch_size) tf.logging.info(" Num steps = %d", num_train_steps) #ๆจกๅž‹้ข„ๆต‹estimator estimator = tf.estimator.Estimator( model_fn=model_fn, config=run_config, params={ 'batch_size':batch_size }) estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
INFO:tensorflow:***** Running training ***** INFO:tensorflow: Num examples = 20864 INFO:tensorflow: Batch size = 64 INFO:tensorflow: Num steps = 1630 INFO:tensorflow:Using config: {'_model_dir': './ner/output', '_tf_random_seed': None, '_save_summary_steps': 1000, '_save_checkpoints_steps': 1000, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true , '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7fca68ba6748>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1} INFO:tensorflow:Calling model_fn. INFO:tensorflow:*** Features *** INFO:tensorflow: name = input_ids, shape = (32, 128) INFO:tensorflow: name = input_mask, shape = (32, 128) INFO:tensorflow: name = label_ids, shape = (32, 128) INFO:tensorflow: name = segment_ids, shape = (32, 128)
Apache-2.0
notebook/DL_nlp_bert_ner/nlp_ner.ipynb
YMJS-Irfan/ModelArts-Lab
ๅœจ้ชŒ่ฏ้›†ไธŠ้ชŒ่ฏๆจกๅž‹
eval_examples = processor.get_dev_examples(data_dir) eval_file = os.path.join(output_dir, "eval.tf_record") filed_based_convert_examples_to_features( eval_examples, label_list, max_seq_length, tokenizer, eval_file) data_config['eval.tf_record_path'] = eval_file data_config['num_eval_size'] = len(eval_examples) num_eval_size = data_config.get('num_eval_size', 0) tf.logging.info("***** Running evaluation *****") tf.logging.info(" Num examples = %d", num_eval_size) tf.logging.info(" Batch size = %d", batch_size) eval_steps = None eval_drop_remainder = False eval_input_fn = file_based_input_fn_builder( input_file=eval_file, seq_length=max_seq_length, is_training=False, drop_remainder=eval_drop_remainder) result = estimator.evaluate(input_fn=eval_input_fn, steps=eval_steps) output_eval_file = os.path.join(output_dir, "eval_results.txt") with codecs.open(output_eval_file, "w", encoding='utf-8') as writer: tf.logging.info("***** Eval results *****") for key in sorted(result.keys()): tf.logging.info(" %s = %s", key, str(result[key])) writer.write("%s = %s\n" % (key, str(result[key]))) if not os.path.exists(data_config_path): with codecs.open(data_config_path, 'a', encoding='utf-8') as fd: json.dump(data_config, fd)
INFO:tensorflow:Writing example 0 of 4631 INFO:tensorflow:***** Running evaluation ***** INFO:tensorflow: Num examples = 4631 INFO:tensorflow: Batch size = 64 INFO:tensorflow:Calling model_fn. INFO:tensorflow:*** Features *** INFO:tensorflow: name = input_ids, shape = (?, 128) INFO:tensorflow: name = input_mask, shape = (?, 128) INFO:tensorflow: name = label_ids, shape = (?, 128) INFO:tensorflow: name = segment_ids, shape = (?, 128)
Apache-2.0
notebook/DL_nlp_bert_ner/nlp_ner.ipynb
YMJS-Irfan/ModelArts-Lab
Evaluate the model on the test set
token_path = os.path.join(output_dir, "token_test.txt") if os.path.exists(token_path): os.remove(token_path) with codecs.open(os.path.join(output_dir, 'label2id.pkl'), 'rb') as rf: label2id = pickle.load(rf) id2label = {value: key for key, value in label2id.items()} predict_examples = processor.get_test_examples(data_dir) predict_file = os.path.join(output_dir, "predict.tf_record") filed_based_convert_examples_to_features(predict_examples, label_list, max_seq_length, tokenizer, predict_file, mode="test") tf.logging.info("***** Running prediction*****") tf.logging.info(" Num examples = %d", len(predict_examples)) tf.logging.info(" Batch size = %d", batch_size) predict_drop_remainder = False predict_input_fn = file_based_input_fn_builder( input_file=predict_file, seq_length=max_seq_length, is_training=False, drop_remainder=predict_drop_remainder) predicted_result = estimator.evaluate(input_fn=predict_input_fn) output_eval_file = os.path.join(output_dir, "predicted_results.txt") with codecs.open(output_eval_file, "w", encoding='utf-8') as writer: tf.logging.info("***** Predict results *****") for key in sorted(predicted_result.keys()): tf.logging.info(" %s = %s", key, str(predicted_result[key])) writer.write("%s = %s\n" % (key, str(predicted_result[key]))) result = estimator.predict(input_fn=predict_input_fn) output_predict_file = os.path.join(output_dir, "label_test.txt") def result_to_pair(writer): for predict_line, prediction in zip(predict_examples, result): idx = 0 line = '' line_token = str(predict_line.text).split(' ') label_token = str(predict_line.label).split(' ') if len(line_token) != len(label_token): tf.logging.info(predict_line.text) tf.logging.info(predict_line.label) for id in prediction: if id == 0: continue curr_labels = id2label[id] if curr_labels in ['[CLS]', '[SEP]']: continue try: line += line_token[idx] + ' ' + label_token[idx] + ' ' + curr_labels + '\n' except Exception as e: tf.logging.info(e) tf.logging.info(predict_line.text) tf.logging.info(predict_line.label) line = '' break idx += 1 writer.write(line + '\n') from ner.src.conlleval import return_report with codecs.open(output_predict_file, 'w', encoding='utf-8') as writer: result_to_pair(writer) eval_result = return_report(output_predict_file) for line in eval_result: print(line)
INFO:tensorflow:Writing example 0 of 68 INFO:tensorflow:***** Running prediction***** INFO:tensorflow: Num examples = 68 INFO:tensorflow: Batch size = 64 INFO:tensorflow:Calling model_fn. INFO:tensorflow:*** Features *** INFO:tensorflow: name = input_ids, shape = (?, 128) INFO:tensorflow: name = input_mask, shape = (?, 128) INFO:tensorflow: name = label_ids, shape = (?, 128) INFO:tensorflow: name = segment_ids, shape = (?, 128)
Apache-2.0
notebook/DL_nlp_bert_ner/nlp_ner.ipynb
YMJS-Irfan/ModelArts-Lab
Online named entity recognition

Use the model trained above for online testing: you can type in any sentence and run named entity recognition on it. Type "再见" ("goodbye") to end the online NER session. If the program below does not run successfully, it means the GPU memory is still occupied after training; you need to restart the kernel and then run the %run command again. To release the resources: Menu > Kernel > Restart ![release resources](./img/释放资源.png)
%run ner/src/terminal_predict.py
checkpoint path:./ner/output/checkpoint going to restore checkpoint INFO:tensorflow:Restoring parameters from ./ner/output/model.ckpt-1630 {1: 'O', 2: 'B-PER', 3: 'I-PER', 4: 'B-ORG', 5: 'I-ORG', 6: 'B-LOC', 7: 'I-LOC', 8: 'X', 9: '[CLS]', 10: '[SEP]'} ่พ“ๅ…ฅๅฅๅญ: ไธญๅ›ฝ็”ท็ฏฎไธŽๅง”ๅ†…็‘žๆ‹‰้˜ŸๅœจๅŒ—ไบฌไบ”ๆฃตๆพไฝ“่‚ฒ้ฆ†ๅฑ•ๅผ€ๅฐ็ป„่ต›ๆœ€ๅŽไธ€ๅœบๆฏ”่ต›็š„ไบ‰ๅคบ๏ผŒ่ตต็ปงไผŸ12ๅˆ†4ๅŠฉๆ”ป3ๆŠขๆ–ญใ€ๆ˜“ๅปบ่”11ๅˆ†8็ฏฎๆฟใ€ๅ‘จ็ฆ8ๅˆ†7็ฏฎๆฟ2็›–ๅธฝใ€‚ [['B-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'O', 'B-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'O', 'B-LOC', 'I-LOC', 'B-LOC', 'I-LOC', 'I-LOC', 'I-LOC', 'I-LOC', 'I-LOC', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-PER', 'I-PER', 'I-PER', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-PER', 'I-PER', 'I-PER', 'O', 'O', 'O', 'O', 'O', 'O', 'B-PER', 'I-PER', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']] LOC, ๅŒ—ไบฌ, ไบ”ๆฃตๆพไฝ“่‚ฒ้ฆ† PER, ่ตต็ปงไผŸ, ๆ˜“ๅปบ่”, ๅ‘จ็ฆ ORG, ไธญๅ›ฝ็”ท็ฏฎ, ๅง”ๅ†…็‘žๆ‹‰้˜Ÿ time used: 0.908481 sec ่พ“ๅ…ฅๅฅๅญ: ๅ‘จๆฐไผฆ๏ผˆJay Chou๏ผ‰๏ผŒ1979ๅนด1ๆœˆ18ๆ—ฅๅ‡บ็”ŸไบŽๅฐๆนพ็œๆ–ฐๅŒ—ๅธ‚๏ผŒๆฏ•ไธšไบŽๆทกๆฑŸไธญๅญฆ๏ผŒไธญๅ›ฝๅฐๆนพๆต่กŒไน็”ทๆญŒๆ‰‹ใ€‚ [['B-PER', 'I-PER', 'I-PER', 'O', 'B-PER', 'I-PER', 'I-PER', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-LOC', 'I-LOC', 'I-LOC', 'B-LOC', 'I-LOC', 'I-LOC', 'O', 'O', 'O', 'O', 'B-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'O', 'B-LOC', 'I-LOC', 'B-LOC', 'I-LOC', 'O', 'O', 'O', 'O', 'O', 'O', 'O']] LOC, ๅฐๆนพ็œ, ๆ–ฐๅŒ—ๅธ‚, ไธญๅ›ฝ, ๅฐๆนพ PER, ๅ‘จๆฐไผฆ, jaycho##u ORG, ๆทกๆฑŸไธญๅญฆ time used: 0.058148 sec ่พ“ๅ…ฅๅฅๅญ: ้ฉฌไบ‘๏ผŒ1964ๅนด9ๆœˆ10ๆ—ฅ็”ŸไบŽๆต™ๆฑŸ็œๆญๅทžๅธ‚๏ผŒ1988ๅนดๆฏ•ไธšไบŽๆญๅทžๅธˆ่Œƒๅญฆ้™ขๅค–่ฏญ็ณป๏ผŒๅŒๅนดๆ‹…ไปปๆญๅทž็”ตๅญๅทฅไธšๅญฆ้™ข่‹ฑๆ–‡ๅŠๅ›ฝ้™…่ดธๆ˜“ๆ•™ๅธˆใ€‚ [['B-PER', 'I-PER', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-LOC', 'I-LOC', 'I-LOC', 'B-LOC', 'I-LOC', 'I-LOC', 'O', 'O', 'O', 'O', 'O', 'O', 'B-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'O', 'O', 'O', 'O', 'O', 'B-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']] LOC, ๆต™ๆฑŸ็œ, ๆญๅทžๅธ‚ PER, ้ฉฌไบ‘ ORG, ๆญๅทžๅธˆ่Œƒๅญฆ้™ขๅค–่ฏญ็ณป, ๆญๅทž็”ตๅญๅทฅไธšๅญฆ้™ข time used: 0.065471 sec ่พ“ๅ…ฅๅฅๅญ: ๅ†่ง ๅ†่ง
Apache-2.0
notebook/DL_nlp_bert_ner/nlp_ner.ipynb
YMJS-Irfan/ModelArts-Lab
100 pandas puzzlesInspired by [100 Numpy exercises](https://github.com/rougier/numpy-100), here are 100* short puzzles for testing your knowledge of [pandas'](http://pandas.pydata.org/) power.Since pandas is a large library with many different specialist features and functions, these exercises focus mainly on the fundamentals of manipulating data (indexing, grouping, aggregating, cleaning), making use of the core DataFrame and Series objects. Many of the exercises here are straightforward in that the solutions require no more than a few lines of code (in pandas or NumPy... don't go using pure Python or Cython!). Choosing the right methods and following best practices is the underlying goal.The exercises are loosely divided into sections. Each section has a difficulty rating; these ratings are subjective, of course, but should be seen as a rough guide as to how inventive the required solution is.If you're just starting out with pandas and you are looking for some other resources, the official documentation is very extensive. In particular, some good places to get a broader overview of pandas are...- [10 minutes to pandas](http://pandas.pydata.org/pandas-docs/stable/10min.html)- [pandas basics](http://pandas.pydata.org/pandas-docs/stable/basics.html)- [tutorials](http://pandas.pydata.org/pandas-docs/stable/tutorials.html)- [cookbook and idioms](http://pandas.pydata.org/pandas-docs/stable/cookbook.html#cookbook)Enjoy the puzzles!\* *the list of exercises is not yet complete! Pull requests or suggestions for additional exercises, corrections and improvements are welcomed.* Importing pandas Getting started and checking your pandas setupDifficulty: *easy* **1.** Import pandas under the alias `pd`.
import pandas as pd
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**2.** Print the version of pandas that has been imported.
pd.__version__
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**3.** Print out all the *version* information of the libraries that are required by the pandas library.
pd.show_versions()
INSTALLED VERSIONS ------------------ commit : 2cb96529396d93b46abab7bbc73a208e708c642e python : 3.8.8.final.0 python-bits : 64 OS : Windows OS-release : 10 Version : 10.0.22000 machine : AMD64 processor : Intel64 Family 6 Model 142 Stepping 10, GenuineIntel byteorder : little LC_ALL : None LANG : None LOCALE : English_United States.1252 pandas : 1.2.4 numpy : 1.20.1 pytz : 2020.1 dateutil : 2.8.1 pip : 20.2.2 setuptools : 52.0.0.post20210125 Cython : 0.29.23 pytest : 6.2.3 hypothesis : None sphinx : 4.0.1 blosc : None feather : None xlsxwriter : 1.3.8 lxml.etree : 4.6.3 html5lib : 1.1 pymysql : None psycopg2 : 2.8.6 (dt dec pq3 ext lo64) jinja2 : 2.11.3 IPython : 7.22.0 pandas_datareader: 0.10.0 bs4 : 4.9.3 bottleneck : 1.3.2 fsspec : 0.9.0 fastparquet : None gcsfs : None matplotlib : 3.3.4 numexpr : 2.7.3 odfpy : None openpyxl : 3.0.7 pandas_gbq : None pyarrow : None pyxlsb : None s3fs : None scipy : 1.6.2 sqlalchemy : 1.4.7 tables : 3.6.1 tabulate : None xarray : None xlrd : 2.0.1 xlwt : 1.3.0 numba : 0.53.1
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
DataFrame basics A few of the fundamental routines for selecting, sorting, adding and aggregating data in DataFramesDifficulty: *easy*Note: remember to import numpy using:```pythonimport numpy as np```Consider the following Python dictionary `data` and Python list `labels`:``` pythondata = {'animal': ['cat', 'cat', 'snake', 'dog', 'dog', 'cat', 'snake', 'cat', 'dog', 'dog'], 'age': [2.5, 3, 0.5, np.nan, 5, 2, 4.5, np.nan, 7, 3], 'visits': [1, 3, 2, 3, 2, 3, 1, 1, 2, 1], 'priority': ['yes', 'yes', 'no', 'yes', 'no', 'no', 'no', 'yes', 'no', 'no']}labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']```(This is just some meaningless data I made up with the theme of animals and trips to a vet.)**4.** Create a DataFrame `df` from this dictionary `data` which has the index `labels`.
import numpy as np raw_data = {'animal': ['cat', 'cat', 'snake', 'dog', 'dog', 'cat', 'snake', 'cat', 'dog', 'dog'], 'age': [2.5, 3, 0.5, np.nan, 5, 2, 4.5, np.nan, 7, 3], 'visits': [1, 3, 2, 3, 2, 3, 1, 1, 2, 1], 'priority': ['yes', 'yes', 'no', 'yes', 'no', 'no', 'no', 'yes', 'no', 'no']} labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'] df = pd.DataFrame(raw_data, index=labels)# (complete this line of code) df
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**5.** Display a summary of the basic information about this DataFrame and its data (*hint: there is a single method that can be called on the DataFrame*).
df.info()
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**6.** Return the first 3 rows of the DataFrame `df`.
df.iloc[:3,:]
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**7.** Select just the 'animal' and 'age' columns from the DataFrame `df`.
df[['animal', 'age']]
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**8.** Select the data in rows `[3, 4, 8]` *and* in columns `['animal', 'age']`.
df.loc[df.index[[3, 4, 8]], ['animal', 'age']]
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**9.** Select only the rows where the number of visits is greater than 3.
df[df['visits'] > 3]
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**10.** Select the rows where the age is missing, i.e. it is `NaN`.
df[df['age'].isna()]
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**11.** Select the rows where the animal is a cat *and* the age is less than 3.
df[(df['animal'] == 'cat') & (df['age'] < 3)]
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**12.** Select the rows where the age is between 2 and 4 (inclusive).
df[df['age'].between(2, 4)]
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**13.** Change the age in row 'f' to 1.5.
df.loc['f','age'] = 1.5 df
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**14.** Calculate the sum of all visits in `df` (i.e. find the total number of visits).
df['visits'].sum()
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**15.** Calculate the mean age for each different animal in `df`.
df.groupby('animal').agg({'age':'mean'})
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**16.** Append a new row 'k' to `df` with your choice of values for each column. Then delete that row to return the original DataFrame.
df.loc['k'] = ['dog', 5.5, 2, 'no']  # append a new row 'k'
df = df.drop('k')                    # then delete it to return the original DataFrame
df
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**17.** Count the number of each type of animal in `df`.
df['animal'].value_counts()
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**18.** Sort `df` first by the values in the 'age' column in *descending* order, then by the values in the 'visits' column in *ascending* order (so row `i` should be first, and row `d` should be last).
df.sort_values(by = ['age', 'visits'], ascending=[False, True])
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**19.** The 'priority' column contains the values 'yes' and 'no'. Replace this column with a column of boolean values: 'yes' should be `True` and 'no' should be `False`.
df['priority'] = df['priority'].map({'yes': True, 'no': False})
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**20.** In the 'animal' column, change the 'snake' entries to 'python'.
df['animal'] = df['animal'].replace('snake', 'python') df
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**21.** For each animal type and each number of visits, find the mean age. In other words, each row is an animal, each column is a number of visits and the values are the mean ages (*hint: use a pivot table*).
df.pivot_table(index = 'animal', columns = 'visits', values = 'age', aggfunc = 'mean').fillna(0)
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
DataFrames: beyond the basics Slightly trickier: you may need to combine two or more methods to get the right answerDifficulty: *medium*The previous section was a tour through some basic but essential DataFrame operations. Below are some ways that you might need to cut your data, but for which there is no single "out of the box" method. **22.** You have a DataFrame `df` with a column 'A' of integers. For example:```pythondf = pd.DataFrame({'A': [1, 2, 2, 3, 4, 5, 5, 5, 6, 7, 7]})```How do you filter out rows which contain the same integer as the row immediately above?You should be left with a column containing the following values:```python1, 2, 3, 4, 5, 6, 7``` **23.** Given a DataFrame of numeric values, say```pythondf = pd.DataFrame(np.random.random(size=(5, 3))) # a 5x3 frame of float values```how do you subtract the row mean from each element in the row? **24.** Suppose you have a DataFrame with 10 columns of real numbers, for example:```pythondf = pd.DataFrame(np.random.random(size=(5, 10)), columns=list('abcdefghij'))```Which column of numbers has the smallest sum? Return that column's label. **25.** How do you count how many unique rows a DataFrame has (i.e. ignore all rows that are duplicates)? As input, use a DataFrame of zeros and ones with 10 rows and 3 columns.```pythondf = pd.DataFrame(np.random.randint(0, 2, size=(10, 3)))``` The next three puzzles are slightly harder.**26.** In the cell below, you have a DataFrame `df` that consists of 10 columns of floating-point numbers. Exactly 5 entries in each row are NaN values. For each row of the DataFrame, find the *column* which contains the *third* NaN value.You should return a Series of column labels: `e, c, d, h, d`
nan = np.nan data = [[0.04, nan, nan, 0.25, nan, 0.43, 0.71, 0.51, nan, nan], [ nan, nan, nan, 0.04, 0.76, nan, nan, 0.67, 0.76, 0.16], [ nan, nan, 0.5 , nan, 0.31, 0.4 , nan, nan, 0.24, 0.01], [0.49, nan, nan, 0.62, 0.73, 0.26, 0.85, nan, nan, nan], [ nan, nan, 0.41, nan, 0.05, nan, 0.61, nan, 0.48, 0.68]] columns = list('abcdefghij') df = pd.DataFrame(data, columns=columns) # write a solution to the question here
_____no_output_____
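One possible approach (a sketch, not the only solution; it assumes the `df` defined in the cell above): count NaNs cumulatively across each row and take the first column where the running count reaches 3.

```python
# cumulative NaN count per row; idxmax(axis=1) returns the first column
# where the count equals 3 (each row here has exactly 5 NaNs)
(df.isnull().cumsum(axis=1) == 3).idxmax(axis=1)
```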
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**27.** A DataFrame has a column of groups 'grps' and and column of integer values 'vals': ```pythondf = pd.DataFrame({'grps': list('aaabbcaabcccbbc'), 'vals': [12,345,3,1,45,14,4,52,54,23,235,21,57,3,87]})```For each *group*, find the sum of the three greatest values. You should end up with the answer as follows:```grpsa 409b 156c 345```
df = pd.DataFrame({'grps': list('aaabbcaabcccbbc'), 'vals': [12,345,3,1,45,14,4,52,54,23,235,21,57,3,87]}) # write a solution to the question here
_____no_output_____
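One possible approach (a sketch, not the only solution): within each group, keep the three largest values and sum them.

```python
# for each group, take the three largest 'vals' and sum them
df.groupby('grps')['vals'].apply(lambda s: s.nlargest(3).sum())
```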
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**28.** The DataFrame `df` constructed below has two integer columns 'A' and 'B'. The values in 'A' are between 1 and 100 (inclusive). For each group of 10 consecutive integers in 'A' (i.e. `(0, 10]`, `(10, 20]`, ...), calculate the sum of the corresponding values in column 'B'.The answer should be a Series as follows:```A(0, 10] 635(10, 20] 360(20, 30] 315(30, 40] 306(40, 50] 750(50, 60] 284(60, 70] 424(70, 80] 526(80, 90] 835(90, 100] 852```
df = pd.DataFrame(np.random.RandomState(8765).randint(1, 101, size=(100, 2)), columns = ["A", "B"]) # write a solution to the question here
_____no_output_____
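One possible approach (a sketch, not the only solution): bin column 'A' into the intervals (0, 10], (10, 20], ... with `pd.cut`, then sum column 'B' within each bin.

```python
# pd.cut builds the (0, 10], (10, 20], ... bins; groupby then sums 'B' per bin
df.groupby(pd.cut(df['A'], np.arange(0, 101, 10)))['B'].sum()
```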
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
DataFrames: harder problems These might require a bit of thinking outside the box......but all are solvable using just the usual pandas/NumPy methods (and so avoid using explicit `for` loops).Difficulty: *hard* **29.** Consider a DataFrame `df` where there is an integer column 'X':```pythondf = pd.DataFrame({'X': [7, 2, 0, 3, 4, 2, 5, 0, 3, 4]})```For each value, count the difference back to the previous zero (or the start of the Series, whichever is closer). These values should therefore be ```[1, 2, 0, 1, 2, 3, 4, 0, 1, 2]```Make this a new column 'Y'. **30.** Consider the DataFrame constructed below which contains rows and columns of numerical data. Create a list of the column-row index locations of the 3 largest values in this DataFrame. In this case, the answer should be:```[(5, 7), (6, 4), (2, 5)]```
df = pd.DataFrame(np.random.RandomState(30).randint(1, 101, size=(8, 8)))
_____no_output_____
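One possible approach to puzzle 30 (a sketch, not the only solution): stack the DataFrame into a Series whose index is (column, row) pairs, then take the index labels of the three largest values.

```python
# unstack() gives a Series indexed by (column, row); sort descending and
# keep the labels of the three largest values
df.unstack().sort_values(ascending=False)[:3].index.tolist()
```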
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**31.** You are given the DataFrame below with a column of group IDs, 'grps', and a column of corresponding integer values, 'vals'.```pythondf = pd.DataFrame({"vals": np.random.RandomState(31).randint(-30, 30, size=15), "grps": np.random.RandomState(31).choice(["A", "B"], 15)})```Create a new column 'patched_vals' which contains the same values as 'vals', but with any negative values in 'vals' replaced by the group mean:``` vals grps patched_vals0 -12 A 13.61 -7 B 28.02 -14 A 13.63 4 A 4.04 -7 A 13.65 28 B 28.06 -2 A 13.67 -1 A 13.68 8 A 8.09 -2 B 28.010 28 A 28.011 12 A 12.012 16 A 16.013 -24 A 13.614 -12 A 13.6``` **32.** Implement a rolling mean over groups with window size 3, which ignores NaN values. For example, consider the following DataFrame:```python>>> df = pd.DataFrame({'group': list('aabbabbbabab'), 'value': [1, 2, 3, np.nan, 2, 3, np.nan, 1, 7, 3, np.nan, 8]})>>> df group value0 a 1.01 a 2.02 b 3.03 b NaN4 a 2.05 b 3.06 b NaN7 b 1.08 a 7.09 b 3.010 a NaN11 b 8.0```The goal is to compute the Series:```0 1.0000001 1.5000002 3.0000003 3.0000004 1.6666675 3.0000006 3.0000007 2.0000008 3.6666679 2.00000010 4.50000011 4.000000```E.g. the first window of size three for group 'b' has values 3.0, NaN and 3.0 and occurs at row index 5. Instead of being NaN the value in the new column at this row index should be 3.0 (just the two non-NaN values are used to compute the mean, (3+3)/2). Series and DatetimeIndex Exercises for creating and manipulating Series with datetime dataDifficulty: *easy/medium*pandas is fantastic for working with dates and times. These puzzles explore some of this functionality. **33.** Create a DatetimeIndex that contains each business day of 2015 and use it to index a Series of random numbers. Let's call this Series `s`. **34.** Find the sum of the values in `s` for every Wednesday. **35.** For each calendar month in `s`, find the mean of values. **36.** For each group of four consecutive calendar months in `s`, find the date on which the highest value occurred. **37.** Create a DateTimeIndex consisting of the third Thursday in each month for the years 2015 and 2016. Cleaning Data Making a DataFrame easier to work withDifficulty: *easy/medium*It happens all the time: someone gives you data containing malformed strings, Python lists, and missing data. How do you tidy it up so you can get on with the analysis?Take this monstrosity as the DataFrame to use in the following puzzles:```pythondf = pd.DataFrame({'From_To': ['LoNDon_paris', 'MAdrid_miLAN', 'londON_StockhOlm', 'Budapest_PaRis', 'Brussels_londOn'], 'FlightNumber': [10045, np.nan, 10065, np.nan, 10085], 'RecentDelays': [[23, 47], [], [24, 43, 87], [13], [67, 32]], 'Airline': ['KLM(!)', ' (12)', '(British Airways. )', '12. Air France', '"Swiss Air"']})```Formatted, it looks like this:``` From_To FlightNumber RecentDelays Airline0 LoNDon_paris 10045.0 [23, 47] KLM(!)1 MAdrid_miLAN NaN [] (12)2 londON_StockhOlm 10065.0 [24, 43, 87] (British Airways. )3 Budapest_PaRis NaN [13] 12. Air France4 Brussels_londOn 10085.0 [67, 32] "Swiss Air"```(It's some flight data I made up; it's not meant to be accurate in any way.) **38.** Some values in the **FlightNumber** column are missing (they are `NaN`). These numbers are meant to increase by 10 with each row so 10055 and 10075 need to be put in place. Modify `df` to fill in these missing numbers and make the column an integer column (instead of a float column). **39.** The **From\_To** column would be better as two separate columns! 
Split each string on the underscore delimiter `_` to give a new temporary DataFrame called 'temp' with the correct values. Assign the correct column names 'From' and 'To' to this temporary DataFrame. **40.** Notice how the capitalisation of the city names is all mixed up in this temporary DataFrame 'temp'. Standardise the strings so that only the first letter is uppercase (e.g. "londON" should become "London".) **41.** Delete the **From_To** column from `df` and attach the temporary DataFrame 'temp' from the previous questions. **42**. In the **Airline** column, you can see some extra punctuation and symbols have appeared around the airline names. Pull out just the airline name. E.g. `'(British Airways. )'` should become `'British Airways'`. **43**. In the RecentDelays column, the values have been entered into the DataFrame as a list. We would like each first value in its own column, each second value in its own column, and so on. If there isn't an Nth value, the value should be NaN.Expand the Series of lists into a DataFrame named `delays`, rename the columns `delay_1`, `delay_2`, etc. and replace the unwanted RecentDelays column in `df` with `delays`. The DataFrame should look much better now.``` FlightNumber Airline From To delay_1 delay_2 delay_30 10045 KLM London Paris 23.0 47.0 NaN1 10055 Air France Madrid Milan NaN NaN NaN2 10065 British Airways London Stockholm 24.0 43.0 87.03 10075 Air France Budapest Paris 13.0 NaN NaN4 10085 Swiss Air Brussels London 67.0 32.0 NaN``` Using MultiIndexes Go beyond flat DataFrames with additional index levelsDifficulty: *medium*Previous exercises have seen us analysing data from DataFrames equipped with a single index level. However, pandas also gives you the possibility of indexing your data using *multiple* levels. This is very much like adding new dimensions to a Series or a DataFrame. For example, a Series is 1D, but by using a MultiIndex with 2 levels we gain much of the same functionality as a 2D DataFrame.The set of puzzles below explores how you might use multiple index levels to enhance data analysis.To warm up, we'll make a Series with two index levels. **44**. Given the lists `letters = ['A', 'B', 'C']` and `numbers = list(range(10))`, construct a MultiIndex object from the product of the two lists. Use it to index a Series of random numbers. Call this Series `s`. **45.** Check that the index of `s` is lexicographically sorted (this is a necessary property for indexing to work correctly with a MultiIndex). **46**. Select the labels `1`, `3` and `6` from the second level of the MultiIndexed Series. **47**. Slice the Series `s`; slice up to label 'B' for the first level and from label 5 onwards for the second level. **48**. Sum the values in `s` for each label in the first level (you should have a Series giving you a total for labels A, B and C). **49**. Suppose that `sum()` (and other methods) did not accept a `level` keyword argument. How else could you perform the equivalent of `s.sum(level=1)`? **50**. Exchange the levels of the MultiIndex so we have an index of the form (letters, numbers). Is this new Series properly lexsorted? If not, sort it. Minesweeper Generate the numbers for safe squares in a Minesweeper gridDifficulty: *medium* to *hard*If you've ever used an older version of Windows, there's a good chance you've played with Minesweeper:- https://en.wikipedia.org/wiki/Minesweeper_(video_game)If you're not familiar with the game, imagine a grid of squares: some of these squares conceal a mine. 
If you click on a mine, you lose instantly. If you click on a safe square, you reveal a number telling you how many mines are found in the squares that are immediately adjacent. The aim of the game is to uncover all squares in the grid that do not contain a mine.In this section, we'll make a DataFrame that contains the necessary data for a game of Minesweeper: coordinates of the squares, whether the square contains a mine and the number of mines found on adjacent squares. **51**. Let's suppose we're playing Minesweeper on a 5 by 4 grid, i.e.```X = 5Y = 4```To begin, generate a DataFrame `df` with two columns, `'x'` and `'y'`, containing every coordinate for this grid. That is, the DataFrame should start:``` x y0 0 01 0 12 0 2``` **52**. For this DataFrame `df`, create a new column of zeros (safe) and ones (mine). The probability of a mine occurring at each location should be 0.4. **53**. Now create a new column for this DataFrame called `'adjacent'`. This column should contain the number of mines found on adjacent squares in the grid. (E.g. for the first row, which is the entry for the coordinate `(0, 0)`, count how many mines are found on the coordinates `(0, 1)`, `(1, 0)` and `(1, 1)`.) **54**. For rows of the DataFrame that contain a mine, set the value in the `'adjacent'` column to NaN. **55**. Finally, convert the DataFrame to a grid of the adjacent mine counts: columns are the `x` coordinate, rows are the `y` coordinate. Plotting Visualize trends and patterns in dataDifficulty: *medium*To really get a good understanding of the data contained in your DataFrame, it is often essential to create plots: if you're lucky, trends and anomalies will jump right out at you. This functionality is baked into pandas and the puzzles below explore some of what's possible with the library.**56.** Pandas is highly integrated with the plotting library matplotlib, and makes plotting DataFrames very user-friendly! Plotting in a notebook environment usually makes use of the following boilerplate:```pythonimport matplotlib.pyplot as plt%matplotlib inlineplt.style.use('ggplot')```matplotlib is the plotting library which pandas' plotting functionality is built upon, and it is usually aliased to ```plt```.```%matplotlib inline``` tells the notebook to show plots inline, instead of creating them in a separate window. ```plt.style.use('ggplot')``` is a style theme that most people find agreeable, based upon the styling of R's ggplot package.For starters, make a scatter plot of this random data, but use black X's instead of the default markers. ```df = pd.DataFrame({"xs":[1,5,2,8,1], "ys":[4,2,1,9,6]})```Consult the [documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html) if you get stuck! **57.** Columns in your DataFrame can also be used to modify colors and sizes. Bill has been keeping track of his performance at work over time, as well as how good he was feeling that day, and whether he had a cup of coffee in the morning. Make a plot which incorporates all four features of this DataFrame.(Hint: If you're having trouble seeing the plot, try multiplying the Series which you choose to represent size by 10 or more)*The chart doesn't have to be pretty: this isn't a course in data viz!*```df = pd.DataFrame({"productivity":[5,2,3,1,4,5,6,7,8,3,4,8,9], "hours_in" :[1,9,6,5,3,9,2,9,1,7,4,2,2], "happiness" :[2,1,3,2,3,1,2,3,1,2,2,1,3], "caffienated" :[0,0,1,1,0,0,0,0,1,1,0,1,0]})``` **58.** What if we want to plot multiple things? 
Pandas allows you to pass in a matplotlib *Axis* object for plots, and plots will also return an Axis object.Make a bar plot of monthly revenue with a line plot of monthly advertising spending (numbers in millions)```df = pd.DataFrame({"revenue":[57,68,63,71,72,90,80,62,59,51,47,52], "advertising":[2.1,1.9,2.7,3.0,3.6,3.2,2.7,2.4,1.8,1.6,1.3,1.9], "month":range(12) })``` Now we're finally ready to create a candlestick chart, which is a very common tool used to analyze stock price data. A candlestick chart shows the opening, closing, highest, and lowest price for a stock during a time window. The color of the "candle" (the thick part of the bar) is green if the stock closed above its opening price, or red if below.![Candlestick Example](img/candle.jpg)This was initially designed to be a pandas plotting challenge, but it just so happens that this type of plot is just not feasible using pandas' methods. If you are unfamiliar with matplotlib, we have provided a function that will plot the chart for you so long as you can use pandas to get the data into the correct format.Your first step should be to get the data in the correct format using pandas' time-series grouping function. We would like each candle to represent an hour's worth of data. You can write your own aggregation function which returns the open/high/low/close, but pandas has a built-in which also does this. The below cell contains helper functions. Call ```day_stock_data()``` to generate a DataFrame containing the prices a hypothetical stock sold for, and the time the sale occurred. Call ```plot_candlestick(df)``` on your properly aggregated and formatted stock data to print the candlestick chart.
import numpy as np def float_to_time(x): return str(int(x)) + ":" + str(int(x%1 * 60)).zfill(2) + ":" + str(int(x*60 % 1 * 60)).zfill(2) def day_stock_data(): #NYSE is open from 9:30 to 4:00 time = 9.5 price = 100 results = [(float_to_time(time), price)] while time < 16: elapsed = np.random.exponential(.001) time += elapsed if time > 16: break price_diff = np.random.uniform(.999, 1.001) price *= price_diff results.append((float_to_time(time), price)) df = pd.DataFrame(results, columns = ['time','price']) df.time = pd.to_datetime(df.time) return df #Don't read me unless you get stuck! def plot_candlestick(agg): """ agg is a DataFrame which has a DatetimeIndex and five columns: ["open","high","low","close","color"] """ fig, ax = plt.subplots() for time in agg.index: ax.plot([time.hour] * 2, agg.loc[time, ["high","low"]].values, color = "black") ax.plot([time.hour] * 2, agg.loc[time, ["open","close"]].values, color = agg.loc[time, "color"], linewidth = 10) ax.set_xlim((8,16)) ax.set_ylabel("Price") ax.set_xlabel("Hour") ax.set_title("OHLC of Stock Value During Trading Day") plt.show()
_____no_output_____
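As a sketch of one way to finish the exercise (it assumes the helper functions defined above), the simulated trades can be resampled into hourly OHLC bars and handed to `plot_candlestick`:

```python
import matplotlib.pyplot as plt  # needed by plot_candlestick

df = day_stock_data()
df = df.set_index('time')
agg = df['price'].resample('H').ohlc()  # hourly open/high/low/close
agg['color'] = np.where(agg['close'] > agg['open'], 'green', 'red')
plot_candlestick(agg)
```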
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
sample data from a 1d Gaussian mixture model
n_samples = 200 n_components = 3 X, y, true_params = sample_1d_gmm(n_samples=n_samples, n_components=n_components, random_state=1) plot_scatter_1d(X)
_____no_output_____
MIT
doc/example_notebooks/Single view, gaussian mixture model.ipynb
idc9/mvmm
Fit a Gaussian mixture model
# fit a Gaussian mixture model with 3 components (the true number)
# mvmm.single_view.gaussian_mixture.GaussianMixture() is similar to sklearn.mixture.GaussianMixture()
gmm = GaussianMixture(n_components=3, n_init=10)  # 10 random initializations
gmm.fit(X)

# plot parameter estimates
plot_scatter_1d(X)
plot_est_params(gmm)

# the GMM class has all the familiar sklearn functionality
gmm.sample(n_samples=20)
gmm.predict(X)
gmm.score_samples(X)
gmm.predict_proba(X)
gmm.bic(X)

# with a few added API features for convenience

# sample from a single mixture component
gmm.sample_from_comp(y=0)

# observed data log-likelihood
gmm.log_likelihood(X)

# total number of cluster parameters
gmm._n_parameters()

# some additional metadata is stored, such as the fit time (in seconds)
gmm.metadata_['fit_time']

# gmm.opt_data_ stores the optimization history
plot_opt_hist(loss_vals=gmm.opt_data_['history']['loss_val'],
              init_loss_vals=gmm.opt_data_['init_loss_vals'],
              loss_name='observed data negative log likelihood')
_____no_output_____
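As a quick sanity check (a sketch; `pd.crosstab` and the label names below are my additions, while `X`, `y`, and `gmm` come from the cells above), the predicted cluster labels can be cross-tabulated against the true component labels. Note that cluster labels are only identified up to a permutation of the components.

```python
import pandas as pd

# rows: true mixture component, columns: predicted cluster
pd.crosstab(y, gmm.predict(X), rownames=['true component'], colnames=['predicted cluster'])
```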
MIT
doc/example_notebooks/Single view, gaussian mixture model.ipynb
idc9/mvmm
Model selection with BIC
# set up the base estimator for the grid search
# here we add some custom arguments
base_estimator = GaussianMixture(reg_covar=1e-6,
                                 init_params_method='rand_pts',  # initialize cluster means from random data points
                                 n_init=10,
                                 abs_tol=1e-8,
                                 rel_tol=1e-8,
                                 max_n_steps=200)

# do a grid search from 1 to 10 components
param_grid = {'n_components': np.arange(1, 10 + 1)}

# set up the grid search object and fit using the data
grid_search = MMGridSearch(base_estimator=base_estimator, param_grid=param_grid)
grid_search.fit(X)

# the best model is stored in .best_estimator_
print('BIC selected the model with', grid_search.best_estimator_.n_components, ' components')

# all fit estimators are contained in .estimators_
print(len(grid_search.estimators_))

# the model selection scores for each grid point are stored in .model_sel_scores_
print(grid_search.model_sel_scores_)

# plot BIC
n_comp_seq = grid_search.param_grid['n_components']
est_n_comp = grid_search.best_params_['n_components']
bic_values = grid_search.model_sel_scores_['bic']

plt.plot(n_comp_seq, bic_values, marker='.')
plt.axvline(est_n_comp,
            label='estimated {} components'.format(est_n_comp),
            color='red')
plt.legend()
plt.xlabel('n_components')
plt.ylabel('BIC')
set_xaxis_int_ticks()
10 bic aic 0 516.654319 510.057684 1 353.053751 336.562164 2 167.420223 141.033684 3 172.957731 136.676240 4 181.186136 135.009693 5 198.034419 141.963024 6 201.967577 136.001230 7 226.684487 150.823187 8 230.178894 144.422643 9 244.147315 148.496112
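The grid search above is summarized with BIC; as a sketch (assuming the rows of `model_sel_scores_` are in the same order as `param_grid['n_components']`, as the printed table suggests), the AIC-selected model size can be read off as well:

```python
# index of the grid point with the smallest AIC, mapped back to n_components
aic_best_idx = grid_search.model_sel_scores_['aic'].idxmin()
print('AIC would select', n_comp_seq[aic_best_idx], 'components')
```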
MIT
doc/example_notebooks/Single view, gaussian mixture model.ipynb
idc9/mvmm
read the data (only **Germany**)
ger <- read.dta("/...your folder/DAYPOLLS_GER.dta")
_____no_output_____
MIT
R Files/German(Overall).R.ipynb
tzuliu/Do-Scandals-Matter-An-Interrupted-Time-Series-Design-on-Three-Cases
create the date (based on Stata)
ger$date <- seq(as.Date("1957-09-16"),as.Date("2013-09-22"), by="day")
_____no_output_____
MIT
R Files/German(Overall).R.ipynb
tzuliu/Do-Scandals-Matter-An-Interrupted-Time-Series-Design-on-Three-Cases
subset the data
geroa <- ger[ger$date >= "2000-01-01",]
_____no_output_____
MIT
R Files/German(Overall).R.ipynb
tzuliu/Do-Scandals-Matter-An-Interrupted-Time-Series-Design-on-Three-Cases
reducing the data
geroar <- cbind(geroa$poll_p1_ipo, geroa$poll_p4_ipo)
_____no_output_____
MIT
R Files/German(Overall).R.ipynb
tzuliu/Do-Scandals-Matter-An-Interrupted-Time-Series-Design-on-Three-Cases
create the daily time series data
geroar <- zoo(geroar, geroa$date)
_____no_output_____
MIT
R Files/German(Overall).R.ipynb
tzuliu/Do-Scandals-Matter-An-Interrupted-Time-Series-Design-on-Three-Cases
name the columns (not needed for the date)
colnames(geroar) <- c("CDU/CSU", "FDP")
_____no_output_____
MIT
R Files/German(Overall).R.ipynb
tzuliu/Do-Scandals-Matter-An-Interrupted-Time-Series-Design-on-Three-Cases
search for the index of the date when the scandal happened
which(time(geroar)=="2010-12-02") which(time(geroar)=="2011-02-16")
_____no_output_____
MIT
R Files/German(Overall).R.ipynb
tzuliu/Do-Scandals-Matter-An-Interrupted-Time-Series-Design-on-Three-Cases
create values for vline, one for each panel
# panel function: draws the series, then adds a dashed red vertical line for
# the current panel; assumes a vector `vlines` of cut dates (e.g. the two
# scandal dates found above) exists in the calling environment
v.panel <- function(x, ...){
  lines(x, ...)
  panel.number <- parent.frame()$panel.number
  abline(v = vlines[panel.number], col = "red", lty=2)
}
_____no_output_____
MIT
R Files/German(Overall).R.ipynb
tzuliu/Do-Scandals-Matter-An-Interrupted-Time-Series-Design-on-Three-Cases
plot **CDU/CSU** after 2000
plot(geroar$CDU, main="CDU/CSU after 2000", xlab="Time", ylab="Approval Rate") abline(v=time(geroar$CDU)[3989], lty=2, col="red") abline(v=time(geroar$CDU)[4065], lty=2, col="red")
_____no_output_____
MIT
R Files/German(Overall).R.ipynb
tzuliu/Do-Scandals-Matter-An-Interrupted-Time-Series-Design-on-Three-Cases
plot **FDP** after 2000
plot(geroar$FDP, main="FDP after 2000", xlab="Time", ylab="Approval Rate") abline(v=time(geroar$CDU)[3989], lty=2, col="red") abline(v=time(geroar$CDU)[4065], lty=2, col="red")
_____no_output_____
MIT
R Files/German(Overall).R.ipynb
tzuliu/Do-Scandals-Matter-An-Interrupted-Time-Series-Design-on-Three-Cases
CS-109B Introduction to Data Science Lab 5: Convolutional Neural Networks**Harvard University****Spring 2019****Lab instructor:** Eleni Kaxiras**Instructors:** Pavlos Protopapas and Mark Glickman**Authors:** Eleni Kaxiras, Pavlos Protopapas, Patrick Ohiomoba, and Davis Sontag
# RUN THIS CELL TO PROPERLY HIGHLIGHT THE EXERCISES import requests from IPython.core.display import HTML styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2019-CS109B/master/content/styles/cs109.css").text HTML(styles)
_____no_output_____
MIT
docs/lectures/lecture8/lab5/cs109b-lab5-cnn-solutions.ipynb
rahuliem/2019-CS109B-1
Learning GoalsIn this lab we will look at Convolutional Neural Networks (CNNs), and their building blocks.By the end of this lab, you should:- know how to put together the building blocks used in CNNs - such as convolutional layers and pooling layers - in `keras` with an example.- have a good understanding of how images, a common type of data for a CNN, are represented in the computer and how to think of them as arrays of numbers. - be familiar with preprocessing images with `keras` and `scikit-learn`.- use `keras-vis` to produce Saliency maps. - learn best practices for configuring the hyperparameters of a CNN.- run your first CNN and see the error rate.
import matplotlib.pyplot as plt plt.rcParams["figure.figsize"] = (5,5) import numpy as np from scipy.optimize import minimize import tensorflow as tf import keras from keras import layers from keras import models from keras import utils from keras.layers import Dense from keras.models import Sequential from keras.layers import Flatten from keras.layers import Dropout from keras.layers import Activation from keras.regularizers import l2 from keras.optimizers import SGD from keras.optimizers import RMSprop from keras import datasets from keras.preprocessing.image import ImageDataGenerator from keras.callbacks import LearningRateScheduler from keras.callbacks import History from keras import losses from keras.datasets import mnist from keras.utils import to_categorical from sklearn.utils import shuffle print(tf.VERSION) print(tf.keras.__version__) %matplotlib inline
1.12.0 2.1.6-tf
MIT
docs/lectures/lecture8/lab5/cs109b-lab5-cnn-solutions.ipynb
rahuliem/2019-CS109B-1
Prologue: `keras-vis` Visualization Toolkit`keras-vis` is a high-level toolkit for visualizing and debugging your trained keras neural net models. Currently supported visualizations include:- Activation maximization- **Saliency maps** - Class activation mapsAll visualizations by default support N-dimensional image inputs. i.e., it generalizes to N-dim image inputs to your model. Compatible with both theano and tensorflow backends with 'channels_first', 'channels_last' data format.Read the documentation at https://raghakot.github.io/keras-vis.https://github.com/raghakot/keras-visTo install use `pip install git+https://github.com/raghakot/keras-vis.git --upgrade` SEAS JupyterHub[Instructions for Using SEAS JupyterHub](https://canvas.harvard.edu/courses/48088/pages/instructions-for-using-seas-jupyterhub)SEAS and FAS are providing you with a platform in AWS to use for the class (accessible from the 'Jupyter' menu link in Canvas). These are AWS p2 instances with a GPU, 10GB of disk space, and 61 GB of RAM, for faster training for your networks. Most of the libraries such as keras, tensorflow, pandas, etc. are pre-installed. If a library is missing you may install it via the Terminal.**NOTE: The AWS platform is funded by SEAS and FAS for the purposes of the class. It is not running against your individual credit. You are not allowed to use it for purposes not related to this course.****Help us keep this service: Make sure you stop your instance as soon as you do not need it.**![aws-dog](fig/aws-dog.jpeg) Part 1: Parts of a Convolutional Neural NetThere are four types of layers in a Convolutional Neural Network:- Convolutional Layers.- Pooling Layers.- Dropout Layers.- Fully Connected Layers. a. Convolutional Layers.Convolutional layers are comprised of **filters** and **feature maps**. The filters are essentially the **neurons** of the layer. They have the weights and produce the input for the next layer. The feature map is the output of one filter applied to the previous layer. The fundamental difference between a densely connected layer and a convolution layer is that dense layers learn global patterns in their input feature space (for example, for an MNIST digit, patterns involving all pixels), whereas convolution layers learn local patterns: in the case of images, patterns found in small 2D windows of the inputs called *receptive fields*. This key characteristic gives convnets two interesting properties:- The patterns they learn are **translation invariant**. After learning a certain pattern in the lower-right corner of a picture, a convnet can recognize it anywhere: for example, in the upper-left corner. A densely connected network would have to learn the pattern anew if it appeared at a new location. This makes convnets data efficient when processing images (because the visual world is fundamentally translation invariant): they need fewer training samples to learn representations that have generalization power.- They can learn **spatial hierarchies of patterns**. A first convolution layer will learn small local patterns such as edges, a second convolution layer will learn larger patterns made of the features of the first layers, and so on. This allows convnets to efficiently learn increasingly complex and abstract visual concepts (because the visual world is fundamentally spatially hierarchical).Convolutions operate over 3D tensors, called feature maps, with two spatial axes (height and width) as well as a depth axis (also called the channels axis). 
For an RGB image, the dimension of the depth axis is 3, because the image has three color channels: red, green, and blue. For a black-and-white picture, like the MNIST digits, the depth is 1 (levels of gray). The convolution operation extracts patches from its input feature map and applies the same transformation to all of these patches, producing an output feature map. This output feature map is still a 3D tensor: it has a width and a height. Its depth can be arbitrary, because the output depth is a parameter of the layer, and the different channels in that depth axis no longer stand for specific colors as in RGB input; rather, they stand for filters. Filters encode specific aspects of the input data: at a high level, a single filter could encode the concept "presence of a face in the input," for instance.In the MNIST example that we will see, the first convolution layer takes a feature map of size (28, 28, 1) and outputs a feature map of size (26, 26, 32): it computes 32 filters over its input. Each of these 32 output channels contains a 26×26 grid of values, which is a response map of the filter over the input, indicating the response of that filter pattern at different locations in the input. Convolutions are defined by two key parameters:- Size of the patches extracted from the inputs. These are typically 3×3 or 5×5 - The number of filters computed by the convolution. **Padding**: One of "valid", "causal" or "same" (case-insensitive). "valid" means "no padding". "same" results in padding the input such that the output has the same length as the original input. "causal" results in causal (dilated) convolutions.In `keras` see [convolutional layers](https://keras.io/layers/convolutional/)**keras.layers.Conv2D**(filters, kernel_size, strides=(1, 1), padding='valid', activation=None, use_bias=True, kernel_initializer='glorot_uniform', data_format='channels_last', bias_initializer='zeros') How are the values in feature maps calculated?![title](fig/convolution-many-filters.png) Exercise 1: - Compute the operations by hand (assuming zero padding and same arrays for all channels) to produce the first element of the 4x4 feature map. How did we get the 4x4 output size? - Write this Conv layer in keras -- your answer here b. Pooling Layers.Pooling layers are also comprised of filters and feature maps. Let's say the pooling layer has a 2x2 receptive field and a stride of 2. This stride results in feature maps that are one half the size of the input feature maps. We can use a max() operation for each receptive field. In `keras` see [pooling layers](https://keras.io/layers/pooling/)**keras.layers.MaxPooling2D**(pool_size=(2, 2), strides=None, padding='valid', data_format=None)![Max Pool](fig/MaxPool.png) c. Dropout Layers.Dropout consists in randomly setting a fraction rate of input units to 0 at each update during training time, which helps prevent overfitting.In `keras` see [Dropout layers](https://keras.io/layers/core/)keras.layers.Dropout(rate, seed=None)rate: float between 0 and 1. Fraction of the input units to drop.seed: A Python integer to use as random seed.References[Dropout: A Simple Way to Prevent Neural Networks from Overfitting](http://www.jmlr.org/papers/volume15/srivastava14a/srivastava14a.pdf) d. Fully Connected Layers.A fully connected layer flattens the square feature map into a vector. Then we can use a sigmoid or softmax activation function to output probabilities of classes. 
In `keras` see [FC layers](https://keras.io/layers/core/)**keras.layers.Dense**(units, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros') IT'S ALL ABOUT THE HYPERPARAMETERS!- stride- size of filter- number of filters- poolsize Part 2: Preprocessing the data Taking a look at how images are represented in a computer using a photo of a Picasso sculpture
img = plt.imread('data/picasso.png') img.shape img[1,:,1] print(type(img[50][0][0])) # let's see the image imgplot = plt.imshow(img)
_____no_output_____
MIT
docs/lectures/lecture8/lab5/cs109b-lab5-cnn-solutions.ipynb
rahuliem/2019-CS109B-1
Visualizing the channels
R_img = img[:,:,0] G_img = img[:,:,1] B_img = img[:,:,2] plt.subplot(221) plt.imshow(R_img, cmap=plt.cm.Reds) plt.subplot(222) plt.imshow(G_img, cmap=plt.cm.Greens) plt.subplot(223) plt.imshow(B_img, cmap=plt.cm.Blues) plt.subplot(224) plt.imshow(img) plt.show()
_____no_output_____
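As a small illustrative aside (not part of the original lab), the three color channels can be collapsed into a single grayscale channel with standard luminance weights, underlining that the image is just an array of numbers:

```python
# weighted sum over the RGB channels (ignore a possible alpha channel)
gray = img[:, :, :3] @ np.array([0.299, 0.587, 0.114])
print(gray.shape)

plt.imshow(gray, cmap=plt.cm.gray)
plt.show()
```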
MIT
docs/lectures/lecture8/lab5/cs109b-lab5-cnn-solutions.ipynb
rahuliem/2019-CS109B-1
More on preprocessing data below! If you want to learn more: [Image Processing with Python and Scipy](http://prancer.physics.louisville.edu/astrowiki/index.php/Image_processing_with_Python_and_SciPy) Part 3: Putting the Parts together to make a small ConvNet ModelLet's put all the parts together to make a convnet for classifying our good old MNIST digits.
# Load data and preprocess (train_images, train_labels), (test_images, test_labels) = mnist.load_data() # load MNIST data train_images.shape train_images.max(), train_images.min() train_images = train_images.reshape((60000, 28, 28, 1)) # Reshape to get third dimension train_images = train_images.astype('float32') / 255 # Normalize between 0 and 1 test_images = test_images.reshape((10000, 28, 28, 1)) # Reshape to get third dimension test_images = test_images.astype('float32') / 255 # Normalize between 0 and 1 # Convert labels to categorical data train_labels = to_categorical(train_labels) test_labels = to_categorical(test_labels) mnist_cnn_model = models.Sequential() # Create sequential model # Add network layers mnist_cnn_model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1))) mnist_cnn_model.add(layers.MaxPooling2D((2, 2))) mnist_cnn_model.add(layers.Conv2D(64, (3, 3), activation='relu')) mnist_cnn_model.add(layers.MaxPooling2D((2, 2))) mnist_cnn_model.add(layers.Conv2D(64, (3, 3), activation='relu'))
_____no_output_____
MIT
docs/lectures/lecture8/lab5/cs109b-lab5-cnn-solutions.ipynb
rahuliem/2019-CS109B-1
The next step is to feed the last output tensor (of shape (3, 3, 64)) into a densely connected classifier network like those you're already familiar with: a stack of Dense layers. These classifiers process vectors, which are 1D, whereas the current output is a 3D tensor. First we have to flatten the 3D outputs to 1D, and then add a few Dense layers on top.
mnist_cnn_model.add(layers.Flatten()) mnist_cnn_model.add(layers.Dense(64, activation='relu')) mnist_cnn_model.add(layers.Dense(10, activation='softmax')) mnist_cnn_model.summary() # Compile model mnist_cnn_model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) # Fit the model mnist_cnn_model.fit(train_images, train_labels, epochs=5, batch_size=64) # Evaluate the model on the test data: test_loss, test_acc = mnist_cnn_model.evaluate(test_images, test_labels) test_acc
Epoch 1/5 60000/60000 [==============================] - 21s 343us/step - loss: 0.1780 - acc: 0.9456 Epoch 2/5 60000/60000 [==============================] - 21s 352us/step - loss: 0.0479 - acc: 0.9854 Epoch 3/5 60000/60000 [==============================] - 25s 419us/step - loss: 0.0341 - acc: 0.9896 Epoch 4/5 60000/60000 [==============================] - 21s 349us/step - loss: 0.0254 - acc: 0.9922 Epoch 5/5 60000/60000 [==============================] - 21s 347us/step - loss: 0.0200 - acc: 0.9941 10000/10000 [==============================] - 1s 124us/step
MIT
docs/lectures/lecture8/lab5/cs109b-lab5-cnn-solutions.ipynb
rahuliem/2019-CS109B-1
A densely connected network (MLP) running MNIST usually has a test accuracy of 97.8%, whereas our basic convnet has a test accuracy of 99.03%: we decreased the error rate by 68% (relative) with only 5 epochs. Not bad! But why does this simple convnet work so well, compared to a densely connected model? The answer lies in how convolutional layers work, as explained above! Data Preprocessing: Meet the `ImageDataGenerator` class in `keras` [(docs)](https://keras.io/preprocessing/image/) The MNIST and other pre-loaded datasets are formatted in a way that is almost ready for feeding into the model. What about plain images? They should be formatted into appropriately preprocessed floating-point tensors before being fed into the network.The Dogs vs. Cats dataset that you'll use isn't packaged with Keras. It was made available by Kaggle as part of a computer-vision competition in late 2013, back when convnets weren't mainstream. The data has been downloaded for you from https://www.kaggle.com/c/dogs-vs-cats/data The pictures are medium-resolution color JPEGs.
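Before moving on to the Dogs vs. Cats data, here is a minimal dense-only (MLP) baseline for the MNIST comparison made above. This is only a sketch: the layer sizes and training settings are illustrative assumptions, not the exact model behind the 97.8% figure.

```python
# a simple MLP baseline on the same preprocessed MNIST tensors as above
mlp = models.Sequential()
mlp.add(layers.Flatten(input_shape=(28, 28, 1)))
mlp.add(layers.Dense(512, activation='relu'))
mlp.add(layers.Dense(10, activation='softmax'))

mlp.compile(optimizer='rmsprop',
            loss='categorical_crossentropy',
            metrics=['accuracy'])
mlp.fit(train_images, train_labels, epochs=5, batch_size=64)

mlp_test_loss, mlp_test_acc = mlp.evaluate(test_images, test_labels)
mlp_test_acc
```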
# TODO: set your base dir to your correct local location base_dir = 'data/cats_and_dogs_small' import os, shutil # Set up directory information train_dir = os.path.join(base_dir, 'train') validation_dir = os.path.join(base_dir, 'validation') test_dir = os.path.join(base_dir, 'test') train_cats_dir = os.path.join(train_dir, 'cats') train_dogs_dir = os.path.join(train_dir, 'dogs') validation_cats_dir = os.path.join(validation_dir, 'cats') validation_dogs_dir = os.path.join(validation_dir, 'dogs') test_cats_dir = os.path.join(test_dir, 'cats') test_dogs_dir = os.path.join(test_dir, 'dogs') print('total training cat images:', len(os.listdir(train_cats_dir))) print('total training dog images:', len(os.listdir(train_dogs_dir))) print('total validation cat images:', len(os.listdir(validation_cats_dir))) print('total validation dog images:', len(os.listdir(validation_dogs_dir))) print('total test cat images:', len(os.listdir(test_cats_dir))) print('total test dog images:', len(os.listdir(test_dogs_dir)))
total training cat images: 1000 total training dog images: 1000 total validation cat images: 500 total validation dog images: 500 total test cat images: 500 total test dog images: 500
MIT
docs/lectures/lecture8/lab5/cs109b-lab5-cnn-solutions.ipynb
rahuliem/2019-CS109B-1
So you do indeed have 2,000 training images, 1,000 validation images, and 1,000 test images. Each split contains the same number of samples from each class: this is a balanced binary-classification problem, which means classification accuracy will be an appropriate measure of success. Building the network
from keras import layers from keras import models model = models.Sequential() model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3))) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(64, (3, 3), activation='relu')) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(128, (3, 3), activation='relu')) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(128, (3, 3), activation='relu')) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Flatten()) model.add(layers.Dense(512, activation='relu')) model.add(layers.Dense(1, activation='sigmoid')) model.summary()
_________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_4 (Conv2D) (None, 148, 148, 32) 896 _________________________________________________________________ max_pooling2d_3 (MaxPooling2 (None, 74, 74, 32) 0 _________________________________________________________________ conv2d_5 (Conv2D) (None, 72, 72, 64) 18496 _________________________________________________________________ max_pooling2d_4 (MaxPooling2 (None, 36, 36, 64) 0 _________________________________________________________________ conv2d_6 (Conv2D) (None, 34, 34, 128) 73856 _________________________________________________________________ max_pooling2d_5 (MaxPooling2 (None, 17, 17, 128) 0 _________________________________________________________________ conv2d_7 (Conv2D) (None, 15, 15, 128) 147584 _________________________________________________________________ max_pooling2d_6 (MaxPooling2 (None, 7, 7, 128) 0 _________________________________________________________________ flatten_2 (Flatten) (None, 6272) 0 _________________________________________________________________ dense_3 (Dense) (None, 512) 3211776 _________________________________________________________________ dense_4 (Dense) (None, 1) 513 ================================================================= Total params: 3,453,121 Trainable params: 3,453,121 Non-trainable params: 0 _________________________________________________________________
MIT
docs/lectures/lecture8/lab5/cs109b-lab5-cnn-solutions.ipynb
rahuliem/2019-CS109B-1
For the compilation step, you'll go with the RMSprop optimizer. Because you ended the network with a single sigmoid unit, you'll use binary crossentropy as the loss.
from keras import optimizers model.compile(loss='binary_crossentropy', optimizer=optimizers.RMSprop(lr=1e-4), metrics=['acc'])
_____no_output_____
MIT
docs/lectures/lecture8/lab5/cs109b-lab5-cnn-solutions.ipynb
rahuliem/2019-CS109B-1
The steps for getting it into the network are roughly as follows:1. Read the picture files.2. Decode the JPEG content to RGB grids of pixels.3. Convert these into floating-point tensors.4. Rescale the pixel values (between 0 and 255) to the [0, 1] interval (as you know, neural networks prefer to deal with small input values).It may seem a bit daunting, but fortunately Keras has utilities to take care of these steps automatically with the class `ImageDataGenerator`, which lets you quickly set up Python generators that can automatically turn image files on disk into batches of preprocessed tensors. This is what you'll use here.
from keras.preprocessing.image import ImageDataGenerator train_datagen = ImageDataGenerator(rescale=1./255) test_datagen = ImageDataGenerator(rescale=1./255) train_generator = train_datagen.flow_from_directory( train_dir, target_size=(150, 150), batch_size=20, class_mode='binary') validation_generator = test_datagen.flow_from_directory( validation_dir, target_size=(150, 150), batch_size=20, class_mode='binary')
Found 2000 images belonging to 2 classes. Found 1000 images belonging to 2 classes.
MIT
docs/lectures/lecture8/lab5/cs109b-lab5-cnn-solutions.ipynb
rahuliem/2019-CS109B-1
Let's look at the output of one of these generators: it yields batches of 150×150 RGB images (shape (20, 150, 150, 3)) and binary labels (shape (20,)). There are 20 samples in each batch (the batch size). Note that the generator yields these batches indefinitely: it loops endlessly over the images in the target folder. For this reason, you need to break the iteration loop at some point:
for data_batch, labels_batch in train_generator: print('data batch shape:', data_batch.shape) print('labels batch shape:', labels_batch.shape) break
data batch shape: (20, 150, 150, 3) labels batch shape: (20,)
MIT
docs/lectures/lecture8/lab5/cs109b-lab5-cnn-solutions.ipynb
rahuliem/2019-CS109B-1
Let's fit the model to the data using the generator. You do so using the `.fit_generator` method, the equivalent of `.fit` for data generators like this one. It expects as its first argument a Python generator that will yield batches of inputs and targets indefinitely, like this one does. Because the data is being generated endlessly, the Keras model needs to know how many samples to draw from the generator before declaring an epoch over. This is the role of the `steps_per_epoch` argument: after having drawn steps_per_epoch batches from the generator (that is, after having run for steps_per_epoch gradient descent steps), the fitting process will go to the next epoch. In this case, batches are 20 samples, so it will take 100 batches until you see your target of 2,000 samples.When using fit_generator, you can pass a validation_data argument, much as with the fit method. It's important to note that this argument is allowed to be a data generator, but it could also be a tuple of Numpy arrays. If you pass a generator as validation_data, then this generator is expected to yield batches of validation data endlessly; thus you should also specify the validation_steps argument, which tells the process how many batches to draw from the validation generator for evaluation.
history = model.fit_generator(
    train_generator,
    steps_per_epoch=100,
    epochs=5,  # TODO: should be 30
    validation_data=validation_generator,
    validation_steps=50)

# It's good practice to always save your models after training.
model.save('cats_and_dogs_small_1.h5')
Epoch 1/5 100/100 [==============================] - 55s 549ms/step - loss: 0.6885 - acc: 0.5320 - val_loss: 0.6711 - val_acc: 0.6220 Epoch 2/5 100/100 [==============================] - 56s 558ms/step - loss: 0.6620 - acc: 0.5950 - val_loss: 0.6500 - val_acc: 0.6170 Epoch 3/5 100/100 [==============================] - 56s 562ms/step - loss: 0.6198 - acc: 0.6510 - val_loss: 0.6771 - val_acc: 0.5790 Epoch 4/5 100/100 [==============================] - 57s 567ms/step - loss: 0.5733 - acc: 0.6955 - val_loss: 0.5993 - val_acc: 0.6740 Epoch 5/5 100/100 [==============================] - 57s 566ms/step - loss: 0.5350 - acc: 0.7305 - val_loss: 0.6140 - val_acc: 0.6520
MIT
docs/lectures/lecture8/lab5/cs109b-lab5-cnn-solutions.ipynb
rahuliem/2019-CS109B-1