String formatting

Using curly brackets with the [various options](https://pyformat.info/) available to the `.format()` method, you can create string templates for your data. Some examples:
# date in m/d/yyyy format
in_date = '8/17/1982'

# split out individual pieces of the date
# using a shortcut method to assign variables to the resulting list
month, day, year = in_date.split('/')

# reshuffle as yyyy-mm-dd using .format()
# use a formatting option (:0>2) to left-pad month/day numbers with a zero
out_date = '{}-{:0>2}-{:0>2}'.format(year, month, day)
print(out_date)

# construct a greeting template
greeting = 'Hello, {}! My name is {}.'
your_name = 'Pat'
my_name = 'Cody'
print(greeting.format(your_name, my_name))
_____no_output_____
MIT
completed/00. Python Fundamentals (Part 1).ipynb
cjwinchester/cfj-2017
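As a supplementary note (not part of the original notebook), the same left-padding and reordering can also be written with an f-string (Python 3.6+); a minimal sketch:

```python
# hypothetical f-string equivalent of the .format() example above
in_date = '8/17/1982'
month, day, year = in_date.split('/')

# :0>2 left-pads the month and day with zeros, just like the .format() option
out_date = f'{year}-{month:0>2}-{day:0>2}'
print(out_date)  # 1982-08-17
```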
Type coercion

Consider:

```python
# this is a number, can't do string-y things to it
age = 32

# this is a string, can't do number-y things to it
age = '32'
```

There are several functions you can use to _coerce_ a value of one type to a value of another type. Here are a couple of them:

- `int()` tries to convert to an integer
- `str()` tries to convert to a string
- `float()` tries to convert to a float
# two strings of numbers
num_1 = '100'
num_2 = '200'

# what happens when you add them without coercing?
concat = num_1 + num_2
print(concat)

# coerce to integer, then add them
added = int(num_1) + int(num_2)
print(added)
_____no_output_____
MIT
completed/00. Python Fundamentals (Part 1).ipynb
cjwinchester/cfj-2017
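As a supplementary sketch (not part of the original notebook), `float()` works the same way, and coercion raises a `ValueError` when the text isn't a valid number:

```python
# hypothetical example extending the coercion cell above
price = '19.99'
print(float(price) * 2)  # 39.98

# coercing something that isn't a number fails loudly
try:
    int('not a number')
except ValueError as err:
    print('coercion failed:', err)
```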
Evaluation - Echo Chamber

The primary goal of this project is to provide users with recommendations that are different from those produced by an ALS recommendation system, but not too different. To evaluate the performance of the augmented model, four metrics were developed to measure how different the movies recommended by the augmented model are from those recommended by the ALS model.

The first metric looks at a user's Top 100 ALS recommendations (based on predicted user ratings) and the Top 100 recommendations from the user's cluster (based on the cluster centroid's ratings). The proportion of the ALS-recommended movies also found in the cluster recommendations should be high. A high proportion indicates that the user's cluster is representative of the user's movie preferences.

The second metric looks at a user's Top 100 ALS recommendations (based on predicted user ratings) and the top recommendations from the augmented model (based on the cluster centroid's ratings for the two clusters nearest to the user's cluster). The proportion of the ALS-recommended movies also found in the augmented model recommendations should be low. A low proportion indicates that the recommendations from the augmented model differ from those produced by the ALS model.

The third metric utilizes the distance between movies (based on the ALS item factors) to evaluate the extent to which the movies from the augmented model are qualitatively different from the movies from the ALS model. For this metric, the mean squared distance between the Top 100 movies from the ALS model (excluding the distance between a movie and itself) is calculated for each user in the sample. Likewise, the mean squared distance between each of the Top 100 ALS movies and each of the top movies from the augmented model is calculated for each user in the sample. The difference between these mean squared distances for the sample is tested using a t-test. A negative and statistically significant t-statistic indicates the mean squared distance between the two sets of recommendations is greater than the mean squared distance within the ALS recommendations and, thus, the movies from the augmented model are qualitatively different from the movies from the ALS model.

The final metric evaluates whether the movies recommended by the augmented model are too qualitatively different from the ALS recommendations. The metric tests the difference between two differences: the difference between ALS and the augmented model, and the difference between ALS and recommendations from the two clusters furthest away from the user's cluster. These differences are calculated in the same way as described above for the third metric. The difference in differences is tested using a t-test. A positive and statistically significant t-statistic indicates the difference between the ALS recommendations and those from the furthest clusters is greater than the distance between the ALS recommendations and the augmented model's recommendations. A greater distance is evidence that the augmented model recommendations are not too different from the ALS recommendations, since the movies recommended by the furthest clusters are more different.

The performance of the augmented model is evaluated in the cells below by applying the above metrics to a sample of 1000 users from the MovieLens dataset. The links above can be used to skip to a particular metric in Sections 5 - 8. Sections 2 - 4 demonstrate the process for importing, sampling, filtering, and engineering the data for evaluation.

Local Code Imports
from src import model as mdl from src import custom as cm
_____no_output_____
ADSL
notebooks/evaluation_drm_ec.ipynb
DRyanMiller/Echo_Chamber
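To make the overlap-based metrics concrete, here is a minimal, hypothetical toy sketch (not part of the notebook) of the proportion-in-common calculation; the actual computation over the 1000-user sample appears in the metric sections below:

```python
# hypothetical toy Top-N lists; the notebook uses Top 100 lists per user
als_top = {'Movie A', 'Movie B', 'Movie C', 'Movie D'}
cluster_top = {'Movie B', 'Movie C', 'Movie E', 'Movie F'}

# proportion of the ALS-recommended movies also found in the cluster recommendations
overlap = als_top.intersection(cluster_top)
print(len(overlap) / len(als_top))  # 0.5
```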
Code Imports
import pandas as pd import numpy as np from joblib import load from scipy import stats import matplotlib.pyplot as plt %matplotlib inline
_____no_output_____
ADSL
notebooks/evaluation_drm_ec.ipynb
DRyanMiller/Echo_Chamber
Data Import Data Files
user_fac = pd.read_csv('../data/processed/user_factors.csv', index_col='id') item_fac = pd.read_csv( '../data/processed/item_factors_unstacked.csv', index_col='id')
_____no_output_____
ADSL
notebooks/evaluation_drm_ec.ipynb
DRyanMiller/Echo_Chamber
Build Rankings Matrix The following cell multiplies the user factors and item factors calculated from the ALS model implementation to get the ALS movie ratings for each user (see the SparkALS.py file in the model folder for the ALS implementation code).
ALS_rankings_matrix = user_fac.to_numpy().dot(item_fac.T.to_numpy()) ALS_rankings_matrix.shape
_____no_output_____
ADSL
notebooks/evaluation_drm_ec.ipynb
DRyanMiller/Echo_Chamber
Sample Users

To sample users, an index of random values is generated. The index is then used to filter rows from the user factors object (user_fac) and the ALS rankings matrix. The sample is transformed from a numpy array to a dataframe, the columns are set to the movie IDs (taken from the item_fac index), and the dataframe is then transposed so that each column represents a user and each row a movie. Lastly, the index is reset, making the movieId a column in the dataframe.
idx = np.random.randint(0, 243658, size=1000) sample_user_facs = user_fac.to_numpy()[idx, :] sample = ALS_rankings_matrix[idx, :] sample_df = pd.DataFrame(sample) sample_df.columns = item_fac.index sample_T = sample_df.T sample_T.reset_index(inplace=True)
_____no_output_____
ADSL
notebooks/evaluation_drm_ec.ipynb
DRyanMiller/Echo_Chamber
Filter for Most Rated Movies The ALS and augmented models used in this project only recommend movies that have been rated by more than 50 users. The file most_rated.csv contains the movie ID, title, and genre for these often rated movies. Using the most rated movies, the sample is filtered to include only those movies with more than 50 user reviews. This is done so that the evaluation below is based on the same set of possible movie recommendations as used in the recommendation system.
most_rated = pd.read_csv( '../data/processed/most_rated.csv', index_col='Unnamed: 0') sample_redx = pd.merge(sample_T, most_rated, how='inner', left_on='id', right_on='movieId') sample_redx.set_index('id', inplace=True) sample_redx.drop(['movieId', 'title', 'genres'], axis=1, inplace=True)
_____no_output_____
ADSL
notebooks/evaluation_drm_ec.ipynb
DRyanMiller/Echo_Chamber
Get ALS Top 100 for Sample The following cell creates a list of the Top 100 rated movies for each user based on the ALS model.
als_top_100s = [] for idx, col in enumerate(sample_redx): top_100 = sample_redx[col].sort_values(ascending=False).head(100) top_100_df = pd.DataFrame(top_100) top_100_df.reset_index(inplace=True) als_top_100s.append(top_100_df)
_____no_output_____
ADSL
notebooks/evaluation_drm_ec.ipynb
DRyanMiller/Echo_Chamber
Get Cluster Top 100 for Sample

The augmented model gets recommendations from the two clusters nearest to the user's cluster. The following cells predict the cluster of each user in the sample, create a dataframe with the movie ratings for each cluster, and generate a list of the Top 100 movie recommendations for each user in the sample from the user's predicted cluster.

Predict Users' Clusters
gbc = load('../models/fifp_classification.joblib') preds = gbc.predict(sample_user_facs)
_____no_output_____
ADSL
notebooks/evaluation_drm_ec.ipynb
DRyanMiller/Echo_Chamber
Get Cluster Centroid Ratings
centroids = pd.read_csv( '../data/processed/centroids.csv', index_col='Unnamed: 0') centroid_ratings_T_df = cm.get_centroid_ratings(centroids, item_fac) centroid_ratings_T_df.reset_index(inplace=True) centroid_ratings_redx = pd.merge( centroid_ratings_T_df, most_rated, how='inner', left_on='id', right_on='movieId') centroid_ratings_redx.set_index('id', inplace=True) centroid_ratings_redx.drop( ['movieId', 'title', 'genres'], axis=1, inplace=True)
_____no_output_____
ADSL
notebooks/evaluation_drm_ec.ipynb
DRyanMiller/Echo_Chamber
Get Cluster Top 100
cluster_top_100s = [] for cluster in preds: top_100 = centroid_ratings_redx[cluster].sort_values( ascending=False).head(100) top_100_df = pd.DataFrame(top_100) top_100_df.reset_index(inplace=True) cluster_top_100s.append(top_100_df)
_____no_output_____
ADSL
notebooks/evaluation_drm_ec.ipynb
DRyanMiller/Echo_Chamber
Evaluation Metric 1: Proportion of Top 100 Movies Shared between the ALS recommendations and the User's Cluster Centroid The first metric evaluates the similarity between the top 100 ALS recommendations (based on predicted user ratings) and those generated by the user's cluster (based on the cluster centroid's ratings). The proportion of the ALS recommended movies also found in the cluster recommendations should be high indicating the user's cluster is representative of the user's movie preferences.
proportions1 = []
for i in range(len(cluster_top_100s)):
    als_set = set(als_top_100s[i].iloc[:, 0])
    cluster_set = set(cluster_top_100s[i].iloc[:, 0])
    intersection = als_set.intersection(cluster_set)
    n_in_common = len(intersection)
    proportion_in_common = (n_in_common/100)
    proportions1.append(proportion_in_common)

plt.figure(figsize=(12, 6))
plt.title('Proportion of Movies Common to a User\'s ALS Recommendations and a User\'s Own Cluster\'s Recommendations')
plt.ylabel('Frequencies')
plt.xlabel('Proportion of Movies in Common')
plt.hist(proportions1, bins=20)
plt.savefig('../reports/figures/Metric1.png')
plt.show()
_____no_output_____
ADSL
notebooks/evaluation_drm_ec.ipynb
DRyanMiller/Echo_Chamber
Evaluation Metric 2: Proportion of Top 100 Movies Shared between the ALS and Augmented Recommendations The second metric evaluates the similarity between the top 100 ALS recommendations (based on predicted user ratings) and the top recommendations from the augmented model. The proportion of the ALS recommended movies also found in the augmented model recommendations should be low indicating the recommendations from the augmented model differ from those produced by the ALS model.
cluster_distances = pd.read_csv(
    '../data/processed/cluster_distances_df.csv', index_col='Unnamed: 0')
cluster_distances.head()

cm.get_nearest_clusters(cluster_distances, '8')

proportions2 = []
for i in range(len(cluster_top_100s)):
    j = preds[i]
    nearest_clusters = cm.get_nearest_clusters(
        cluster_distances, '{}'.format(j))
    cluster1 = nearest_clusters[0]
    cluster2 = nearest_clusters[1]
    als_set = set(als_top_100s[i].iloc[:, 0])
    cluster1_set = set(cluster_top_100s[cluster1].iloc[:, 0])
    cluster2_set = set(cluster_top_100s[cluster2].iloc[:, 0])
    cluster_full = cluster1_set.union(cluster2_set)
    intersection = als_set.intersection(cluster_full)
    n_in_common = len(intersection)
    proportion_in_common = (n_in_common/100)
    proportions2.append(proportion_in_common)

plt.figure(figsize=(12, 6))
plt.title('Proportion of Movies Common to a User\'s ALS and Augmented Recommendations')
plt.ylabel('Frequencies')
plt.xlabel('Proportion of Movies in Common')
plt.hist(proportions2, bins=20)
plt.savefig('../reports/figures/Metric2.png')
plt.show()
_____no_output_____
ADSL
notebooks/evaluation_drm_ec.ipynb
DRyanMiller/Echo_Chamber
Evaluation Metric 3: Distances Between ALS Recommended Movies and Nearest Cluster Recommended Movies

In addition to recommending a different set of movies than the ALS model, the goal of the augmented model is to recommend movies that are qualitatively more diverse. To test for differences in the qualities of the movies recommended by each model, the third evaluation metric assesses the distances between movies. For each user, the mean squared distance between movies recommended by the ALS model is calculated. Then, the mean squared distance between movies recommended by the ALS model and movies recommended by the augmented model is calculated for each user. For the sample, the difference in the two mean squared distance calculations is tested using a t-test. A negative and statistically significant t-statistic indicates the mean squared distance between the two sets of recommendations is greater than the mean squared distance within the ALS recommendations and, thus, the movies from the augmented model are qualitatively different from the movies from the ALS model.
movie_distances = cm.get_cluster_distances(item_fac) movie_distances.columns = item_fac.index movie_distances.index = item_fac.index nearest_clusters_top = [] for i in range(len(cluster_top_100s)): j = preds[i] nearest_clusters = cm.get_nearest_clusters( cluster_distances, '{}'.format(j)) cluster1 = nearest_clusters[0] cluster2 = nearest_clusters[1] cluster1_set = set(cluster_top_100s[cluster1].iloc[:, 0]) cluster2_set = set(cluster_top_100s[cluster2].iloc[:, 0]) cluster_full_set = cluster1_set.union(cluster2_set) cluster_full_list = list(cluster_full_set) nearest_clusters_top.append(cluster_full_list) MSS_distances_within = [] for i in range(len(als_top_100s)): within_distances = [] for j in als_top_100s[i].iloc[:, 0]: for k in als_top_100s[i].iloc[:, 0]: if j == k: pass else: within_distances.append(movie_distances[j][k]) sq_within_distances = [x**2 for x in within_distances] MSS_distances = np.mean(sq_within_distances) MSS_distances_within.append(MSS_distances) MSS_distances_between = [] for i in range(len(als_top_100s)): between_distances = [] for j in als_top_100s[i].iloc[:, 0]: for k in nearest_clusters_top[i]: between_distances.append(movie_distances[j][k]) sq_between_distances = [x**2 for x in between_distances] MSS_distances = np.mean(sq_between_distances) MSS_distances_between.append(MSS_distances) stats.ttest_ind(MSS_distances_within, MSS_distances_between, equal_var=False)
_____no_output_____
ADSL
notebooks/evaluation_drm_ec.ipynb
DRyanMiller/Echo_Chamber
Evaluation: Difference in Distances Between ALS Recommended Movies and Nearest Cluster Recommended Movies and the Distances Between ALS Recommended Movies and Furthest Cluster Recommended Movies

The goal of the augmented model is to recommend movies that are qualitatively different, but not too different, from those provided by the ALS model. Recommendations that are too different will be ignored or, worse, could result in the user no longer using the recommendation service. To test that the recommendations provided by the augmented model are not too different from the ALS recommendations, metric 4 compares the difference between the ALS and augmented recommendations to the difference between the ALS recommendations and those provided by the two clusters furthest from the user's cluster. Recommendations from the furthest clusters should be more different than recommendations from the nearest clusters (i.e., the augmented model). For the sample, the difference in differences is tested using a t-test. A positive and statistically significant t-statistic indicates the difference between the ALS recommendations and the furthest cluster recommendations is greater than the difference between the ALS recommendations and the augmented recommendations. If the difference is positive and statistically significant, then the augmented recommendations are not as different as the recommendations from the furthest clusters and therefore are not too different from the ALS recommendations.
furthest_clusters_top = [] for i in range(len(cluster_top_100s)): j = preds[i] nearest_clusters = cm.get_furthest_clusters( cluster_distances, '{}'.format(j)) cluster1 = nearest_clusters[0] cluster2 = nearest_clusters[1] cluster1_set = set(cluster_top_100s[cluster1].iloc[:, 0]) cluster2_set = set(cluster_top_100s[cluster2].iloc[:, 0]) cluster_full_set = cluster1_set.union(cluster2_set) cluster_full_list = list(cluster_full_set) furthest_clusters_top.append(cluster_full_list) MSS_distances_between_furthest = [] for i in range(len(als_top_100s)): within_distances = [] for j in als_top_100s[i].iloc[:, 0]: for k in furthest_clusters_top[i]: within_distances.append(movie_distances[j][k]) sq_within_distances = [x**2 for x in within_distances] MSS_distances = np.mean(sq_within_distances) MSS_distances_between_furthest.append(MSS_distances) within_between_differences = np.subtract( MSS_distances_between, MSS_distances_within) within_furthest_differences = np.subtract( MSS_distances_between_furthest, MSS_distances_within) stats.ttest_ind(within_furthest_differences, within_between_differences, equal_var=False)
_____no_output_____
ADSL
notebooks/evaluation_drm_ec.ipynb
DRyanMiller/Echo_Chamber
PyImageJ Tutorial

This notebook covers how to use ImageJ as a library from Python. A major advantage of this approach is the ability to combine ImageJ with other tools available from the Python software ecosystem, including NumPy, SciPy, scikit-image, CellProfiler, OpenCV, ITK and more. This notebook assumes familiarity with the ImageJ API. Detailed tutorials in that regard can be found in the other notebooks.

7 Visualizing large images

Before we begin: how much memory is Java using right now?
from scyjava import jimport

Runtime = jimport('java.lang.Runtime')

def java_mem():
    rt = Runtime.getRuntime()
    mem_max = rt.maxMemory()
    mem_used = rt.totalMemory() - rt.freeMemory()
    return '{} of {} MB ({}%)'.format(int(mem_used)/2**20, int(mem_max/2**20), int(100*mem_used/mem_max))

java_mem()
_____no_output_____
Apache-2.0
doc/7-Working-with-Large-Images.ipynb
hinerm/pyimagej
Now let's open an obnoxiously huge synthetic dataset:
big_data = ij.scifio().datasetIO().open('lotsofplanes&lengths=512,512,16,1000,10000&axes=X,Y,Channel,Z,Time.fake')
_____no_output_____
Apache-2.0
doc/7-Working-with-Large-Images.ipynb
hinerm/pyimagej
How many total samples does this image have?
import numpy as np dims = [big_data.dimension(d) for d in range(big_data.numDimensions())] pix = np.prod(dims) str(pix/2**40) + " terapixels"
_____no_output_____
Apache-2.0
doc/7-Working-with-Large-Images.ipynb
hinerm/pyimagej
And how much did memory usage in Java increase?
java_mem()
_____no_output_____
Apache-2.0
doc/7-Working-with-Large-Images.ipynb
hinerm/pyimagej
Let's visualize this beast. First, we define a function for slicing out a single plane:
def plane(image, pos):
    while image.numDimensions() > 2:
        image = ij.op().transform().hyperSliceView(image, image.numDimensions() - 1, pos[-1])
        pos.pop()
    return ij.py.from_java(image)

ij.py.show(plane(big_data, [0, 0, 0]))
_____no_output_____
Apache-2.0
doc/7-Working-with-Large-Images.ipynb
hinerm/pyimagej
But we can do better. Let's provide some interaction. First, a function to extract the _non-planar_ axes as a dict:
def axes(dataset):
    axes = {}
    for d in range(2, dataset.numDimensions()):
        axis = dataset.axis(d)
        label = axis.type().getLabel()
        length = dataset.dimension(d)
        axes[label] = length
    return axes

axes(big_data)

import ipywidgets, matplotlib

widgets = {}
for label, length in axes(big_data).items():
    label = str(label)  # HINT: Convert Java string to a python string to use with ipywidgets.
    widgets[label] = ipywidgets.IntSlider(description=label, max=length-1)
widgets

def f(**kwargs):
    matplotlib.pyplot.imshow(plane(big_data, list(kwargs.values())), cmap='gray')

ipywidgets.interact(f, **widgets);
_____no_output_____
Apache-2.0
doc/7-Working-with-Large-Images.ipynb
hinerm/pyimagej
Example 2.1: Fourier series
import numpy as np                    # Import numpy
from matplotlib import pyplot as plt  # pyplot module for plotting
_____no_output_____
MIT
python/jupyterNotebooks/Example 2_1 Fourier Series.ipynb
oiseth/TKT4108StructuralDynamics2
Define waveform

We start by defining a square waveform. We will integrate the waveform over one period. We have, however, plotted several periods to illustrate the behaviour of the Fourier series. The blue line in the figure shows four periods of the time series that we will approximate, while the brown line shows the single period that we will integrate.
dt = 0.01                    # Time step
t = np.arange(0, 10.01, dt)  # time vector
x = np.zeros(t.shape)        # Initialize the x array
x[t < 5] = 1.0               # Set the value of x to one for t<5

# Plot waveform
plt.figure()
plt.plot(np.hstack((t-20.0, t-10.0, t, t+10.0)), np.hstack((x, x, x, x)));  # Plot four periods
plt.plot(t, x);  # Plot one period
plt.ylim(-2, 2);
plt.xlim(-20, 20);
plt.grid();
plt.xlabel('$t$');
plt.ylabel('$X(t)$');
_____no_output_____
MIT
python/jupyterNotebooks/Example 2_1 Fourier Series.ipynb
oiseth/TKT4108StructuralDynamics2
Alternative 1: Obtaining Fourier coefficients expressed by sin() and cos() using the trapezoidal rule

A Fourier series expressed in terms of sine and cosine functions is defined by

$$ X(t) = a_{0} + \sum_{k=1}^{\infty} \left( a_k \cos\left(\frac{2\pi k}{T}t \right) + b_k \sin\left(\frac{2\pi k}{T}t \right)\right) $$

Here $a_0$, $a_k$ and $b_k$ are Fourier coefficients given by

$$a_0 = \frac{1}{T} \int_{0}^{T}X(t)dt$$

$$a_k = \frac{1}{T} \int_{0}^{T}X(t)\cos\left(\frac{2\pi k}{T}t \right)dt$$

$$b_k = \frac{1}{T} \int_{0}^{T}X(t)\sin\left(\frac{2\pi k}{T}t \right)dt$$

The integrals above can be solved analytically or by numerical integration. In this example, we will consider different methods of numerical integration to obtain the coefficients, and we use the trapezoidal rule in this section.
nterms = 50              # Number of Fourier coefficients
T = np.max(t)            # The period of the waveform
a0 = 1/T*np.trapz(x, t)  # Mean value

ak = np.zeros((nterms))
bk = np.zeros((nterms))
for k in range(nterms):  # Integrate for all terms
    ak[k] = 1/T*np.trapz(x*np.cos(2.0*np.pi*(k+1.0)*t/T), t)
    bk[k] = 1/T*np.trapz(x*np.sin(2.0*np.pi*(k+1.0)*t/T), t)

# Plot Fourier coefficients
fig, axs = plt.subplots(nrows=1, ncols=2, constrained_layout=True)
ax1 = axs[0]
ax1.plot(np.arange(1, nterms+1), ak)
ax1.set_ylim(-1, 1)
ax1.grid()
ax1.set_ylabel('$a_k$');
ax1.set_xlabel('$k$');
ax2 = axs[1]
ax2.plot(np.arange(1, nterms+1), bk)
ax2.set_ylim(-1, 1)
ax2.grid()
ax2.set_ylabel('$b_k$');
ax2.set_xlabel('$k$');
_____no_output_____
MIT
python/jupyterNotebooks/Example 2_1 Fourier Series.ipynb
oiseth/TKT4108StructuralDynamics2
The mean value is 0.5, while the figures above show that the $a_k$ coefficients are all zero, that every second $b_k$ coefficient is zero, and that the nonzero terms become smaller as $k$ increases.

It is interesting to plot the Fourier approximation and see how its accuracy depends on the number of terms used in the approximation.
# Plot Fourier series approximation
tp = np.linspace(-20, 20, 1000)
X_Fourier = np.ones(tp.shape[0])*a0
for k in range(nterms):
    X_Fourier = X_Fourier + 2.0*(ak[k]*np.cos(2.0*np.pi*(k+1.0)*tp/T) + bk[k]*np.sin(2.0*np.pi*(k+1.0)*tp/T))

plt.figure(figsize=(8, 4))
plt.plot(np.hstack((t-20.0, t-10.0, t, t+10.0)), np.hstack((x, x, x, x)));  # Plot four periods
plt.plot(tp, X_Fourier, label=('Fourier approximation Nterms='+str(nterms)));
plt.ylim(-2, 2)
plt.xlim(-20, 20)
plt.grid()
plt.xlabel('$t$')
plt.ylabel('$X(t)$')
plt.legend();
_____no_output_____
MIT
python/jupyterNotebooks/Example 2_1 Fourier Series.ipynb
oiseth/TKT4108StructuralDynamics2
The figure above shows that the Fourier approximation fits the waveform reasonably well and that the approximation gets better as more terms are added. Try to change the number of terms yourself and observe how the approximation improves. Also, note the high-frequency oscillations observed in the corners of the Fourier approximation. This is called the Gibbs phenomenon. Note that it is impossible to get rid of these oscillations and that a Fourier series will always slightly overshoot when approximating step changes.

Alternative 2: Trapezoidal rule, complex Fourier series

This approach is the same as the example above, but now we use the complex form of the Fourier series, which is essentially only a rewriting of the formula using Euler's formula $e^{i \omega t} = \cos(\omega t) + i \sin(\omega t)$. We define

$$x_k = a_k - ib_k $$

$$ e^{-i\left(\frac{2\pi kt}{T} \right)} = \cos\left(\frac{2\pi k}{T}t \right) - i \sin\left(\frac{2\pi k}{T}t \right) $$

$$x_k = \frac{1}{T} \int_{0}^{T}X(t) \left( \cos\left(\frac{2\pi k}{T}t \right) - i \sin\left(\frac{2\pi k}{T}t \right) \right)dt$$

$$x_k = \frac{1}{T} \int_{0}^{T} X(t) e^{-i\left(\frac{2\pi kt}{T} \right)}dt$$

Note that the mean value is obtained when $k=0$ and that both negative and positive $k$ values are necessary to cancel the imaginary part of the approximation. We will now calculate the Fourier coefficients using the trapezoidal rule.
nterms = t.shape[0]  # The number of terms in the Fourier series
Xk = np.zeros(nterms, dtype=complex)
for k in range(nterms):
    Xk[k] = 1/T*np.trapz(x*np.exp(-1j*2*np.pi/T*k*t), t)

# Plot Fourier coefficients
fig, axs = plt.subplots(nrows=1, ncols=2, constrained_layout=True)
ax1 = axs[0]
ax1.plot(np.arange(0, nterms), np.real(Xk))
ax1.set_ylim(-1, 1)
ax1.set_xlim(0, 10)
ax1.grid()
ax1.set_ylabel('$Re(X_k)$');
ax1.set_xlabel('$k$');
ax2 = axs[1]
ax2.plot(np.arange(0, nterms), np.imag(Xk))
ax2.set_ylim(-1, 1)
ax2.set_xlim(0, 10)
ax2.grid()
ax2.set_ylabel('$Imag(X_k)$');
ax2.set_xlabel('$k$');
_____no_output_____
MIT
python/jupyterNotebooks/Example 2_1 Fourier Series.ipynb
oiseth/TKT4108StructuralDynamics2
The figures above and the defined relation $x_k=a_k-ib_k$ show that we get the same Fourier coefficients as in the case above and that this is only a rewriting of the Fourier approximation.

Alternative 3: Left rectangular rule, complex Fourier series

The function is assumed constant and equal to the left side of the rectangle when integrating by the left rectangular rule. This is what is typically implemented in software as the discrete Fourier transform. The integral using the left rectangular rule can be expressed as

$$x_k = \frac{1}{T} \int_{0}^{T} X(t) e^{-i\left(\frac{2\pi kt}{T} \right)}dt$$

$$x_k = \frac{1}{T} \sum_{r=0}^{N-1} X_r e^{-i\left(\frac{2\pi k r\Delta t}{T} \right)} \Delta t$$

$$x_k = \frac{1}{N} \sum_{r=0}^{N-1} X_r e^{-i\left(\frac{2\pi k r}{N} \right)}$$
N = t.shape[0]  # The number of terms in the Fourier series
Xk = np.zeros(N, dtype=complex)
for k in range(N):
    Xk[k] = 1/N*np.matmul(x, np.exp(-1j*2*np.pi/N*k*np.arange(N)))

# Plot Fourier coefficients
fig, axs = plt.subplots(nrows=1, ncols=2, constrained_layout=True)
ax1 = axs[0]
ax1.plot(np.arange(0, N), np.real(Xk))
ax1.set_ylim(-1, 1)
ax1.set_xlim(0, 10)
ax1.grid()
ax1.set_ylabel('$Re(X_k)$');
ax1.set_xlabel('$k$');
ax2 = axs[1]
ax2.plot(np.arange(0, N), np.imag(Xk))
ax2.set_ylim(-1, 1)
ax2.set_xlim(0, 10)
ax2.grid()
ax2.set_ylabel('$Imag(X_k)$');
ax2.set_xlabel('$k$');
_____no_output_____
MIT
python/jupyterNotebooks/Example 2_1 Fourier Series.ipynb
oiseth/TKT4108StructuralDynamics2
Alternative 4: The fast Fourier transform

The discrete Fourier transform can be implemented in a clever, time-saving way. This implementation is called the fast Fourier transform (FFT); it is implemented in many software packages and is available in the NumPy package. The Fourier coefficients can be obtained using the FFT as shown below.
N = t.shape[0]  # The number of terms in the Fourier series
Xk = np.fft.fft(x)/N

# Plot Fourier coefficients
fig, axs = plt.subplots(nrows=1, ncols=2, constrained_layout=True)
ax1 = axs[0]
ax1.plot(np.arange(0, N), np.real(Xk))
ax1.set_ylim(-1, 1)
ax1.set_xlim(0, 10)
ax1.grid()
ax1.set_ylabel('$Re(X_k)$');
ax1.set_xlabel('$k$');
ax2 = axs[1]
ax2.plot(np.arange(0, N), np.imag(Xk))
ax2.set_ylim(-1, 1)
ax2.set_xlim(0, 10)
ax2.grid()
ax2.set_ylabel('$Imag(X_k)$');
ax2.set_xlabel('$k$');
_____no_output_____
MIT
python/jupyterNotebooks/Example 2_1 Fourier Series.ipynb
oiseth/TKT4108StructuralDynamics2
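As a supplementary check (not part of the original notebook), the coefficients from the FFT can be inverted back to the waveform with NumPy's inverse FFT; a minimal sketch, assuming `x` and `Xk` are defined as in the cell above:

```python
import numpy as np

# reconstruct the waveform from the (normalized) Fourier coefficients
x_rec = np.fft.ifft(Xk * Xk.shape[0])

# the imaginary parts are only round-off noise, and the real part matches x
print(np.max(np.abs(np.imag(x_rec))))   # ~1e-16
print(np.allclose(np.real(x_rec), x))   # True
```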
Transfer Learning Tutorial

**Author**: `Sasank Chilamkurthy `_

In this tutorial, you will learn how to train your network using transfer learning. You can read more about transfer learning in the `cs231n notes `__

Quoting these notes,

 In practice, very few people train an entire Convolutional Network from scratch (with random initialization), because it is relatively rare to have a dataset of sufficient size. Instead, it is common to pretrain a ConvNet on a very large dataset (e.g. ImageNet, which contains 1.2 million images with 1000 categories), and then use the ConvNet either as an initialization or a fixed feature extractor for the task of interest.

These two major transfer learning scenarios look as follows:

- **Finetuning the convnet**: Instead of random initialization, we initialize the network with a pretrained network, like the one that is trained on the imagenet 1000 dataset. The rest of the training looks as usual.
- **ConvNet as fixed feature extractor**: Here, we will freeze the weights for all of the network except that of the final fully connected layer. This last fully connected layer is replaced with a new one with random weights and only this layer is trained.
import os import time import torch import torch.nn as nn import torch.optim as optim from torch.optim import lr_scheduler import torchvision import torchvision.models as models import torchvision.datasets as datasets import torchvision.transforms as transforms import matplotlib.pyplot as plt import numpy as np import copy plt.ion() # interactive mode
_____no_output_____
MIT
research/tutorials/transfer_learning/transfer_learning_tutorial.ipynb
Nhat-Minh-Hoang-Tran-BSC2021/Intel-Convolutional_Neural_Networks
Load Data

We will use the `torchvision` and `torch.utils.data` packages for loading the data.

The problem we're going to solve today is to train a model to classify **ants** and **bees**. We have about 120 training images each for ants and bees. There are 75 validation images for each class. Usually, this is a very small dataset to generalize upon, if trained from scratch. Since we are using transfer learning, we should be able to generalize reasonably well.

This dataset is a very small subset of imagenet.

Note: Download the data from `here `_ and extract it to the current directory.
# Data augmentation and normalization for training
# Just normalization for validation
data_transforms = {
    'train': transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
    'val': transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
}

data_dir = '/media/storage/datasets/cat_vs_dog'
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x])
                  for x in ['train', 'val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4, shuffle=True, num_workers=4)
               for x in ['train', 'val']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
class_names = image_datasets['train'].classes

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
_____no_output_____
MIT
research/tutorials/transfer_learning/transfer_learning_tutorial.ipynb
Nhat-Minh-Hoang-Tran-BSC2021/Intel-Convolutional_Neural_Networks
Visualize a few images

Let's visualize a few training images so as to understand the data augmentations.
def imshow(inp, title=None):
    """Imshow for Tensor."""
    inp = inp.numpy().transpose((1, 2, 0))  # Convert [channel, height, width] --> [height, width, channel]
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    inp = std * inp + mean
    inp = np.clip(inp, 0, 1)
    plt.imshow(inp)
    if title is not None:
        plt.title(title)
    plt.pause(0.001)  # pause a bit so that plots are updated

# Get a batch of training data
inputs, classes = next(iter(dataloaders['train']))

# Make a grid from batch
out = torchvision.utils.make_grid(inputs)

imshow(out, title=[class_names[x] for x in classes])
_____no_output_____
MIT
research/tutorials/transfer_learning/transfer_learning_tutorial.ipynb
Nhat-Minh-Hoang-Tran-BSC2021/Intel-Convolutional_Neural_Networks
Training the model

Now, let's write a general function to train a model. Here, we will illustrate:

- Scheduling the learning rate
- Saving the best model

In the following, parameter ``scheduler`` is an LR scheduler object from ``torch.optim.lr_scheduler``.
def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
    since = time.time()

    best_model_wts = copy.deepcopy(model.state_dict())
    best_acc = 0.0

    for epoch in range(num_epochs):
        print(f'Epoch {epoch}/{num_epochs - 1}')
        print('-' * 10)

        # Each epoch has a training and validation phase
        for phase in ['train', 'val']:
            if phase == 'train':
                scheduler.step()
                model.train()  # Set model to training mode
            else:
                model.eval()   # Set model to evaluate mode

            running_loss = 0.0
            running_corrects = 0

            # Iterate over data.
            for inputs, labels in dataloaders[phase]:
                inputs = inputs.to(device)
                labels = labels.to(device)

                # zero the parameter gradients
                optimizer.zero_grad()

                # forward
                # track history if only in train
                with torch.set_grad_enabled(phase == 'train'):
                    outputs = model(inputs)
                    _, preds = torch.max(outputs, 1)
                    loss = criterion(outputs, labels)

                    # backward + optimize only if in training phase
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()

                # statistics
                running_loss += loss.item() * inputs.size(0)
                running_corrects += torch.sum(preds == labels.data)

            epoch_loss = running_loss / dataset_sizes[phase]
            epoch_acc = running_corrects.double() / dataset_sizes[phase]

            print(f'{phase} Loss: {epoch_loss:.4f} Acc: {epoch_acc:.4f}')

            # deep copy the model
            if phase == 'val' and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())

        print()

    time_elapsed = time.time() - since
    print(f'Training complete in {time_elapsed // 60:.0f}m {time_elapsed % 60:.0f}s')
    print(f'Best val Acc: {best_acc:4f}')

    # load best model weights
    model.load_state_dict(best_model_wts)
    return model
_____no_output_____
MIT
research/tutorials/transfer_learning/transfer_learning_tutorial.ipynb
Nhat-Minh-Hoang-Tran-BSC2021/Intel-Convolutional_Neural_Networks
Visualizing the model predictions

Generic function to display predictions for a few images
def visualize_model(model, num_images=6): was_training = model.training model.eval() images_so_far = 0 fig = plt.figure() with torch.no_grad(): for i, (inputs, labels) in enumerate(dataloaders['val']): inputs = inputs.to(device) labels = labels.to(device) outputs = model(inputs) _, preds = torch.max(outputs, 1) for j in range(inputs.size()[0]): images_so_far += 1 ax = plt.subplot(num_images//2, 2, images_so_far) ax.axis('off') ax.set_title('predicted: {}'.format(class_names[preds[j]])) imshow(inputs.cpu().data[j]) if images_so_far == num_images: model.train(mode=was_training) return model.train(mode=was_training)
_____no_output_____
MIT
research/tutorials/transfer_learning/transfer_learning_tutorial.ipynb
Nhat-Minh-Hoang-Tran-BSC2021/Intel-Convolutional_Neural_Networks
Finetuning the convnet

Load a pretrained model and reset the final fully connected layer.
model_ft = models.resnet18(pretrained=True)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, 2)

model_ft = model_ft.to(device)

criterion = nn.CrossEntropyLoss()

# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)

# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
_____no_output_____
MIT
research/tutorials/transfer_learning/transfer_learning_tutorial.ipynb
Nhat-Minh-Hoang-Tran-BSC2021/Intel-Convolutional_Neural_Networks
Train and evaluate

It should take around 15-25 min on CPU. On GPU though, it takes less than a minute.
model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=25) visualize_model(model_ft)
_____no_output_____
MIT
research/tutorials/transfer_learning/transfer_learning_tutorial.ipynb
Nhat-Minh-Hoang-Tran-BSC2021/Intel-Convolutional_Neural_Networks
ConvNet as fixed feature extractor

Here, we need to freeze all the network except the final layer. We need to set ``requires_grad == False`` to freeze the parameters so that the gradients are not computed in ``backward()``.

You can read more about this in the documentation `here `__.
model_conv = torchvision.models.resnet18(pretrained=True)
for param in model_conv.parameters():
    param.requires_grad = False

# Parameters of newly constructed modules have requires_grad=True by default
num_ftrs = model_conv.fc.in_features
model_conv.fc = nn.Linear(num_ftrs, 2)

model_conv = model_conv.to(device)

criterion = nn.CrossEntropyLoss()

# Observe that only parameters of final layer are being optimized as
# opposed to before.
optimizer_conv = optim.SGD(model_conv.fc.parameters(), lr=0.001, momentum=0.9)

# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_conv, step_size=7, gamma=0.1)
_____no_output_____
MIT
research/tutorials/transfer_learning/transfer_learning_tutorial.ipynb
Nhat-Minh-Hoang-Tran-BSC2021/Intel-Convolutional_Neural_Networks
Train and evaluate

On CPU this will take about half the time compared to the previous scenario. This is expected as gradients don't need to be computed for most of the network. However, the forward pass does need to be computed.
model_conv = train_model(model_conv, criterion, optimizer_conv, exp_lr_scheduler, num_epochs=25) visualize_model(model_conv) plt.ioff() plt.show()
_____no_output_____
MIT
research/tutorials/transfer_learning/transfer_learning_tutorial.ipynb
Nhat-Minh-Hoang-Tran-BSC2021/Intel-Convolutional_Neural_Networks
Plot estimated and observed clade frequencies

Plot the overall observed clade frequencies compared to the estimated frequencies at each timepoint. The differences between these frequencies tell us something about the error in frequency estimation due to missing data from the near future.
from collections import defaultdict import json import matplotlib.pyplot as plt import matplotlib.gridspec as gridspec import networkx as nx import numpy as np import pandas as pd import seaborn as sns %matplotlib inline plt.style.use("huddlej") !pwd with open("../results/builds/h3n2/5_viruses_per_month/sample_0/2005-10-01--2015-10-01/timepoints/2015-10-01/segments/ha/frequencies.json", "r") as fh: frequencies = json.load(fh) with open("../results/builds/h3n2/5_viruses_per_month/sample_0/2005-10-01--2015-10-01/timepoints/2015-10-01/segments/ha/clades.json", "r") as fh: clades = json.load(fh) tips = pd.read_csv("../results/builds/h3n2/5_viruses_per_month/sample_0/2005-10-01--2015-10-01/standardized_tip_attributes.tsv", sep="\t", parse_dates=["timepoint"]) tips = tips.loc[ tips["segment"] == "ha", ["strain", "clade_membership", "timepoint", "frequency"] ].copy() data_path = "../results/builds/h3n2/5_viruses_per_month/sample_0/2005-10-01--2015-10-01/tips_to_clades.tsv" df = pd.read_csv(data_path, sep="\t", parse_dates=["timepoint"]) # successful clade clade_name = "d4aa5d5" # unsuccessful clade #clade_name = "5f0cf16" clade_tips = df[df["clade_membership"] == clade_name]["tip"].unique() clade_tips.shape df["tip"].unique().shape clade_tips.shape estimated_clade_frequencies = tips[tips["strain"].isin(clade_tips)].groupby("timepoint")["frequency"].sum().reset_index() estimated_clade_frequencies["timepoint_float"] = estimated_clade_frequencies["timepoint"].dt.year + (estimated_clade_frequencies["timepoint"].dt.month - 1) / 12.0 estimated_clade_frequencies clade_frequencies = np.zeros_like(frequencies["data"]["pivots"]) for tip in clade_tips: clade_frequencies += frequencies["data"]["frequencies"][tip] clade_frequencies fig, ax = plt.subplots(1, 1, figsize=(8, 6)) ax.plot(frequencies["data"]["pivots"], clade_frequencies, "o-") ax.plot(estimated_clade_frequencies["timepoint_float"], estimated_clade_frequencies["frequency"], "o") ax.set_xlabel("Date") ax.set_ylabel("Frequency") #ax.set_ylim(0, 1) tips[tips["strain"].isin(clade_tips)] found_clade_tips = tips[tips["strain"].isin(clade_tips)]["strain"].unique() set(clade_tips) - set(found_clade_tips) tips[tips["strain"] == "A/Kenya/230/2012"] df[df["clade_membership"] == "5f0cf16"]
_____no_output_____
MIT
analyses/2019-03-19-plot-estimated-and-observed-clade-frequencies.ipynb
blab/flu-forecasting
Python Essentials

Contents

- [Python Essentials](Python-Essentials)
  - [Overview](Overview)
  - [Data Types](Data-Types)
  - [Input and Output](Input-and-Output)
  - [Iterating](Iterating)
  - [Comparisons and Logical Operators](Comparisons-and-Logical-Operators)
  - [More Functions](More-Functions)
  - [Coding Style and PEP8](Coding-Style-and-PEP8)
  - [Exercises](Exercises)
  - [Solutions](Solutions)

Overview

We have covered a lot of material quite quickly, with a focus on examples. Now let's cover some core features of Python in a more systematic way. This approach is less exciting but helps clear up some details.

Data Types

Computer programs typically keep track of a range of data types. For example, `1.5` is a floating point number, while `1` is an integer. Programs need to distinguish between these two types for various reasons. One is that they are stored in memory differently. Another is that arithmetic operations are different

- For example, floating point arithmetic is implemented on most machines by a specialized Floating Point Unit (FPU).

In general, floats are more informative but arithmetic operations on integers are faster and more accurate. Python provides numerous other built-in Python data types, some of which we've already met

- strings, lists, etc.

Let's learn a bit more about them.

Primitive Data Types

One simple data type is **Boolean values**, which can be either `True` or `False`
x = True x
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
We can check the type of any object in memory using the `type()` function.
type(x)
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
In the next line of code, the interpreter evaluates the expression on the right of = and binds y to this value
y = 100 < 10 y type(y)
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
In arithmetic expressions, `True` is converted to `1` and `False` is converted to `0`. This is called **Boolean arithmetic** and is often useful in programming. Here are some examples
x + y
x * y
True + True
bools = [True, True, False, True]  # List of Boolean values
sum(bools)
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
Complex numbers are another primitive data type in Python
x = complex(1, 2) y = complex(2, 1) print(x * y) type(x)
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
Containers

Python has several basic types for storing collections of (possibly heterogeneous) data. We've [already discussed lists](https://python-programming.quantecon.org/python_by_example.html#lists-ref). A related data type is **tuples**, which are "immutable" lists
x = ('a', 'b')  # Parentheses instead of the square brackets
x = 'a', 'b'    # Or no brackets --- the meaning is identical
x
type(x)
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
In Python, an object is called **immutable** if, once created, the object cannot be changed.Conversely, an object is **mutable** if it can still be altered after creation.Python lists are mutable
x = [1, 2] x[0] = 10 x
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
But tuples are not
x = (1, 2) x[0] = 10
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
We'll say more about the role of mutable and immutable data a bit later. Tuples (and lists) can be "unpacked" as follows
integers = (10, 20, 30) x, y, z = integers x y
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
You've actually [seen an example of this](https://python-programming.quantecon.org/about_py.html#tuple-unpacking-example) already. Tuple unpacking is convenient and we'll use it often.

Slice Notation

To access multiple elements of a list or tuple, you can use Python's slice notation. For example,
a = [2, 4, 6, 8] a[1:] a[1:3]
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
The general rule is that `a[m:n]` returns `n - m` elements, starting at `a[m]`. Negative numbers are also permissible
a[-2:] # Last two elements of the list
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
The same slice notation works on tuples and strings
s = 'foobar' s[-3:] # Select the last three elements
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
Sets and Dictionaries

Two other container types we should mention before moving on are [sets](https://docs.python.org/3/tutorial/datastructures.html#sets) and [dictionaries](https://docs.python.org/3/tutorial/datastructures.html#dictionaries). Dictionaries are much like lists, except that the items are named instead of numbered
d = {'name': 'Frodo', 'age': 33} type(d) d['age']
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
The names `'name'` and `'age'` are called the *keys*. The objects that the keys are mapped to (`'Frodo'` and `33`) are called the *values*. Sets are unordered collections without duplicates, and set methods provide the usual set-theoretic operations
s1 = {'a', 'b'} type(s1) s2 = {'b', 'c'} s1.issubset(s2) s1.intersection(s2)
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
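As a small supplementary sketch (not part of the lecture), you can step through a dictionary's keys and values together with the `.items()` method:

```python
d = {'name': 'Frodo', 'age': 33}

# each iteration yields a (key, value) pair
for key, value in d.items():
    print(f'{key} -> {value}')
```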
The `set()` function creates sets from sequences
s3 = set(('foo', 'bar', 'foo')) s3
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
Input and Output

Let's briefly review reading and writing to text files, starting with writing
f = open('newfile.txt', 'w')  # Open 'newfile.txt' for writing
f.write('Testing\n')          # Here '\n' means new line
f.write('Testing again')
f.close()
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
Here

- The built-in function `open()` creates a file object for writing to.
- Both `write()` and `close()` are methods of file objects.

Where is this file that we've created? Recall that Python maintains a concept of the present working directory (pwd) that can be located from within Jupyter or IPython via
%pwd
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
If a path is not specified, then this is where Python writes to. We can also use Python to read the contents of `newfile.txt` as follows
f = open('newfile.txt', 'r') out = f.read() out print(out)
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
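As a supplementary note (not part of the lecture), the `with` statement is the idiomatic way to work with files because it closes the file automatically, even if an error occurs; a minimal sketch reading the file created above:

```python
# the file object is closed as soon as the block ends
with open('newfile.txt', 'r') as f:
    out = f.read()

print(out)
```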
Paths

Note that if `newfile.txt` is not in the present working directory then this call to `open()` fails. In this case, you can shift the file to the pwd or specify the [full path](https://en.wikipedia.org/wiki/Path_%28computing%29) to the file

```python3
f = open('insert_full_path_to_file/newfile.txt', 'r')
```

Iterating

One of the most important tasks in computing is stepping through a sequence of data and performing a given action. One of Python's strengths is its simple, flexible interface to this kind of iteration via the `for` loop.

Looping over Different Objects

Many Python objects are "iterable", in the sense that they can be looped over. To give an example, let's write the file us_cities.txt, which lists US cities and their population, to the present working directory.
%%file us_cities.txt
new york: 8244910
los angeles: 3819702
chicago: 2707120
houston: 2145146
philadelphia: 1536471
phoenix: 1469471
san antonio: 1359758
san diego: 1326179
dallas: 1223229
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
Here %%file is an [IPython cell magic](https://ipython.readthedocs.io/en/stable/interactive/magics.html#cell-magics). Suppose that we want to make the information more readable, by capitalizing names and adding commas to mark thousands. The program below reads the data in and makes the conversion:
data_file = open('us_cities.txt', 'r')

for line in data_file:
    city, population = line.split(':')   # Tuple unpacking
    city = city.title()                  # Capitalize city names
    population = f'{int(population):,}'  # Add commas to numbers
    print(city.ljust(15) + population)

data_file.close()
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
Here `format()` is a string method [used for inserting variables into strings](https://docs.python.org/3/library/string.html#formatspec). The reformatting of each line is the result of three different string methods, the details of which can be left till later. The interesting part of this program for us is line 2, which shows that

1. The file object `data_file` is iterable, in the sense that it can be placed to the right of `in` within a `for` loop.
1. Iteration steps through each line in the file.

This leads to the clean, convenient syntax shown in our program. Many other kinds of objects are iterable, and we'll discuss some of them later on.

Looping without Indices

One thing you might have noticed is that Python tends to favor looping without explicit indexing. For example,
x_values = [1, 2, 3]  # Some iterable x
for x in x_values:
    print(x * x)
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
is preferred to
for i in range(len(x_values)): print(x_values[i] * x_values[i])
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
When you compare these two alternatives, you can see why the first one is preferred. Python provides some facilities to simplify looping without indices. One is `zip()`, which is used for stepping through pairs from two sequences. For example, try running the following code
countries = ('Japan', 'Korea', 'China') cities = ('Tokyo', 'Seoul', 'Beijing') for country, city in zip(countries, cities): print(f'The capital of {country} is {city}')
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
The `zip()` function is also useful for creating dictionaries — for example
names = ['Tom', 'John'] marks = ['E', 'F'] dict(zip(names, marks))
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
If we actually need the index from a list, one option is to use `enumerate()`. To understand what `enumerate()` does, consider the following example
letter_list = ['a', 'b', 'c'] for index, letter in enumerate(letter_list): print(f"letter_list[{index}] = '{letter}'")
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
List Comprehensions

We can also simplify the code for generating the list of random draws considerably by using something called a *list comprehension*. [List comprehensions](https://en.wikipedia.org/wiki/List_comprehension) are an elegant Python tool for creating lists. Consider the following example, where the list comprehension is on the right-hand side of the second line
animals = ['dog', 'cat', 'bird'] plurals = [animal + 's' for animal in animals] plurals
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
Here’s another example
range(8) doubles = [2 * x for x in range(8)] doubles
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
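As a small supplement (not part of the lecture), a list comprehension can also filter with an `if` clause, which comes in handy for the exercises below; a minimal sketch:

```python
# keep only the even numbers in 0,...,9
evens = [x for x in range(10) if x % 2 == 0]
print(evens)  # [0, 2, 4, 6, 8]
```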
Comparisons and Logical Operators

Comparisons

Many different kinds of expressions evaluate to one of the Boolean values (i.e., `True` or `False`). A common type is comparisons, such as
x, y = 1, 2 x < y x > y
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
One of the nice features of Python is that we can *chain* inequalities
1 < 2 < 3 1 <= 2 <= 3
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
As we saw earlier, when testing for equality we use `==`
x = 1   # Assignment
x == 2  # Comparison
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
For “not equal” use `!=`
1 != 2
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
Note that when testing conditions, we can use **any** valid Python expression
x = 'yes' if 42 else 'no' x x = 'yes' if [] else 'no' x
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
What's going on here?

The rule is:

- Expressions that evaluate to zero, empty sequences or containers (strings, lists, etc.) and `None` are all equivalent to `False`.
  - for example, `[]` and `()` are equivalent to `False` in an `if` clause
- All other values are equivalent to `True`.
  - for example, `42` is equivalent to `True` in an `if` clause

Combining Expressions

We can combine expressions using `and`, `or` and `not`. These are the standard logical connectives (conjunction, disjunction and denial)
1 < 2 and 'f' in 'foo' 1 < 2 and 'g' in 'foo' 1 < 2 or 'g' in 'foo' not True not not True
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
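As a small supplementary sketch (not part of the lecture), the truthiness rules above can be checked directly with `bool()`:

```python
# values the rule classifies as falsy
print(bool(0), bool(''), bool([]), bool(None))  # False False False False

# everything else is truthy
print(bool(42), bool('foo'), bool([1, 2]))      # True True True
```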
Remember

- `P and Q` is `True` if both are `True`, else `False`
- `P or Q` is `False` if both are `False`, else `True`

More Functions

Let's talk a bit more about functions, which are all important for good programming style.

The Flexibility of Python Functions

As we discussed in the [previous lecture](https://python-programming.quantecon.org/python_by_example.html#python-by-example), Python functions are very flexible. In particular

- Any number of functions can be defined in a given file.
- Functions can be (and often are) defined inside other functions.
- Any object can be passed to a function as an argument, including other functions.
- A function can return any kind of object, including functions.

We already [gave an example](https://python-programming.quantecon.org/functions.html#test-program-6) of how straightforward it is to pass a function to a function. Note that a function can have arbitrarily many `return` statements (including zero). Execution of the function terminates when the first return is hit, allowing code like the following example
def f(x):
    if x < 0:
        return 'negative'
    return 'nonnegative'
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
Functions without a return statement automatically return the special Python object `None`.

Docstrings

Python has a system for adding comments to functions, modules, etc. called *docstrings*. The nice thing about docstrings is that they are available at run-time. Try running this
def f(x): """ This function squares its argument """ return x**2
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
After running this code, the docstring is available
f?
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
```ipython
Type:        function
String Form:
File:        /home/john/temp/temp.py
Definition:  f(x)
Docstring:   This function squares its argument
```
f??
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
```ipython
Type:        function
String Form:
File:        /home/john/temp/temp.py
Definition:  f(x)
Source:
def f(x):
    """
    This function squares its argument
    """
    return x**2
```

With one question mark we bring up the docstring, and with two we get the source code as well.

One-Line Functions: `lambda`

The `lambda` keyword is used to create simple functions on one line.

For example, the definitions
def f(x): return x**3
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
and
f = lambda x: x**3
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
are entirely equivalent.

To see why `lambda` is useful, suppose that we want to calculate $ \int_0^2 x^3 dx $ (and have forgotten our high-school calculus).

The SciPy library has a function called `quad` that will do this calculation for us.

The syntax of the `quad` function is `quad(f, a, b)` where `f` is a function and `a` and `b` are numbers.

To create the function $ f(x) = x^3 $ we can use `lambda` as follows
from scipy.integrate import quad quad(lambda x: x**3, 0, 2)
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
Here the function created by `lambda` is said to be *anonymous* because it was never given a name.

Keyword Arguments

In a [previous lecture](https://python-programming.quantecon.org/python_by_example.htmlpython-by-example), you came across the statement

```python3
plt.plot(x, 'b-', label="white noise")
```

In this call to Matplotlib’s `plot` function, notice that the last argument is passed in `name=argument` syntax.

This is called a *keyword argument*, with `label` being the keyword.

Non-keyword arguments are called *positional arguments*, since their meaning is determined by order

- `plot(x, 'b-', label="white noise")` is different from `plot('b-', x, label="white noise")`

Keyword arguments are particularly useful when a function has a lot of arguments, in which case it’s hard to remember the right order.

You can adopt keyword arguments in user-defined functions with no difficulty.

The next example illustrates the syntax
def f(x, a=1, b=1): return a + b * x
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
The keyword argument values we supplied in the definition of `f` become the default values
f(2)
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
They can be modified as follows
f(2, a=4, b=5)
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
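Because keyword arguments are matched by name rather than by position, the order in which they are supplied does not matter. A small sketch extending the example above (not part of the original lecture):

```python
# both calls pass a=4 and b=5, so both return 4 + 5 * 2 = 14
print(f(2, a=4, b=5))
print(f(2, b=5, a=4))
```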
Coding Style and PEP8

To learn more about the Python programming philosophy type `import this` at the prompt.

Among other things, Python strongly favors consistency in programming style.

We’ve all heard the saying about consistency and little minds.

In programming, as in mathematics, the opposite is true

- A mathematical paper where the symbols $ \cup $ and $ \cap $ were reversed would be very hard to read, even if the author told you so on the first page.

In Python, the standard style is set out in [PEP8](https://www.python.org/dev/peps/pep-0008/).

(Occasionally we’ll deviate from PEP8 in these lectures to better match mathematical notation)

Exercises

Solve the following exercises.

(For some, the built-in function `sum()` comes in handy).

Exercise 1

Part 1: Given two numeric lists or tuples `x_vals` and `y_vals` of equal length, compute their inner product using `zip()`.

Part 2: In one line, count the number of even numbers in 0,…,99.

- Hint: `x % 2` returns 0 if `x` is even, 1 otherwise.

Part 3: Given `pairs = ((2, 5), (4, 2), (9, 8), (12, 10))`, count the number of pairs `(a, b)` such that both `a` and `b` are even.

Exercise 2

Consider the polynomial

$$
p(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n = \sum_{i=0}^n a_i x^i \tag{5.1}
$$

Write a function `p` such that `p(x, coeff)` computes the value in [(5.1)](equation-polynom0) given a point `x` and a list of coefficients `coeff`.

Try to use `enumerate()` in your loop.

Exercise 3

Write a function that takes a string as an argument and returns the number of capital letters in the string.

Hint: `'foo'.upper()` returns `'FOO'`.

Exercise 4

Write a function that takes two sequences `seq_a` and `seq_b` as arguments and returns `True` if every element in `seq_a` is also an element of `seq_b`, else `False`.

- By “sequence” we mean a list, a tuple or a string.
- Do the exercise without using [sets](https://docs.python.org/3/tutorial/datastructures.htmlsets) and set methods.

Exercise 5

When we cover the numerical libraries, we will see they include many alternatives for interpolation and function approximation.

Nevertheless, let’s write our own function approximation routine as an exercise.

In particular, without using any imports, write a function `linapprox` that takes as arguments

- A function `f` mapping some interval $ [a, b] $ into $ \mathbb R $.
- Two scalars `a` and `b` providing the limits of this interval.
- An integer `n` determining the number of grid points.
- A number `x` satisfying `a <= x <= b`.

and returns the [piecewise linear interpolation](https://en.wikipedia.org/wiki/Linear_interpolation) of `f` at `x`, based on `n` evenly spaced grid points `a = point[0] < point[1] < ... < point[n-1] = b`.

Aim for clarity, not efficiency.

Exercise 6

Using list comprehension syntax, we can simplify the loop in the following code.
import numpy as np

n = 100
ϵ_values = []
for i in range(n):
    e = np.random.randn()
    ϵ_values.append(e)
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
Solutions

Exercise 1

Part 1 Solution:

Here’s one possible solution
x_vals = [1, 2, 3] y_vals = [1, 1, 1] sum([x * y for x, y in zip(x_vals, y_vals)])
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
This also works
sum(x * y for x, y in zip(x_vals, y_vals))
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
Part 2 Solution:

One solution is
sum([x % 2 == 0 for x in range(100)])
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
This also works:
sum(x % 2 == 0 for x in range(100))
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
Some less natural alternatives that nonetheless help to illustrate the flexibility of list comprehensions are
len([x for x in range(100) if x % 2 == 0])
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
and
sum([1 for x in range(100) if x % 2 == 0])
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
Part 3 Solution

Here’s one possibility
pairs = ((2, 5), (4, 2), (9, 8), (12, 10)) sum([x % 2 == 0 and y % 2 == 0 for x, y in pairs])
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
Exercise 2
def p(x, coeff): return sum(a * x**i for i, a in enumerate(coeff)) p(1, (2, 4))
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
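As an extra sanity check (not part of the original solution), the result can be compared with a quick hand computation:

```python
# p(x) = 1 + 0*x + 3*x**2, so p(2) = 1 + 0 + 12 = 13
p(2, (1, 0, 3))
```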
Exercise 3

Here’s one solution:
def f(string):
    count = 0
    for letter in string:
        if letter == letter.upper() and letter.isalpha():
            count += 1
    return count

f('The Rain in Spain')
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
An alternative, more pythonic solution:
def count_uppercase_chars(s): return sum([c.isupper() for c in s]) count_uppercase_chars('The Rain in Spain')
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
Exercise 4

Here’s a solution:
def f(seq_a, seq_b):
    is_subset = True
    for a in seq_a:
        if a not in seq_b:
            is_subset = False
    return is_subset

# == test == #

print(f([1, 2], [1, 2, 3]))
print(f([1, 2, 3], [1, 2]))
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
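A variant (not part of the original solution) that stops scanning as soon as a missing element is found, using the built-in `all()`:

```python
def f(seq_a, seq_b):
    # all() short-circuits at the first element of seq_a not found in seq_b
    return all(a in seq_b for a in seq_a)

print(f([1, 2], [1, 2, 3]))
print(f([1, 2, 3], [1, 2]))
```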
Of course, if we use the `sets` data type then the solution is easier
def f(seq_a, seq_b): return set(seq_a).issubset(set(seq_b))
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
Exercise 5
def linapprox(f, a, b, n, x):
    """
    Evaluates the piecewise linear interpolant of f at x on the interval
    [a, b], with n evenly spaced grid points.

    Parameters
    ==========
        f : function
            The function to approximate

        x, a, b : scalars (floats or integers)
            Evaluation point and endpoints, with a <= x <= b

        n : integer
            Number of grid points

    Returns
    =======
        A float. The interpolant evaluated at x
    """
    length_of_interval = b - a
    num_subintervals = n - 1
    step = length_of_interval / num_subintervals

    # === find first grid point larger than x === #
    point = a
    while point <= x:
        point += step

    # === x must lie between the gridpoints (point - step) and point === #
    u, v = point - step, point

    return f(u) + (x - u) * (f(v) - f(u)) / (v - u)
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
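A quick usage check (not part of the original solution): with a reasonably fine grid, the interpolant should be close to the true function value.

```python
def g(x):
    return x ** 2

# interpolate x**2 on [0, 1] with 11 grid points and evaluate at x = 0.35
approx = linapprox(g, 0, 1, 11, 0.35)
print(approx, g(0.35))   # roughly 0.125 vs 0.1225
```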
Exercise 6

Here’s one solution.
n = 100 ϵ_values = [np.random.randn() for i in range(n)]
_____no_output_____
BSD-3-Clause
tests/project/ipynb/python_essentials.ipynb
QuantEcon/sphinx-tojupyter
`uarray` NumPy Compatibility
from uarray import * import numpy as np from numba import njit
_____no_output_____
BSD-3-Clause
notebooks/NumPy Compat.ipynb
costrouc/uarray
Original Expression

Let's look at this simple NumPy expression, which takes the outer product of two values and then indexes the result:
def some_fn(a, b): return np.multiply.outer(a, b)[5]
_____no_output_____
BSD-3-Clause
notebooks/NumPy Compat.ipynb
costrouc/uarray
We can see that this does a lot of extra work, since we discard most of the results of the outer product after indexing. We can look at the time:
args = [np.arange(1000), np.arange(10)]

# NBVAL_IGNORE_OUTPUT
%timeit some_fn(*args)
27.5 µs ± 214 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
BSD-3-Clause
notebooks/NumPy Compat.ipynb
costrouc/uarray
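Since `np.multiply.outer(a, b)[5]` keeps only row 5 of the outer product, the same result can be computed directly as `a[5] * b`. A hedged comparison, not part of the original notebook (`direct_fn` is a name introduced here for illustration):

```python
def direct_fn(a, b):
    # row 5 of the outer product is just the scalar a[5] times each element of b
    return a[5] * b

# check that the shortcut agrees with the original expression
assert np.array_equal(some_fn(*args), direct_fn(*args))

# NBVAL_IGNORE_OUTPUT
%timeit direct_fn(*args)
```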