Dataset columns: `markdown` (string, 0–1.02M chars), `code` (string, 0–832k chars), `output` (string, 0–1.02M chars), `license` (string, 3–36 chars), `path` (string, 6–265 chars), `repo_name` (string, 6–127 chars).
Gym Crowdedness Analysis with PCA

> Objective: To **predict** how crowded a university gym will be at a given time of day (and given some other features, including the weather).

> Data Description: The dataset consists of 26,000 people counts (about every 10 minutes) over one year. It also contains information about the weather and semester-specific information that might affect how crowded the gym is. The label is the number of people, which has to be predicted given some subset of the features.

**Label**:
- Number of people

**Features**:
1. date (string; datetime of data)
2. timestamp (int; number of seconds since beginning of day)
3. day_of_week (int; 0 [monday] - 6 [sunday])
4. is_weekend (int; 0 or 1) [boolean; 1 if it's either saturday or sunday, otherwise 0]
5. is_holiday (int; 0 or 1) [boolean; 1 if it's a federal holiday, 0 otherwise]
6. temperature (float; degrees fahrenheit)
7. is_start_of_semester (int; 0 or 1) [boolean; 1 if it's the beginning of a school semester, 0 otherwise]
8. month (int; 1 [jan] - 12 [dec])
9. hour (int; 0 - 23)

> Approach: The model will be built and PCA will be implemented in the following way:
- **Data Cleaning and Preprocessing**
- **Exploratory Data Analysis:**
  - Uni-Variate Analysis: Histograms, Distribution Plots
  - Bi-Variate Analysis: Pair Plots
  - Correlation Matrix
- **Processing:**
  - OneHotEncoding
  - Feature Scaling: Standard Scaler
- **Splitting the Dataset**
- **Principal Component Analysis**
- **Modelling: Random Forest**
  - Random Forest without PCA
  - Random Forest with PCA
- **Conclusion**

`1` Data Cleaning and Preprocessing

**Importing Libraries and loading the Dataset**
import numpy as np   # linear algebra
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline

df = pd.read_csv(r'C:\Users\kusht\OneDrive\Desktop\Excel-csv\PCA analysis.csv')  # Replace with the path where the data file is stored
df.head()
_____no_output_____
MIT
Gym Crowd Analysis/Gym Crowd Analysis with PCA (ToDo Template).ipynb
abhisngh/Data-Science
**TASK : Print the `info()` of the dataset**
### START CODE HERE (~ 1 Line of code) ### END CODE
_____no_output_____
MIT
Gym Crowd Analysis/Gym Crowd Analysis with PCA (ToDo Template).ipynb
abhisngh/Data-Science
**TASK : Describe the dataset using `describe()`**
### START CODE HERE (~ 1 Line of code) ### END CODE
_____no_output_____
MIT
Gym Crowd Analysis/Gym Crowd Analysis with PCA (ToDo Template).ipynb
abhisngh/Data-Science
**TASK : Convert the temperature from Fahrenheit to the Celsius scale using the formula `Celsius = (Fahrenheit - 32) * (5/9)`**
### START CODE HERE (~1 Line of code) ### END CODE
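A minimal sketch of one way to do this, assuming the temperature column is named `temperature` (as in the feature list above):

```python
# Hypothetical column name 'temperature'; convert Fahrenheit to Celsius in place
df['temperature'] = (df['temperature'] - 32) * (5 / 9)
```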
_____no_output_____
MIT
Gym Crowd Analysis/Gym Crowd Analysis with PCA (ToDo Template).ipynb
abhisngh/Data-Science
**TASK : Convert the timestamp into hours in 12-hour format, as it is currently in seconds, and drop the `date` column**
### START CODE HERE: (~ 1 Line of code) ### END CODE
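A minimal sketch under the assumption that `timestamp` holds seconds since the start of the day and that the datetime column is named `date`:

```python
# Seconds since midnight -> hour of day, folded into a 12-hour clock (assumed column names)
df['timestamp'] = (df['timestamp'] // 3600) % 12
df = df.drop('date', axis=1)
```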
_____no_output_____
MIT
Gym Crowd Analysis/Gym Crowd Analysis with PCA (ToDo Template).ipynb
abhisngh/Data-Science
`2` Exploratory Data Analysis

`2.1` Uni-Variate and Bi-Variate Analysis

- **Pair Plots**

**TASK : Use `pairplot()` to make different pair scatter plots of the entire dataframe**
### START CODE HERE : ### END CODE
_____no_output_____
MIT
Gym Crowd Analysis/Gym Crowd Analysis with PCA (ToDo Template).ipynb
abhisngh/Data-Science
**TASK: Now analyse the scatter plots between `number_people` and all other attributes, using a `for` loop, to work out the ideal conditions for people to come to the gym**
### START CODE HERE ### END CODE
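A minimal sketch, assuming the label column is named `number_people`:

```python
# Scatter each feature against the label (assumed to be 'number_people')
for col in df.columns.drop('number_people'):
    plt.scatter(df[col], df['number_people'], s=5, alpha=0.3)
    plt.xlabel(col)
    plt.ylabel('number_people')
    plt.show()
```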
_____no_output_____
MIT
Gym Crowd Analysis/Gym Crowd Analysis with PCA (ToDo Template).ipynb
abhisngh/Data-Science
**Analyse the plots and understand:**
1. **At what time, temperature, and day of the week do more people come in?**
2. **Do people like to come to the gym on a holiday or a weekend, or do they prefer to come during working days?**
3. **Which month is most preferable for people to come to the gym?**

- **Distribution Plots**

**TASK : Plot an individual `distplot()` for `temperature` and for `number_people` to check the individual distribution of each attribute**
### START CODE HERE : ### END CODE
_____no_output_____
MIT
Gym Crowd Analysis/Gym Crowd Analysis with PCA (ToDo Template).ipynb
abhisngh/Data-Science
`2.2` Correlation Matrix **TASK : Plot a correlation matrix and make it more understandable using `sns.heatmap`**
### START CODE HERE : ### END CODE HERE
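One possible sketch (no assumptions beyond the dataframe `df` itself):

```python
# Correlation matrix rendered as an annotated heatmap
plt.figure(figsize=(12, 8))
sns.heatmap(df.corr(), annot=True, cmap='coolwarm')
plt.show()
```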
_____no_output_____
MIT
Gym Crowd Analysis/Gym Crowd Analysis with PCA (ToDo Template).ipynb
abhisngh/Data-Science
**Analyse the correlation matrix and understand the different dependencies of the attributes on each other**

`3.` Processing

`3.1` One Hot Encoding: certain attributes are one-hot encoded so that no ranking/priority is implied between their values.

**TASK: One Hot Encode the following attributes: `month`, `hour`, `day_of_week`**
## YOU CAN USE EITHER get_dummies() OR OneHotEncoder() ### START CODE HERE : ### END CODE
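A minimal sketch using `pd.get_dummies`, assuming the columns are named `month`, `hour`, and `day_of_week` as in the feature list:

```python
# One-hot encode the categorical-like integer columns (assumed column names)
df = pd.get_dummies(df, columns=['month', 'hour', 'day_of_week'])
```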
_____no_output_____
MIT
Gym Crowd Analysis/Gym Crowd Analysis with PCA (ToDo Template).ipynb
abhisngh/Data-Science
`3.2` Feature Scaling: some attributes have ranges that are very different from the others, which can cause problems when PCA is applied, so you need to standardise those attributes.

**TASK: Using `StandardScaler()`, standardise `temperature` and `timestamp`**
## You can use two individual scalers: one for temperature and the other for timestamp
## You can use an array data = df.values, standardise data, then split data into X and y
from sklearn.preprocessing import StandardScaler

### START CODE HERE : (Replace places having '#' with the code)
data = df.values

scaler1 = StandardScaler()
scaler1.fit(#)                      # for timestamp
data[#] = scaler1.transform(#)

scaler2 = StandardScaler()
scaler2.fit(data[#])                # for temperature
data[#] = scaler2.transform(data[#])
### END CODE HERE
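A hedged sketch of one way to fill this in, assuming `timestamp` and `temperature` are still columns of `df` (column-based scaling rather than positional indexing into `data`):

```python
from sklearn.preprocessing import StandardScaler

# Assumed column names; each scaler standardises one column to zero mean / unit variance
scaler1 = StandardScaler()
df[['timestamp']] = scaler1.fit_transform(df[['timestamp']])

scaler2 = StandardScaler()
df[['temperature']] = scaler2.fit_transform(df[['temperature']])
```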
_____no_output_____
MIT
Gym Crowd Analysis/Gym Crowd Analysis with PCA (ToDo Template).ipynb
abhisngh/Data-Science
`4.` Splitting the dataset : **TASK : Split the dataset into dependent and independent variables and name them y and X respectively**
### START CODE HERE : ### END CODE
_____no_output_____
MIT
Gym Crowd Analysis/Gym Crowd Analysis with PCA (ToDo Template).ipynb
abhisngh/Data-Science
**TASK : Split the X ,y into training and test set**
from sklearn.model_selection import train_test_split ### START CODE HERE : ### END CODE
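One possible sketch, assuming `X` and `y` were created in the previous step and using an 80/20 split (the split ratio and random seed are arbitrary choices here):

```python
from sklearn.model_selection import train_test_split

# Assumed: X holds the features, y holds 'number_people'
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
```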
_____no_output_____
MIT
Gym Crowd Analysis/Gym Crowd Analysis with PCA (ToDo Template).ipynb
abhisngh/Data-Science
`5.` Principal Component Analysis

Principal component analysis (PCA) is a technique for reducing the dimensionality of such datasets, increasing interpretability while at the same time minimizing information loss. It does so by creating new uncorrelated variables that successively maximize variance.

**How does it work?**
- First, a matrix is calculated that summarizes how our variables all relate to one another.
- Second, that matrix is broken down into two separate components: direction and magnitude, so it is easy to understand the "directions" of the data and their "magnitude" (how "important" each direction is). The photo below displays the two main directions in this data: the "red direction" and the "green direction." In this case, the "red direction" is the more important one: given how the dots are arranged, the "red direction" captures most of the spread of the data and is thus more important than the "green direction" (Hint: think of what fitting a line of best fit to this data would look like).
- Then the data is transformed to align with these important directions (which are combinations of our original variables). The photo below is the exact same data as above, but transformed so that the x- and y-axes are now the "red direction" and "green direction." What would the line of best fit look like here?

So PCA tries to find the most important directions along which most of the data is spread and reduces the data to those components, thereby reducing the number of attributes to train on and increasing computational speed. A 3D example is given below: as you can see, a 3D plot is reduced to a 2D plot while still retaining most of the information.

**Now that you have understood this, let's try to implement it.**

**TASK : Print the PCA `fit_transform` of X (the independent variables)**
from sklearn.decomposition import PCA

### START CODE HERE : (Replace spaces having '#' with the code)
pca = PCA()
pca.fit_transform(#)
### END CODE
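A minimal completed sketch, assuming `X` is the feature matrix built above:

```python
from sklearn.decomposition import PCA

# Fit PCA on all components and look at the transformed data
pca = PCA()
X_pca = pca.fit_transform(X)
print(X_pca)
```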
_____no_output_____
MIT
Gym Crowd Analysis/Gym Crowd Analysis with PCA (ToDo Template).ipynb
abhisngh/Data-Science
**TASK : Get covariance using `get_covariance()`**
### START CODE HERE (~ 1 line of code) ### END CODE HERE
_____no_output_____
MIT
Gym Crowd Analysis/Gym Crowd Analysis with PCA (ToDo Template).ipynb
abhisngh/Data-Science
**TASK : Get the explained variance using `explained_variance_ratio_`**
### START CODE HERE : ### END CODE
_____no_output_____
MIT
Gym Crowd Analysis/Gym Crowd Analysis with PCA (ToDo Template).ipynb
abhisngh/Data-Science
**TASK : Plot a bar graph of `explained variance`**
# You can use plt.bar()
### START CODE HERE : (Replace spaces having '#' with the code)
with plt.style.context('dark_background'):
    plt.figure(figsize=(15, 12))
    plt.bar(range(49), '#', alpha=0.5, align='center', label='individual explained variance')
    plt.ylabel('#')
    plt.xlabel('#')
    plt.legend(loc='best')
    plt.tight_layout()
### END CODE
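A hedged completed sketch, assuming `pca` is the fitted `PCA()` object from above (the number of bars is taken from the fitted object rather than hard-coded):

```python
explained = pca.explained_variance_ratio_

with plt.style.context('dark_background'):
    plt.figure(figsize=(15, 12))
    plt.bar(range(len(explained)), explained, alpha=0.5, align='center',
            label='individual explained variance')
    plt.ylabel('Explained variance ratio')
    plt.xlabel('Principal component index')
    plt.legend(loc='best')
    plt.tight_layout()
```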
_____no_output_____
MIT
Gym Crowd Analysis/Gym Crowd Analysis with PCA (ToDo Template).ipynb
abhisngh/Data-Science
**Analyse the plot and estimate how many components you want to keep**

**TASK : Make a `PCA()` object with `n_components=20`, fit-transform it on the dataset (X), and assign the result to a new variable `X_new`**
### START CODE HERE : ### END CODE
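A minimal sketch of this step (20 components, as stated in the task, and `X` assumed to be the feature matrix from above):

```python
# Reduce X to 20 principal components
pca = PCA(n_components=20)
X_new = pca.fit_transform(X)
```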
_____no_output_____
MIT
Gym Crowd Analysis/Gym Crowd Analysis with PCA (ToDo Template).ipynb
abhisngh/Data-Science
Now, `X_new` is the PCA-transformed dataset.

**TASK : Get the covariance using `get_covariance`**
### START CODE HERE (~1 Line of code) ### END CODE
_____no_output_____
MIT
Gym Crowd Analysis/Gym Crowd Analysis with PCA (ToDo Template).ipynb
abhisngh/Data-Science
**TASK : Get the explained variance using `explained_variance_ratio_`**
### START CODE HERE : ### END CODE
_____no_output_____
MIT
Gym Crowd Analysis/Gym Crowd Analysis with PCA (ToDo Template).ipynb
abhisngh/Data-Science
**TASK : Plot a bar plot of the `explained variance`**
# You can use plt.bar() ### START CODE HERE: ### END CODE
_____no_output_____
MIT
Gym Crowd Analysis/Gym Crowd Analysis with PCA (ToDo Template).ipynb
abhisngh/Data-Science
`6.` Modelling: Random Forest

To understand the random forest model, let's first get a brief idea about decision trees in general. Decision trees are very intuitive, and everyone has used them, knowingly or unknowingly, at some point. The model keeps sorting samples into categories by the responses to a sequence of questions (decisions), forming a large tree, which is why it is called a decision tree. An image example would help understand it better.

`Random Forest`: A random forest, as its name implies, consists of a large number of individual decision trees that operate as an [ensemble](https://en.wikipedia.org/wiki/Ensemble_learning). Each individual tree in the random forest spits out a prediction, and the prediction with the most votes becomes our model's prediction. The fundamental concept is that a large number of relatively uncorrelated models (trees) operating as a committee will outperform any of the individual constituent models. Since this dataset has very low correlation between attributes, a random forest can be a good option.

In this section you'll have to make a random forest model and train it on both the dataset without PCA and the dataset with PCA, to analyse the differences.

`6.1` Random Forest Without PCA

**TASK : Make a random forest model and train it on the training set without PCA**
# Establish the model
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor()

# Try different numbers of n_estimators and print the scores
# You can use a variable estimators = np.arange(10, 200, 10) and then a for loop over all the values of estimators
### START CODE HERE : (Replace spaces having '#' with code)
estimators = np.arange(10, 200, 10)
scores = []
for n in estimators:
    model.set_params(n_estimators='#')
    model.fit('#', '#')
    scores.append(model.score(X_test, y_test))
print(scores)
### END CODE HERE
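A hedged sketch of the completed loop, assuming `X_train`, `X_test`, `y_train`, `y_test` come from the split without PCA (refitting for every value of `n_estimators` can be slow on ~26k rows):

```python
estimators = np.arange(10, 200, 10)
scores = []
for n in estimators:
    model.set_params(n_estimators=n)             # vary the number of trees
    model.fit(X_train, y_train)
    scores.append(model.score(X_test, y_test))   # R^2 on the held-out set
print(scores)
```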
_____no_output_____
MIT
Gym Crowd Analysis/Gym Crowd Analysis with PCA (ToDo Template).ipynb
abhisngh/Data-Science
**TASK : Make a plot between `n_estimator` and `scores` to properly get the best number of estimators**
## Use plt.plot ### START CODE HERE : ### END CODE HERE
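One possible sketch, reusing `estimators` and `scores` from the previous cell:

```python
plt.title("Effect of n_estimators")
plt.xlabel("n_estimators")
plt.ylabel("score")
plt.plot(estimators, scores)
plt.show()
```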
_____no_output_____
MIT
Gym Crowd Analysis/Gym Crowd Analysis with PCA (ToDo Template).ipynb
abhisngh/Data-Science
`6.2` Random Forest With PCA

**TASK : Split your PCA dataset into a training and a testing set using `train_test_split`**
from sklearn.model_selection import train_test_split ### START CODE HERE : ### END CODE
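A minimal sketch, assuming `X_new` is the PCA-transformed feature matrix created above:

```python
from sklearn.model_selection import train_test_split

X_train_pca, X_test_pca, y_train_pca, y_test_pca = train_test_split(
    X_new, y, test_size=0.2, random_state=42)
```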
_____no_output_____
MIT
Gym Crowd Analysis/Gym Crowd Analysis with PCA (ToDo Template).ipynb
abhisngh/Data-Science
**TASK : Make a random forest model called `model_pca`, fit it on the new X_train and y_train, and then print out the random forest scores for the dataset with PCA applied to it**
# Establish the model
from sklearn.ensemble import RandomForestRegressor
model_pca = RandomForestRegressor()

# You can use different numbers of estimators
# You can use a variable estimators = np.arange(10, 200, 10) and then a for loop over all the values of estimators
### START CODE HERE :
### END CODE
_____no_output_____
MIT
Gym Crowd Analysis/Gym Crowd Analysis with PCA (ToDo Template).ipynb
abhisngh/Data-Science
**TASK : Make a plot between `n_estimator` and `score` and find the best parameter**
# you can use plt.plot ### START CODE HERE : ### END CODE
_____no_output_____
MIT
Gym Crowd Analysis/Gym Crowd Analysis with PCA (ToDo Template).ipynb
abhisngh/Data-Science
Performance programming

We've spent most of this course looking at how to make code readable and reliable. For research work, it is often also important that code is efficient: that it does what it needs to do *quickly*. It is very hard to work out beforehand whether code will be efficient or not: it is essential to *profile* code, to measure its performance, to determine what aspects of it are slow.

When we looked at functional programming, we claimed that code which is conceptualised in terms of actions on whole data-sets rather than individual elements is more efficient. Let's measure the performance of some different ways of implementing some code and see how they perform.

Two Mandelbrots

You're probably familiar with a famous fractal called the [Mandelbrot Set](https://www.youtube.com/watch?v=ZDU40eUcTj0). For a complex number $c$, $c$ is in the Mandelbrot set if the series $z_{i+1} = z_i^2 + c$ (with $z_0 = c$) stays close to $0$. Traditionally, we plot a color showing how many steps are needed before $\left|z_i\right| > 2$, whereupon we are sure the series will diverge. Here's a trivial Python implementation:
def mandel1(position, limit=50):
    value = position
    while abs(value) < 2:
        limit -= 1
        value = value**2 + position
        if limit < 0:
            return 0
    return limit

xmin = -1.5
ymin = -1.0
xmax = 0.5
ymax = 1.0
resolution = 300
xstep = (xmax - xmin) / resolution
ystep = (ymax - ymin) / resolution
xs = [(xmin + (xmax - xmin) * i / resolution) for i in range(resolution)]
ys = [(ymin + (ymax - ymin) * i / resolution) for i in range(resolution)]

%%timeit
data = [[mandel1(complex(x, y)) for x in xs] for y in ys]

data1 = [[mandel1(complex(x, y)) for x in xs] for y in ys]

%matplotlib inline
import matplotlib.pyplot as plt
plt.imshow(data1, interpolation='none')
_____no_output_____
CC-BY-3.0
ch08performance/010intro.ipynb
jack89roberts/rsd-engineeringcourse
In this lesson we will learn how to make a version of this code which works ten times faster:
import numpy as np

def mandel_numpy(position, limit=50):
    value = position
    diverged_at_count = np.zeros(position.shape)
    while limit > 0:
        limit -= 1
        value = value**2 + position
        diverging = value * np.conj(value) > 4
        first_diverged_this_time = np.logical_and(diverging, diverged_at_count == 0)
        diverged_at_count[first_diverged_this_time] = limit
        value[diverging] = 2
    return diverged_at_count

ymatrix, xmatrix = np.mgrid[ymin:ymax:ystep, xmin:xmax:xstep]
values = xmatrix + 1j * ymatrix
data_numpy = mandel_numpy(values)

%matplotlib inline
import matplotlib.pyplot as plt
plt.imshow(data_numpy, interpolation='none')

%%timeit
data_numpy = mandel_numpy(values)
50.9 ms ± 10.8 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
CC-BY-3.0
ch08performance/010intro.ipynb
jack89roberts/rsd-engineeringcourse
Note we get the same answer:
sum(sum(abs(data_numpy - data1)))
_____no_output_____
CC-BY-3.0
ch08performance/010intro.ipynb
jack89roberts/rsd-engineeringcourse
Matplotlib (Matplotlib Pt. 3)

Plot Appearance in Matplotlib
import matplotlib.pyplot as plt %matplotlib inline import numpy as np x = np.linspace(0,5,11) # We go from 0 to 5 and grab 11 points which are linearly spaced. y = x ** 2 fig = plt.figure() # Add a set of axes to the figure. ax = fig.add_axes([0,0,1,1]) # To add color to the plot there are multiple ways like directly typing the common color names or passing the HEX Color values. ax.plot(x,y,color='aqua') # Common color fig = plt.figure() ax = fig.add_axes([0,0,1,1]) ax.plot(x,y,color='#008080') # RGB Hex Code for Teal fig, ax = plt.subplots() ax.plot(x, x+1, color="blue", alpha=0.5) # half-transparant ax.plot(x, x+2, color="#8B008B") # RGB hex code ax.plot(x, x+3, color="#FF8C00") # RGB hex code
_____no_output_____
BSD-3-Clause
08. Data Visualization - Matplotlib/.ipynb_checkpoints/8.2 Matplotlib Pt 3-checkpoint.ipynb
CommunityOfCoders/ML_Workshop_Teachers
Linewidth and Line Style
fig = plt.figure() ax = fig.add_axes([0,0,1,1]) ax.plot(x,y,color='#008080') # Default Line width # 5 times the linewidth fig = plt.figure() ax = fig.add_axes([0,0,1,1]) ax.plot(x,y,color='#008080',lw=5)#A shorthand is used here for linewidth which is lw # To get transparency on the plotted line we can pass the alpha parameter to plot function. fig = plt.figure() ax = fig.add_axes([0,0,1,1]) ax.plot(x,y,color='#008080',lw=3,alpha=0.5) # Alpha can be any value between [0.0,1.0] where 0.0 is line being 100% transparent (invisible) and 1 is 0% transparent. fig = plt.figure() ax = fig.add_axes([0,0,1,1]) # ax.plot(x,y,color='#008080',lw=3,linestyle='--') # Other variants of linestyle include # ax.plot(x,y,color='#008080',lw=3,ls='-.') ax.plot(x,y,color='#008080',lw=3,ls=':') # ax.plot(x,y,color='#008080',lw=3,ls='steps') # Steps on dotted -- # A solid line has linestyle = - by default. # ls also works as a shorthand for linestyle, and infact is more used than linestyle.
_____no_output_____
BSD-3-Clause
08. Data Visualization - Matplotlib/.ipynb_checkpoints/8.2 Matplotlib Pt 3-checkpoint.ipynb
CommunityOfCoders/ML_Workshop_Teachers
Markers
* Markers are used when we have just a small number of data points.
# Say we have x an array of len(x) data points. x len(x) # Say if we wanted to mark where those 11 points occured on the plot. fig = plt.figure() ax = fig.add_axes([0,0,1,1]) ax.plot(x,y,color='#008080',lw=3,marker='o',markersize=15,markerfacecolor='yellow', markeredgewidth=3,markeredgecolor='black')
_____no_output_____
BSD-3-Clause
08. Data Visualization - Matplotlib/.ipynb_checkpoints/8.2 Matplotlib Pt 3-checkpoint.ipynb
CommunityOfCoders/ML_Workshop_Teachers
More examples on line and marker styles
fig, ax = plt.subplots(figsize=(12,6)) ax.plot(x, x+1, color="red", linewidth=0.25) ax.plot(x, x+2, color="red", linewidth=0.50) ax.plot(x, x+3, color="red", linewidth=1.00) ax.plot(x, x+4, color="red", linewidth=2.00) # possible linestype options ‘-‘, ‘–’, ‘-.’, ‘:’, ‘steps’ ax.plot(x, x+5, color="green", lw=3, linestyle='-') ax.plot(x, x+6, color="green", lw=3, ls='-.') ax.plot(x, x+7, color="green", lw=3, ls=':') # custom dash line, = ax.plot(x, x+8, color="black", lw=1.50) line.set_dashes([5, 10, 15, 10]) # format: line length, space length, ... # possible marker symbols: marker = '+', 'o', '*', 's', ',', '.', '1', '2', '3', '4', ... ax.plot(x, x+ 9, color="blue", lw=3, ls='-', marker='+') ax.plot(x, x+10, color="blue", lw=3, ls='--', marker='o') ax.plot(x, x+11, color="blue", lw=3, ls='-', marker='s') ax.plot(x, x+12, color="blue", lw=3, ls='--', marker='1') # marker size and color ax.plot(x, x+13, color="purple", lw=1, ls='-', marker='o', markersize=2) ax.plot(x, x+14, color="purple", lw=1, ls='-', marker='o', markersize=4) ax.plot(x, x+15, color="purple", lw=1, ls='-', marker='o', markersize=8, markerfacecolor="red") ax.plot(x, x+16, color="purple", lw=1, ls='-', marker='s', markersize=8, markerfacecolor="yellow", markeredgewidth=3, markeredgecolor="green");
_____no_output_____
BSD-3-Clause
08. Data Visualization - Matplotlib/.ipynb_checkpoints/8.2 Matplotlib Pt 3-checkpoint.ipynb
CommunityOfCoders/ML_Workshop_Teachers
Control over axis appearance* In this section we will look at controlling axis sizing properties in a matplotlib figure.
# Say we wanted to show the plot between 0 and 1 on the x-axis
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.plot(x,y,color='#008080',lw=3,ls='--')

# Say we wanted to show the plot between 0 and 1 on the x-axis
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.plot(x,y,color='#008080',lw=3,ls='--', markeredgewidth=3,markeredgecolor='black')
ax.set_xlim([0,1])  # Call set_xlim on the axes and pass in a list with a lower and upper bound.
ax.set_ylim([0,2])  # Compare this with the plot above.
_____no_output_____
BSD-3-Clause
08. Data Visualization - Matplotlib/.ipynb_checkpoints/8.2 Matplotlib Pt 3-checkpoint.ipynb
CommunityOfCoders/ML_Workshop_Teachers
Plot Range
* We can configure the ranges of the axes using the set_ylim and set_xlim methods of the axes object, or axis('tight') for automatically getting "tightly fitted" axes ranges:
fig, axes = plt.subplots(1, 3, figsize=(12, 4)) axes[0].plot(x, x**2, label = 'X squaraed',color='red') axes[0].plot(x, x**3,label='X cube',color='green') axes[0].set_title("default axes ranges") axes[1].plot(x, x**2, label = 'X squaraed',color='red') axes[1].plot(x, x**3,label='X cube',color='green') axes[1].axis('tight') axes[1].set_title("tight axes") axes[2].plot(x, x**2, label = 'X squaraed',color='red') axes[2].plot(x, x**3,label='X cube',color='green') axes[2].set_ylim([0, 60]) axes[2].set_xlim([2, 5]) axes[2].set_title("custom axes range"); axes[0].legend(loc=0) axes[1].legend(loc=0) axes[2].legend(loc=0)
_____no_output_____
BSD-3-Clause
08. Data Visualization - Matplotlib/.ipynb_checkpoints/8.2 Matplotlib Pt 3-checkpoint.ipynb
CommunityOfCoders/ML_Workshop_Teachers
Special Plot Types
* There are many specialized plots we can create, such as bar plots, histograms, scatter plots, and much more. Most of these types of plots we will actually create using seaborn, a statistical plotting library for Python. But here are a few examples of these types of plots:
# Scatter plot
plt.scatter(x,y)

# Histogram
from random import sample
data = sample(range(1, 1000), 100)
plt.hist(data)

data = [np.random.normal(0, std, 100) for std in range(1, 4)]

# rectangular box plot
plt.boxplot(data,vert=True,patch_artist=True);
_____no_output_____
BSD-3-Clause
08. Data Visualization - Matplotlib/.ipynb_checkpoints/8.2 Matplotlib Pt 3-checkpoint.ipynb
CommunityOfCoders/ML_Workshop_Teachers
--- 2. Select subsets from our dataset---
from digits.data import matimport from digits.data import select dataroot='../../data/thomas/artcorr/' imp = matimport.Importer(dataroot=dataroot)
_____no_output_____
MIT
data/Selecting.ipynb
eegdigits/notebooks
With `imp.open()` we can use HDF5 references to our samples and targets datasets without using up initial memory. The `samples` and `targets` objects are attached to the `store` attribute. In this notebook we will load the samples and targets from the file right away.
imp.open('3131.h5') samples = imp.store.samples targets = imp.store.targets 670*16 print(select.getsessionnames(samples)) for sess in select.getsessionnames(samples): print(samples.xs(sess, level='session').shape[0])
['01', '07', '08', '09', '10', '11', '12', '13', '14', '15', '16'] 632 650 652 652 683 687 669 658 610 672 609
MIT
data/Selecting.ipynb
eegdigits/notebooks
The functions in `digits.data.select` provide a high-level abstraction for subselecting and pruning the large dataset, specific to the study's parameters. For instance:

column-wise
+ select only sampling points from a time window with `select.fromtimerange(samples, min, max)`
+ select all sampling points from a named list of channels with `select.fromchannellist(samples, list)`
+ select all sampling points from a range with `select.fromchannelrange(samples, min, max)`

row-wise
+ select all trials from a list of named session ids with `select.fromtrialid(samples, id-list)`
+ ...

Some helper functions inside the `select` package help to get the names of the index/column labels. The idea is to incrementally reduce the dataset to the desired size and/or programmatically loop over a number of blocks in the dataset with a sliding-window analysis in mind.

Example:
print(select.getchannelnames(samples)) print(select.getsessionnames(samples)) print(select.getpresentationnames(samples)) print(select.getsessionnames(samples))
['01', '07', '08', '09', '10', '11', '12', '13', '14', '15', '16']
MIT
data/Selecting.ipynb
eegdigits/notebooks
The level/index names can be displayed with `head()` quite nicely:
samples.head()
_____no_output_____
MIT
data/Selecting.ipynb
eegdigits/notebooks
Now for the selection:
print(samples.shape) print(select.getsessionnames(samples)) samples, targets = select.fromsessionlist(samples, targets, ['14', '15']) samples.shape samples = select.fromchannellist(samples, ['C1', 'C2']) print(samples.shape) samples = select.fromtimerange(samples, 't_0200', 't_0201') print(samples.shape) samples, targets = select.frompresentationlist(samples, targets, ['1','2','3','4']) samples.head(10) print(samples.head(10).to_latex())
\begin{tabular}{llllrrrr} \toprule & & & & C1 & & C2 & \\ & & & & t\_0200 & t\_0201 & t\_0200 & t\_0201 \\ subject & session & trial & presentation & & & & \\ \midrule 3131 & 14 & 2 & 1 & -7.291202 & -8.348700 & -10.118226 & -11.385602 \\ & & & 2 & 5.475969 & 9.075162 & 9.528195 & 12.702423 \\ & & & 3 & 13.696177 & 14.089633 & 20.607351 & 20.646475 \\ & & & 4 & 3.592678 & 3.019830 & 5.601668 & 5.647057 \\ & 15 & 1 & 1 & -0.402161 & -0.086583 & -2.192876 & -2.454573 \\ & & & 2 & -9.592463 & -10.213227 & -16.365511 & -16.116913 \\ & & 2 & 3 & -4.315301 & -3.262175 & -3.724870 & -2.534175 \\ & & & 4 & 3.713143 & 5.164251 & 7.716980 & 9.143118 \\ \bottomrule \end{tabular}
MIT
data/Selecting.ipynb
eegdigits/notebooks
DAT210x - Programming with Python for DS

Module 5 - Lab 3
import pandas as pd from datetime import timedelta import matplotlib.pyplot as plt import matplotlib matplotlib.style.use('ggplot') # Look Pretty
_____no_output_____
MIT
Module5/Module5 - Lab3.ipynb
azharmgh/pyrepo
A convenience function for you to use:
def clusterInfo(model): print("Cluster Analysis Inertia: ", model.inertia_) print('------------------------------------------') for i in range(len(model.cluster_centers_)): print("\n Cluster ", i) print(" Centroid ", model.cluster_centers_[i]) print(" #Samples ", (model.labels_==i).sum()) # NumPy Power # Find the cluster with the least # attached nodes def clusterWithFewestSamples(model): # Ensure there's at least on cluster... minSamples = len(model.labels_) minCluster = 0 for i in range(len(model.cluster_centers_)): if minSamples > (model.labels_==i).sum(): minCluster = i minSamples = (model.labels_==i).sum() print("\n Cluster With Fewest Samples: ", minCluster) return (model.labels_==minCluster)
_____no_output_____
MIT
Module5/Module5 - Lab3.ipynb
azharmgh/pyrepo
CDRs

A [call detail record](https://en.wikipedia.org/wiki/Call_detail_record) (CDR) is a data record produced by a telephone exchange or other telecommunications equipment that documents the details of a telephone call or other telecommunications transaction (e.g., text message) that passes through that facility or device. The record contains various attributes of the call, such as time, duration, completion status, source number, and destination number. It is the automated equivalent of the paper toll tickets that were written and timed by operators for long-distance calls in a manual telephone exchange.

The dataset we've curated for you contains call records for 10 people, tracked over the course of 3 years. Your job in this assignment is to find out where each of these people likely lives and where they work!

Start by loading up the dataset and taking a peek at its `head` and `dtypes`. You can convert date-strings to real date-time objects using `pd.to_datetime`, and the times using `pd.to_timedelta`:
df1 = pd.read_csv('Datasets/CDR.csv') df1 = df1.dropna() df1['CallDate'] = pd.to_datetime(df1['CallDate'], 'coerce') df1['CallTime'] = pd.to_timedelta(df1['CallTime']) df1['Duration'] = pd.to_timedelta(df1['Duration']) df1.dtypes
_____no_output_____
MIT
Module5/Module5 - Lab3.ipynb
azharmgh/pyrepo
Create a unique list of the phone number values (people) stored in the `In` column of the dataset, and save them in a regular python list called `unique_numbers`. Manually check through `unique_numbers` to ensure the order the numbers appear is the same order they (uniquely) appear in your dataset:
# .. your code here .. unique_numbers = df1.In.unique().tolist() unique_numbers
_____no_output_____
MIT
Module5/Module5 - Lab3.ipynb
azharmgh/pyrepo
Using some domain expertise, your intuition should direct you to know that people are likely to behave differently on weekends vs on weekdays:

On Weekends
1. People probably don't go into work
2. They probably sleep in late on Saturday
3. They probably run a bunch of random errands, since they couldn't during the week
4. They should be home, at least during the very late hours, e.g. 1-4 AM

On Weekdays
1. People probably are at work during normal working hours
2. They probably are at home in the early morning and during the late night
3. They probably spend time commuting between work and home every day
print("Examining person: ", unique_numbers[0])
Examining person: 4638472273
MIT
Module5/Module5 - Lab3.ipynb
azharmgh/pyrepo
Create a slice called `user1` that filters to only include dataset records where the `In` feature (user phone number) is equal to the first number on your unique list above:
# .. your code here .. user1 = df1[df1['In'] == unique_numbers[0]] user1
_____no_output_____
MIT
Module5/Module5 - Lab3.ipynb
azharmgh/pyrepo
Alter your slice so that it includes only Weekday (Mon-Fri) values:
# .. your code here .. pm5 = pd.to_timedelta('17:00:00') am730 = pd.to_timedelta('07:30:00') #user2 = user1[(((user1['DOW'] == 'Sat') | (user1['DOW'] == 'Sun')) & ((user1['CallTime'] > am1) & (user1['CallTime'] < am4)))] user2 = user1 user1 = user1[(((user1['DOW'] == 'Mon') | (user1['DOW'] == 'Tue') | (user1['DOW'] == 'Wed') | (user1['DOW'] == 'Thu') | (user1['DOW'] == 'Fri')) & ( (user1['CallTime'] < pm5 ) & (user1['CallTime'] > am730 ) ) ) ] user1
_____no_output_____
MIT
Module5/Module5 - Lab3.ipynb
azharmgh/pyrepo
The idea is that the call was placed before 5pm. From midnight to 7:30am, the user is probably sleeping and won't call / wake up to take a call. There should be a brief window in the morning during their commute to work, and then they'll spend the entire day at work. So the assumption is that most of the time is spent either at work or, second most often, at home:
# .. your code here ..
_____no_output_____
MIT
Module5/Module5 - Lab3.ipynb
azharmgh/pyrepo
Plot the Cell Towers the user connected to
# .. your code here .. %matplotlib notebook fig = plt.figure() ax = fig.add_subplot(111) ax.scatter(user1.TowerLon,user1.TowerLat, c='g', marker='o', alpha=0.2) ax.set_title('Weedkay Calls (7:30am - 5pm)') plt.show() from sklearn.cluster import KMeans def doKMeans(data, num_clusters=0): # TODO: Be sure to only feed in Lat and Lon coordinates to the KMeans algo, since none of the other # data is suitable for your purposes. Since both Lat and Lon are (approximately) on the same scale, # no feature scaling is required. Print out the centroid locations and add them onto your scatter # plot. Use a distinguishable marker and color. # # Hint: Make sure you fit ONLY the coordinates, and in the CORRECT order (lat first). This is part # of your domain expertise. Also, *YOU* need to create, initialize (and return) the variable named # `model` here, which will be a SKLearn K-Means model for this to work: # .. your code here .. data = data[['TowerLat', 'TowerLon']] model = KMeans(n_clusters=num_clusters) model.fit(data) # Now we can print and plot the centroids: centroids = model.cluster_centers_ print(centroids) #ax.scatter(centroids[:,0], centroids[:,1], marker='x', c='red', alpha=0.3) return model
_____no_output_____
MIT
Module5/Module5 - Lab3.ipynb
azharmgh/pyrepo
Let's run K-Means with `K=3` or `K=4`. There really should only be two areas of concentration. If you notice multiple areas that are "hot" (multiple areas the user spends a lot of time at that are FAR apart from one another), then increase to K=5, with the goal being that all centroids except two will sweep up the annoying outliers and the not-home, not-work travel occasions. The other two will zero in on the user's approximate home and work locations. Or rather, the locations of the cell towers closest to them.
model = doKMeans(user1, 4)
[[ 32.84579692 -96.81976265] [ 32.89970164 -96.91026779] [ 32.87348968 -96.85115015] [ 32.911583 -96.892222 ]]
MIT
Module5/Module5 - Lab3.ipynb
azharmgh/pyrepo
Print out the mean `CallTime` value for the samples belonging to the cluster with the FEWEST samples attached to it. If our logic is correct, the cluster with the MOST samples will be work. The cluster with the 2nd most samples will be home. And the `K=3` cluster with the fewest samples should be somewhere in between the two. What time, on average, is the user in between home and work, between midnight and 5pm?
midWayClusterIndices = clusterWithFewestSamples(model) midWaySamples = user1[midWayClusterIndices] print(" Its Waypoint Time: ", midWaySamples.CallTime.mean())
Cluster With Fewest Samples: 3 Its Waypoint Time: 0 days 07:44:31.892341
MIT
Module5/Module5 - Lab3.ipynb
azharmgh/pyrepo
Let's visualize the results! First draw the X's for the clusters:
fig1 = plt.figure() ax1 = fig1.add_subplot(111) ax1.scatter(model.cluster_centers_[:,1], model.cluster_centers_[:,0], s=169, c='r', marker='x', alpha=0.8, linewidths=2) ax1.set_title('Weekday Calls Centroids') plt.show() clusterInfo(model) users_phones = [2068627935,2894365987,1559410755,3688089071] def examineNumber(df, number, num_clusters): print("Examining person: ", number) user = df[df['In'] == number] pm5 = pd.to_timedelta('17:00:00') am730 = pd.to_timedelta('07:30:00') user = user[(((user['DOW'] == 'Mon') | (user['DOW'] == 'Tue') | (user['DOW'] == 'Wed') | (user['DOW'] == 'Thu') | (user['DOW'] == 'Fri')) & ( (user['CallTime'] < pm5 ) & (user['CallTime'] > am730 ) ) ) ] data = user[['TowerLat', 'TowerLon']] model = KMeans(n_clusters=num_clusters) model.fit(data) # Now we can print and plot the centroids: clusterInfo(model) return model examineNumber(df1,users_phones[0],4) examineNumber(df1,users_phones[1],4)
Examining person: 2894365987 Cluster Analysis Inertia: 0.00584613804294 ------------------------------------------ Cluster 0 Centroid [ 32.717667 -96.875194] #Samples 141 Cluster 1 Centroid [ 32.72174109 -96.89194104] #Samples 2705 Cluster 2 Centroid [ 32.741889 -96.857611] #Samples 241 Cluster 3 Centroid [ 32.698088 -96.92053683] #Samples 6
MIT
Module5/Module5 - Lab3.ipynb
azharmgh/pyrepo
2894365987 is the closest so far
examineNumber(df1,users_phones[2],4) examineNumber(df1,users_phones[3],4) def getClusterSamples(df, number, num_clusters): print("getting cluster for person: ", number) user = df[df['In'] == number] pm5 = pd.to_timedelta('17:00:00') am730 = pd.to_timedelta('07:30:00') user = user[(((user['DOW'] == 'Mon') | (user['DOW'] == 'Tue') | (user['DOW'] == 'Wed') | (user['DOW'] == 'Thu') | (user['DOW'] == 'Fri')) & ( (user['CallTime'] < pm5 ) & (user['CallTime'] > am730 ) ) ) ] data = user[['TowerLat', 'TowerLon']] model = KMeans(n_clusters=num_clusters) model.fit(data) # Now we can print and plot the centroids: smallest_cluster_index = clusterWithFewestSamples(model) sample = user[smallest_cluster_index] print(" Avg time : ", sample.CallTime.mean()) return sample clusterlist = [] for i in range(len(unique_numbers)): print("examining user : " , unique_numbers[i]) c = getClusterSamples(df1,unique_numbers[i],3) clusterlist.append(c)
examining user : 4638472273 getting cluster for person: 4638472273 Cluster With Fewest Samples: 1 Avg time : 0 days 07:44:01.395089 examining user : 1559410755 getting cluster for person: 1559410755 Cluster With Fewest Samples: 0 Avg time : 0 days 07:49:46.609049 examining user : 4931532174 getting cluster for person: 4931532174 Cluster With Fewest Samples: 2 Avg time : 0 days 10:25:23.941509 examining user : 2419930464 getting cluster for person: 2419930464 Cluster With Fewest Samples: 2 Avg time : 0 days 07:47:11.097689 examining user : 1884182865 getting cluster for person: 1884182865 Cluster With Fewest Samples: 1 Avg time : 0 days 07:44:52.338718 examining user : 3688089071 getting cluster for person: 3688089071 Cluster With Fewest Samples: 1 Avg time : 0 days 07:43:12.171078 examining user : 4555003213 getting cluster for person: 4555003213 Cluster With Fewest Samples: 1 Avg time : 0 days 08:04:09.204236 examining user : 2068627935 getting cluster for person: 2068627935 Cluster With Fewest Samples: 2 Avg time : 0 days 07:51:24.887866 examining user : 2894365987 getting cluster for person: 2894365987 Cluster With Fewest Samples: 2 Avg time : 0 days 07:50:14.382905 examining user : 8549533077 getting cluster for person: 8549533077 Cluster With Fewest Samples: 2 Avg time : 0 days 07:53:54.772647
MIT
Module5/Module5 - Lab3.ipynb
azharmgh/pyrepo
Introduction to Machine Learning I

Ali Taylan Cemgil

Parametric Regression, the Parametric Function Fitting Problem

For given input-output pairs $x, y$, the problem of fitting a parametric function $f$: choose the parameter values $w$ such that
$$y \approx f(x; w)$$

$x$: Input
$y$: Output
$w$: Parameter (weight)
$e$: Error

Example 1:
$$e = y - f(x)$$

Example 2:
$$e = \frac{y}{f(x)}-1$$

$E$, $D$: Error function, Divergence

Linear Regression

The case where the function $f$ to be fitted is linear in the **model parameters** $w$ (it does not need to be linear in the inputs $x$).

Definition: Linearity
A function $g$ is linear if, for any scalars $a$ and $b$,
$$g(a w_1 + b w_2) = a g(w_1) + b g(w_2)$$

Example: Line Fitting
* Input-output pairs $(x_i, y_i)$, $i=1\dots N$
* Model
$$y_i \approx f(x_i; w_1, w_0) = w_0 + w_1 x_i$$
> $x$: Input
> $w_1$: Slope
> $w_0$: Intercept

$f_i \equiv f(x_i; w_1, w_0)$

Example 2: Fitting a Parabola
* Input-output pairs $(x_i, y_i)$, $i=1\dots N$
* Model
$$y_i \approx f(x_i; w_2, w_1, w_0) = w_0 + w_1 x_i + w_2 x_i^2$$
> $x$: Input
> $w_2$: Coefficient of the quadratic term
> $w_1$: Coefficient of the linear term
> $w_0$: Constant term

$f_i \equiv f(x_i; w_2, w_1, w_0)$

A parabola is not a linear function of $x$, but it is a linear function of the parameters $w_2, w_1, w_0$.
import matplotlib.pyplot as plt import numpy as np %matplotlib inline from __future__ import print_function from ipywidgets import interact, interactive, fixed import ipywidgets as widgets import matplotlib.pylab as plt from IPython.display import clear_output, display, HTML x = np.array([8.0 , 6.1 , 11., 7., 9., 12. , 4., 2., 10, 5, 3]) y = np.array([6.04, 4.95, 5.58, 6.81, 6.33, 7.96, 5.24, 2.26, 8.84, 2.82, 3.68]) def plot_fit(w1, w0): f = w0 + w1*x plt.figure(figsize=(4,3)) plt.plot(x,y,'sk') plt.plot(x,f,'o-r') #plt.axis('equal') plt.xlim((0,15)) plt.ylim((0,10)) for i in range(len(x)): plt.plot((x[i],x[i]),(f[i],y[i]),'b') # plt.show() # plt.figure(figsize=(4,1)) plt.bar(x,(f-y)**2/2) plt.title('Toplam kare hata = '+str(np.sum((f-y)**2/2))) plt.ylim((0,10)) plt.xlim((0,15)) plt.show() plot_fit(0.0,3.79) interact(plot_fit, w1=(-2, 2, 0.01), w0=(-5, 5, 0.01));
_____no_output_____
MIT
matkoy2021-1.ipynb
atcemgil/notes
Random Search
x = np.array([8.0 , 6.1 , 11., 7., 9., 12. , 4., 2., 10, 5, 3]) y = np.array([6.04, 4.95, 5.58, 6.81, 6.33, 7.96, 5.24, 2.26, 8.84, 2.82, 3.68]) def hata(y, x, w): N = len(y) f = x*w[1]+w[0] e = y-f return np.sum(e*e)/2 w = np.array([0, 0]) E = hata(y, x, w) for e in range(1000): g = 0.1*np.random.randn(2) w_temp = w + g E_temp = hata(y, x, w_temp) if E_temp<E: E = E_temp w = w_temp #print(e, E) print(e, E) w
999 6.88573142353
MIT
matkoy2021-1.ipynb
atcemgil/notes
Real data: number of vehicles in Turkey
%matplotlib inline import scipy as sc import numpy as np import pandas as pd import matplotlib as mpl import matplotlib.pylab as plt df_arac = pd.read_csv(u'data/arac.csv',sep=';') df_arac[['Year','Car']] #df_arac BaseYear = 1995 x = np.matrix(df_arac.Year[0:]).T-BaseYear y = np.matrix(df_arac.Car[0:]).T/1000000. plt.plot(x+BaseYear, y, 'o-') plt.xlabel('Yil') plt.ylabel('Araba (Milyon)') plt.show() %matplotlib inline from __future__ import print_function from ipywidgets import interact, interactive, fixed import ipywidgets as widgets import matplotlib.pylab as plt from IPython.display import clear_output, display, HTML w_0 = 0.27150786 w_1 = 0.37332256 BaseYear = 1995 x = np.matrix(df_arac.Year[0:]).T-BaseYear y = np.matrix(df_arac.Car[0:]).T/1000000. fig, ax = plt.subplots() f = w_1*x + w_0 plt.plot(x+BaseYear, y, 'o-') ln, = plt.plot(x+BaseYear, f, 'r') plt.xlabel('Years') plt.ylabel('Number of Cars (Millions)') ax.set_ylim((-2,13)) plt.close(fig) def set_line(w_1, w_0): f = w_1*x + w_0 e = y - f ln.set_ydata(f) ax.set_title('Total Error = {} '.format(np.asscalar(e.T*e/2))) display(fig) set_line(0.32,3) interact(set_line, w_1=(-2, 2, 0.01), w_0=(-5, 5, 0.01)); w_0 = 0.27150786 w_1 = 0.37332256 w_2 = 0.1 BaseYear = 1995 x = np.array(df_arac.Year[0:]).T-BaseYear y = np.array(df_arac.Car[0:]).T/1000000. fig, ax = plt.subplots() f = w_2*x**2 + w_1*x + w_0 plt.plot(x+BaseYear, y, 'o-') ln, = plt.plot(x+BaseYear, f, 'r') plt.xlabel('Yıl') plt.ylabel('Araba Sayısı (Milyon)') ax.set_ylim((-2,13)) plt.close(fig) def set_line(w_2, w_1, w_0): f = w_2*x**2 + w_1*x + w_0 e = y - f ln.set_ydata(f) ax.set_title('Ortalama Kare Hata = {} '.format(np.sum(e*e/len(e)))) display(fig) set_line(w_2, w_1, w_0) interact(set_line, w_2=(-0.1,0.1,0.001), w_1=(-2, 2, 0.01), w_0=(-5, 5, 0.01))
_____no_output_____
MIT
matkoy2021-1.ipynb
atcemgil/notes
Example 1, continued: Learning the Model
* Learning: parameter estimation $w = [w_0, w_1]$
* Since in general the model cannot explain the data without error, we define an error for each data point:
$$e_i = y_i - f(x_i; w)$$
* Total squared error
$$E(w) = \frac{1}{2} \sum_i (y_i - f(x_i; w))^2 = \frac{1}{2} \sum_i e_i^2$$
* We can try to reduce the total squared error by changing the parameters $w_0$ and $w_1$.
* Error surface
from itertools import product BaseYear = 1995 x = np.matrix(df_arac.Year[0:]).T-BaseYear y = np.matrix(df_arac.Car[0:]).T/1000000. # Setup the vandermonde matrix N = len(x) A = np.hstack((np.ones((N,1)), x)) left = -5 right = 15 bottom = -4 top = 6 step = 0.05 W0 = np.arange(left,right, step) W1 = np.arange(bottom,top, step) ErrSurf = np.zeros((len(W1),len(W0))) for i,j in product(range(len(W1)), range(len(W0))): e = y - A*np.matrix([W0[j], W1[i]]).T ErrSurf[i,j] = e.T*e/2 plt.figure(figsize=(7,7)) plt.imshow(ErrSurf, interpolation='nearest', vmin=0, vmax=1000,origin='lower', extent=(left,right,bottom,top),cmap='Blues_r') plt.xlabel('w0') plt.ylabel('w1') plt.title('Error Surface') plt.colorbar(orientation='horizontal') plt.show()
_____no_output_____
MIT
matkoy2021-1.ipynb
atcemgil/notes
How Can We Estimate the Model? Idea: Least Squares (Gauss 1795, Legendre 1805)
* Compute the derivative of the total error with respect to $w_0$ and $w_1$, set it to zero, and solve the resulting equations
\begin{eqnarray}
\left(\begin{array}{c} y_0 \\ y_1 \\ \vdots \\ y_{N-1} \end{array}\right)
\approx
\left(\begin{array}{cc} 1 & x_0 \\ 1 & x_1 \\ \vdots & \vdots \\ 1 & x_{N-1} \end{array}\right)
\left(\begin{array}{c} w_0 \\ w_1 \end{array}\right)
\end{eqnarray}
\begin{eqnarray}
y \approx A w
\end{eqnarray}
> $A = A(x)$: Model (design) matrix
> $w$: Model parameters
> $y$: Observations

* Error vector: $$e = y - Aw$$
\begin{eqnarray}
E(w) & = & \frac{1}{2}e^\top e = \frac{1}{2}(y - Aw)^\top (y - Aw)\\
& = & \frac{1}{2}y^\top y - \frac{1}{2} y^\top Aw - \frac{1}{2} w^\top A^\top y + \frac{1}{2} w^\top A^\top Aw \\
& = & \frac{1}{2} y^\top y - y^\top Aw + \frac{1}{2} w^\top A^\top Aw
\end{eqnarray}

Gradient
https://tr.khanacademy.org/math/multivariable-calculus/multivariable-derivatives/partial-derivative-and-gradient-articles/a/the-gradient
\begin{eqnarray}
\frac{d E}{d w } & = & \left(\begin{array}{c} \partial E/\partial w_0 \\ \partial E/\partial w_1 \\ \vdots \\ \partial E/\partial w_{K-1}\end{array}\right)
\end{eqnarray}

Gradient of the total error
\begin{eqnarray}
\frac{d}{d w }E(w) & = & \frac{d}{d w }(\frac{1}{2} y^\top y) &+ \frac{d}{d w }(- y^\top Aw) &+ \frac{d}{d w }(\frac{1}{2} w^\top A^\top Aw) \\
& = & 0 &- A^\top y &+ A^\top A w \\
& = & - A^\top (y - Aw) \\
& = & - A^\top e \\
& \equiv & \nabla E(w)
\end{eqnarray}

Identities that everyone interested in artificial intelligence should know
* Gradient of a vector inner product
\begin{eqnarray}
\frac{d}{d w }(h^\top w) & = & h
\end{eqnarray}
* Gradient of a quadratic form
\begin{eqnarray}
\frac{d}{d w }(w^\top K w) & = & (K+K^\top) w
\end{eqnarray}

For linear models, the least squares solution can be found by solving a system of linear equations
\begin{eqnarray}
w^* & = & \arg\min_{w} E(w)
\end{eqnarray}
* Optimality condition (the gradient must be zero)
\begin{eqnarray}
\nabla E(w^*) & = & 0
\end{eqnarray}
\begin{eqnarray}
0 & = & - A^\top y + A^\top A w^* \\
A^\top y & = & A^\top A w^* \\
w^* & = & (A^\top A)^{-1} A^\top y
\end{eqnarray}
* Geometric (projection) interpretation:
\begin{eqnarray}
f & = & A w^* = A (A^\top A)^{-1} A^\top y
\end{eqnarray}
# Solving the Normal Equations # Setup the Design matrix N = len(x) A = np.hstack((np.ones((N,1)), x)) #plt.imshow(A, interpolation='nearest') # Solve the least squares problem w_ls,E,rank,sigma = np.linalg.lstsq(A, y) print('Parametreler: \nw0 = ', w_ls[0],'\nw1 = ', w_ls[1] ) print('Toplam Kare Hata:', E/2) f = np.asscalar(w_ls[1])*x + np.asscalar(w_ls[0]) plt.plot(x+BaseYear, y, 'o-') plt.plot(x+BaseYear, f, 'r') plt.xlabel('Yıl') plt.ylabel('Araba sayısı (Milyon)') plt.show()
Parametreler: w0 = [[ 4.13258253]] w1 = [[ 0.20987778]] Toplam Kare Hata: [[ 37.19722385]]
MIT
matkoy2021-1.ipynb
atcemgil/notes
Polynomials

Parabola
\begin{eqnarray}
\left(\begin{array}{c} y_0 \\ y_1 \\ \vdots \\ y_{N-1} \end{array}\right)
\approx
\left(\begin{array}{ccc} 1 & x_0 & x_0^2 \\ 1 & x_1 & x_1^2 \\ \vdots & \vdots & \vdots \\ 1 & x_{N-1} & x_{N-1}^2 \end{array}\right)
\left(\begin{array}{c} w_0 \\ w_1 \\ w_2 \end{array}\right)
\end{eqnarray}

Polynomial of degree $K$
\begin{eqnarray}
\left(\begin{array}{c} y_0 \\ y_1 \\ \vdots \\ y_{N-1} \end{array}\right)
\approx
\left(\begin{array}{ccccc} 1 & x_0 & x_0^2 & \dots & x_0^K \\ 1 & x_1 & x_1^2 & \dots & x_1^K \\ \vdots & & & & \vdots \\ 1 & x_{N-1} & x_{N-1}^2 & \dots & x_{N-1}^K \end{array}\right)
\left(\begin{array}{c} w_0 \\ w_1 \\ w_2 \\ \vdots \\ w_K \end{array}\right)
\end{eqnarray}
\begin{eqnarray}
y \approx A w
\end{eqnarray}
> $A = A(x)$: Model matrix
> $w$: Model parameters
> $y$: Observations

The specially structured matrices that arise in polynomial fitting are also called __Vandermonde__ matrices.
x = np.array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]) N = len(x) x = x.reshape((N,1)) y = np.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]).reshape((N,1)) #y = np.array([9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]).reshape((N,1)) #y = np.array([7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]).reshape((N,1)) def fit_and_plot_poly(degree): #A = np.hstack((np.power(x,0), np.power(x,1), np.power(x,2))) A = np.hstack((np.power(x,i) for i in range(degree+1))) # Setup the vandermonde matrix xx = np.matrix(np.linspace(np.asscalar(min(x))-1,np.asscalar(max(x))+1,300)).T A2 = np.hstack((np.power(xx,i) for i in range(degree+1))) #plt.imshow(A, interpolation='nearest') # Solve the least squares problem w_ls,E,rank,sigma = np.linalg.lstsq(A, y) f = A2*w_ls plt.plot(x, y, 'o') plt.plot(xx, f, 'r') plt.xlabel('x') plt.ylabel('y') plt.gca().set_ylim((0,20)) #plt.gca().set_xlim((1950,2025)) if E: plt.title('Mertebe = '+str(degree)+' Hata='+str(E[0])) else: plt.title('Mertebe = '+str(degree)+' Hata= 0') plt.show() fit_and_plot_poly(0) interact(fit_and_plot_poly, degree=(0,10))
_____no_output_____
MIT
matkoy2021-1.ipynb
atcemgil/notes
About

https://www.kaggle.com/uladzimirkapeika/feature-engineering-lightgbm-top-1
https://zhuanlan.zhihu.com/p/145969470

Version 1.0

Libraries
> Check your versions
import pandas as pd import numpy as np from itertools import product import sklearn from sklearn.preprocessing import LabelEncoder from sklearn.linear_model import LinearRegression from sklearn.neighbors import KNeighborsRegressor from sklearn.ensemble import RandomForestRegressor import lightgbm as lgb import calendar from datetime import datetime import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline for p in [np, pd, sns, sklearn, lgb]: print (p.__name__, p.__version__)
numpy 1.20.1 pandas 1.2.3 seaborn 0.11.1 sklearn 0.24.1 lightgbm 3.2.0
MIT
{{cookiecutter.project_slug}}/notebooks/02_sj_salse_predict_ml.ipynb
juforg/cookiecutter-ds-py3tkinter
**Load data**
test = pd.read_csv('../data/0_raw/sales/test.csv.gz') sales = pd.read_csv('../data/0_raw/sales/sales_train.csv.gz', encoding='UTF-8') # sales = pd.read_csv('https://storage.googleapis.com/kaggle-competitions-data/kaggle-v2/8587/868304/compressed/sales_train.csv.zip?GoogleAccessId=web-data@kaggle-161607.iam.gserviceaccount.com&Expires=1617358534&Signature=Y2MaOnxnEFJEJyHGEcWo8TUoNYmCqt2e7oF%2BeCgx%2B4qcXKBIkTieoXUu5xzSOctQt39ow6zEjHx3v9%2FLrNVokl76laWDQoaTbuu8%2Bojg2QssJdGql3rYDE4xtWfLiZibevNa5fgBHFpkyaau56M5nEbqDUw%2BT8TCNZINMNA6VmAcYIO1nKz%2FBruZP3sMQiePLHFkeD80JawbwgJ4OzQ2fq0t0qXNPwNwfhJ%2FicHwqEF5L4Ll7m%2Bd3d1FMUrURGq5CIiOCcZQNZNdQ1RtBOIR0WTSC%2ByN2Y6269N1KiItBf8R8xNW8mu9PRSkZLk2SCiETuQzCg8c6EjT498K12j6Ig%3D%3D&response-content-disposition=attachment%3B+filename%3Dsales_train.csv.zip') shops = pd.read_csv('../data/0_raw/sales/shops.csv') # items = pd.read_csv('../data/0_raw/sales/items.csv.gz', encoding='UTF-8') items = pd.read_csv('https://storage.googleapis.com/kaggle-competitions-data/kaggle-v2/8587/868304/compressed/items.csv.zip?GoogleAccessId=web-data@kaggle-161607.iam.gserviceaccount.com&Expires=1617357929&Signature=QFp3crHy5f1oihj2VTtqfgeXhBl2BvxDhWq%2BZhQrb%2BXOBFlbUY7dR9e7Qi4yLf%2FYh%2FLitHpTw1o4J4LNES6X380v9rEkKCE8uZK93qxm1r66%2BoS9Oj1rlDT%2F5ChHQi0gQpS%2BHYwZ%2FZKageqv7lfXUYqMV9%2FiaKgaaBcoRoVxP5PIbXnXE9l9nUl3CnVEnVHDZ%2BPf6lp%2FaeZV%2Fy%2BiaNYAAOQjXfs81Un8dq9GASTn6x4k%2Bx%2BcmWYct2AWpqQmZNqNVlERB1euDhkVI8Y2EMjJ6YyOlS9vvrkV%2FrkGnmaPp07nzUwbLroSP%2F2Z1LmJ8bntmi0dPyngn2cgfcS4ArY5Zg%3D%3D&response-content-disposition=attachment%3B+filename%3Ditems.csv.zip') item_cats = pd.read_csv('../data/0_raw/sales/item_categories.csv', encoding='UTF-8') print(len(test)) print(len(sales)) print(len(shops)) print(len(items)) print(len(item_cats)) print(items.head())
214200 2935849 60 22170 84 item_name item_id \ 0 ! ВО ВЛАСТИ НАВАЖДЕНИЯ (ПЛАСТ.) D 0 1 !ABBYY FineReader 12 Professional Edition Full... 1 2 ***В ЛУЧАХ СЛАВЫ (UNV) D 2 3 ***ГОЛУБАЯ ВОЛНА (Univ) D 3 4 ***КОРОБКА (СТЕКЛО) D 4 item_category_id 0 40 1 76 2 40 3 40 4 40
MIT
{{cookiecutter.project_slug}}/notebooks/02_sj_salse_predict_ml.ipynb
juforg/cookiecutter-ds-py3tkinter
**Create dataset**

Remove outliers
sns.boxplot(x=sales.item_cnt_day) sns.boxplot(x=sales.item_price) train = sales[(sales.item_price < 100000) & (sales.item_price > 0)] train = train[sales.item_cnt_day < 1001]
/Users/songjie/.local/share/virtualenvs/snp_mvp-8ex0mMfN/lib/python3.7/site-packages/ipykernel_launcher.py:2: UserWarning: Boolean Series key will be reindexed to match DataFrame index.
MIT
{{cookiecutter.project_slug}}/notebooks/02_sj_salse_predict_ml.ipynb
juforg/cookiecutter-ds-py3tkinter
Detect same shops
print(shops[shops.shop_id.isin([0, 57])]['shop_name']) print(shops[shops.shop_id.isin([1, 58])]['shop_name']) print(shops[shops.shop_id.isin([40, 39])]['shop_name']) train.loc[train.shop_id == 0, 'shop_id'] = 57 test.loc[test.shop_id == 0, 'shop_id'] = 57 train.loc[train.shop_id == 1, 'shop_id'] = 58 test.loc[test.shop_id == 1, 'shop_id'] = 58 train.loc[train.shop_id == 40, 'shop_id'] = 39 test.loc[test.shop_id == 40, 'shop_id'] = 39
_____no_output_____
MIT
{{cookiecutter.project_slug}}/notebooks/02_sj_salse_predict_ml.ipynb
juforg/cookiecutter-ds-py3tkinter
Simple train dataset
index_cols = ['shop_id', 'item_id', 'date_block_num'] df = [] for block_num in train['date_block_num'].unique(): cur_shops = train.loc[sales['date_block_num'] == block_num, 'shop_id'].unique() cur_items = train.loc[sales['date_block_num'] == block_num, 'item_id'].unique() df.append(np.array(list(product(*[cur_shops, cur_items, [block_num]])),dtype='int32')) df = pd.DataFrame(np.vstack(df), columns = index_cols,dtype=np.int32) #Add month sales group = train.groupby(['date_block_num','shop_id','item_id']).agg({'item_cnt_day': ['sum']}) group.columns = ['item_cnt_month'] group.reset_index(inplace=True) df = pd.merge(df, group, on=index_cols, how='left') df['item_cnt_month'] = (df['item_cnt_month'] .fillna(0) .clip(0,20) .astype(np.float16)) df.head(5)
_____no_output_____
MIT
{{cookiecutter.project_slug}}/notebooks/02_sj_salse_predict_ml.ipynb
juforg/cookiecutter-ds-py3tkinter
Add test
test['date_block_num'] = 34 test['date_block_num'] = test['date_block_num'].astype(np.int8) test['shop_id'] = test['shop_id'].astype(np.int8) test['item_id'] = test['item_id'].astype(np.int16) df = pd.concat([df, test], ignore_index=True, sort=False, keys=index_cols) df.fillna(0, inplace=True)
_____no_output_____
MIT
{{cookiecutter.project_slug}}/notebooks/02_sj_salse_predict_ml.ipynb
juforg/cookiecutter-ds-py3tkinter
Feature engineering **Shop features*** City of a shop* City coords* Country part (0-4) based on the map
shops['city'] = shops['shop_name'].apply(lambda x: x.split()[0].lower()) shops.loc[shops.city == '!якутск', 'city'] = 'якутск' shops['city_code'] = LabelEncoder().fit_transform(shops['city']) coords = dict() coords['якутск'] = (62.028098, 129.732555, 4) coords['адыгея'] = (44.609764, 40.100516, 3) coords['балашиха'] = (55.8094500, 37.9580600, 1) coords['волжский'] = (53.4305800, 50.1190000, 3) coords['вологда'] = (59.2239000, 39.8839800, 2) coords['воронеж'] = (51.6720400, 39.1843000, 3) coords['выездная'] = (0, 0, 0) coords['жуковский'] = (55.5952800, 38.1202800, 1) coords['интернет-магазин'] = (0, 0, 0) coords['казань'] = (55.7887400, 49.1221400, 4) coords['калуга'] = (54.5293000, 36.2754200, 4) coords['коломна'] = (55.0794400, 38.7783300, 4) coords['красноярск'] = (56.0183900, 92.8671700, 4) coords['курск'] = (51.7373300, 36.1873500, 3) coords['москва'] = (55.7522200, 37.6155600, 1) coords['мытищи'] = (55.9116300, 37.7307600, 1) coords['н.новгород'] = (56.3286700, 44.0020500, 4) coords['новосибирск'] = (55.0415000, 82.9346000, 4) coords['омск'] = (54.9924400, 73.3685900, 4) coords['ростовнадону'] = (47.2313500, 39.7232800, 3) coords['спб'] = (59.9386300, 30.3141300, 2) coords['самара'] = (53.2000700, 50.1500000, 4) coords['сергиев'] = (56.3000000, 38.1333300, 4) coords['сургут'] = (61.2500000, 73.4166700, 4) coords['томск'] = (56.4977100, 84.9743700, 4) coords['тюмень'] = (57.1522200, 65.5272200, 4) coords['уфа'] = (54.7430600, 55.9677900, 4) coords['химки'] = (55.8970400, 37.4296900, 1) coords['цифровой'] = (0, 0, 0) coords['чехов'] = (55.1477000, 37.4772800, 4) coords['ярославль'] = (57.6298700, 39.8736800, 2) shops['city_coord_1'] = shops['city'].apply(lambda x: coords[x][0]) shops['city_coord_2'] = shops['city'].apply(lambda x: coords[x][1]) shops['country_part'] = shops['city'].apply(lambda x: coords[x][2]) shops = shops[['shop_id', 'city_code', 'city_coord_1', 'city_coord_2', 'country_part']] df = pd.merge(df, shops, on=['shop_id'], how='left')
_____no_output_____
MIT
{{cookiecutter.project_slug}}/notebooks/02_sj_salse_predict_ml.ipynb
juforg/cookiecutter-ds-py3tkinter
**Item features**

* Item category
* More common item category
map_dict = {
    'Чистые носители (штучные)': 'Чистые носители',
    'Чистые носители (шпиль)': 'Чистые носители',
    'PC ': 'Аксессуары',
    'Служебные': 'Служебные '
}

items = pd.merge(items, item_cats, on='item_category_id')

items['item_category'] = items['item_category_name'].apply(lambda x: x.split('-')[0])
items['item_category'] = items['item_category'].apply(lambda x: map_dict[x] if x in map_dict.keys() else x)
items['item_category_common'] = LabelEncoder().fit_transform(items['item_category'])

items['item_category_code'] = LabelEncoder().fit_transform(items['item_category_name'])
items = items[['item_id', 'item_category_common', 'item_category_code']]

df = pd.merge(df, items, on=['item_id'], how='left')
_____no_output_____
MIT
{{cookiecutter.project_slug}}/notebooks/02_sj_salse_predict_ml.ipynb
juforg/cookiecutter-ds-py3tkinter
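As a side note on how the coarser category is derived: the cell above takes everything before the first '-' of item_category_name and then collapses a few special cases through map_dict. A tiny illustration with hypothetical category names (they are examples, not values taken from the data):

# Illustrative only: extracting the coarse category prefix before encoding.
import pandas as pd

sample = pd.Series(['Игры - PS4', 'Кино - Blu-Ray', 'Книги - Аудиокниги'])
print(sample.apply(lambda x: x.split('-')[0]).tolist())
# -> ['Игры ', 'Кино ', 'Книги ']  (the trailing space from the split is kept)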
**Date features**

* Weekends count (4 or 5)
* Number of days in a month
def count_days(date_block_num):
    year = 2013 + date_block_num // 12
    month = 1 + date_block_num % 12
    weeknd_count = len([1 for i in calendar.monthcalendar(year, month) if i[6] != 0])
    days_in_month = calendar.monthrange(year, month)[1]
    return weeknd_count, days_in_month, month

map_dict = {i: count_days(i) for i in range(35)}

df['weeknd_count'] = df['date_block_num'].apply(lambda x: map_dict[x][0])
df['days_in_month'] = df['date_block_num'].apply(lambda x: map_dict[x][1])
_____no_output_____
MIT
{{cookiecutter.project_slug}}/notebooks/02_sj_salse_predict_ml.ipynb
juforg/cookiecutter-ds-py3tkinter
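Note that the weekend count produced by count_days is really a count of Sundays falling inside the month (index 6 of calendar.monthcalendar with the default Monday-first layout), which serves as a 4-or-5 proxy for the number of weekends. A quick sanity check for date_block_num 0, which corresponds to January 2013:

# Quick check: January 2013 has 4 Sundays and 31 days.
import calendar

year, month = 2013, 1
weeknd_count = len([1 for w in calendar.monthcalendar(year, month) if w[6] != 0])
days_in_month = calendar.monthrange(year, month)[1]
print(weeknd_count, days_in_month)  # -> 4 31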
**Interaction features**

* Item is new
* Item was bought in this shop before
first_item_block = df.groupby(['item_id'])['date_block_num'].min().reset_index()
first_item_block['item_first_interaction'] = 1

first_shop_item_buy_block = df[df['date_block_num'] > 0].groupby(['shop_id', 'item_id'])['date_block_num'].min().reset_index()
first_shop_item_buy_block['first_date_block_num'] = first_shop_item_buy_block['date_block_num']

df = pd.merge(df, first_item_block[['item_id', 'date_block_num', 'item_first_interaction']], on=['item_id', 'date_block_num'], how='left')
df = pd.merge(df, first_shop_item_buy_block[['item_id', 'shop_id', 'first_date_block_num']], on=['item_id', 'shop_id'], how='left')

df['first_date_block_num'].fillna(100, inplace=True)
df['shop_item_sold_before'] = (df['first_date_block_num'] < df['date_block_num']).astype('int8')
df.drop(['first_date_block_num'], axis=1, inplace=True)

df['item_first_interaction'].fillna(0, inplace=True)
df['shop_item_sold_before'].fillna(0, inplace=True)

df['item_first_interaction'] = df['item_first_interaction'].astype('int8')
df['shop_item_sold_before'] = df['shop_item_sold_before'].astype('int8')
_____no_output_____
MIT
{{cookiecutter.project_slug}}/notebooks/02_sj_salse_predict_ml.ipynb
juforg/cookiecutter-ds-py3tkinter
**Basic lag features**
def lag_feature(df, lags, col):
    tmp = df[['date_block_num', 'shop_id', 'item_id', col]]
    for i in lags:
        shifted = tmp.copy()
        shifted.columns = ['date_block_num', 'shop_id', 'item_id', col + '_lag_' + str(i)]
        shifted['date_block_num'] += i
        df = pd.merge(df, shifted, on=['date_block_num', 'shop_id', 'item_id'], how='left')
        df[col + '_lag_' + str(i)] = df[col + '_lag_' + str(i)].astype('float16')
    return df

# Add sales lags for last 3 months
df = lag_feature(df, [1, 2, 3], 'item_cnt_month')

# Add avg shop/item price
index_cols = ['shop_id', 'item_id', 'date_block_num']
group = train.groupby(index_cols)['item_price'].mean().reset_index().rename(columns={"item_price": "avg_shop_price"}, errors="raise")
df = pd.merge(df, group, on=index_cols, how='left')
df['avg_shop_price'] = (df['avg_shop_price']
                        .fillna(0)
                        .astype(np.float16))

index_cols = ['item_id', 'date_block_num']
group = train.groupby(['date_block_num', 'item_id'])['item_price'].mean().reset_index().rename(columns={"item_price": "avg_item_price"}, errors="raise")
df = pd.merge(df, group, on=index_cols, how='left')
df['avg_item_price'] = (df['avg_item_price']
                        .fillna(0)
                        .astype(np.float16))

df['item_shop_price_avg'] = (df['avg_shop_price'] - df['avg_item_price']) / df['avg_item_price']
df['item_shop_price_avg'].fillna(0, inplace=True)

df = lag_feature(df, [1, 2, 3], 'item_shop_price_avg')
df.drop(['avg_shop_price', 'avg_item_price', 'item_shop_price_avg'], axis=1, inplace=True)
_____no_output_____
MIT
{{cookiecutter.project_slug}}/notebooks/02_sj_salse_predict_ml.ipynb
juforg/cookiecutter-ds-py3tkinter
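For clarity, this is what a single lag join does on a toy frame: the shifted copy moves each month's value forward by the lag, so a row receives the value its (shop, item) pair had that many months earlier, and months with no history are left NaN (filled with 0 later). Illustrative sketch only, with made-up ids:

# Illustrative only: a 1-month lag join on two toy rows.
import pandas as pd

toy = pd.DataFrame({'date_block_num': [20, 21],
                    'shop_id': [5, 5],
                    'item_id': [100, 100],
                    'item_cnt_month': [3.0, 1.0]})
shifted = toy[['date_block_num', 'shop_id', 'item_id', 'item_cnt_month']].copy()
shifted.columns = ['date_block_num', 'shop_id', 'item_id', 'item_cnt_month_lag_1']
shifted['date_block_num'] += 1
print(pd.merge(toy, shifted, on=['date_block_num', 'shop_id', 'item_id'], how='left'))
# -> the block-21 row gets item_cnt_month_lag_1 = 3.0; block 20 has no history, so NaN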
**Target encoding**
# Add target encoding for items for last 3 months
item_id_target_mean = df.groupby(['date_block_num', 'item_id'])['item_cnt_month'].mean().reset_index().rename(columns={"item_cnt_month": "item_target_enc"}, errors="raise")
df = pd.merge(df, item_id_target_mean, on=['date_block_num', 'item_id'], how='left')
df['item_target_enc'] = (df['item_target_enc']
                         .fillna(0)
                         .astype(np.float16))
df = lag_feature(df, [1, 2, 3], 'item_target_enc')
df.drop(['item_target_enc'], axis=1, inplace=True)

# Add target encoding for item/city for last 3 months
item_id_target_mean = df.groupby(['date_block_num', 'item_id', 'city_code'])['item_cnt_month'].mean().reset_index().rename(columns={"item_cnt_month": "item_loc_target_enc"}, errors="raise")
df = pd.merge(df, item_id_target_mean, on=['date_block_num', 'item_id', 'city_code'], how='left')
df['item_loc_target_enc'] = (df['item_loc_target_enc']
                             .fillna(0)
                             .astype(np.float16))
df = lag_feature(df, [1, 2, 3], 'item_loc_target_enc')
df.drop(['item_loc_target_enc'], axis=1, inplace=True)

# Add target encoding for item/shop for last 3 months
item_id_target_mean = df.groupby(['date_block_num', 'item_id', 'shop_id'])['item_cnt_month'].mean().reset_index().rename(columns={"item_cnt_month": "item_shop_target_enc"}, errors="raise")
df = pd.merge(df, item_id_target_mean, on=['date_block_num', 'item_id', 'shop_id'], how='left')
df['item_shop_target_enc'] = (df['item_shop_target_enc']
                              .fillna(0)
                              .astype(np.float16))
df = lag_feature(df, [1, 2, 3], 'item_shop_target_enc')
df.drop(['item_shop_target_enc'], axis=1, inplace=True)
_____no_output_____
MIT
{{cookiecutter.project_slug}}/notebooks/02_sj_salse_predict_ml.ipynb
juforg/cookiecutter-ds-py3tkinter
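Target (mean) encoding replaces a high-cardinality id with the mean of the target for that id within a month. Because the encoded column itself is dropped above and only its 1-3 month lags are kept, the current month's target never leaks into its own features. A minimal illustration with made-up values:

# Illustrative only: mean encoding of item_id within one month.
import pandas as pd

toy = pd.DataFrame({'date_block_num': [20, 20, 20],
                    'item_id': [100, 100, 101],
                    'item_cnt_month': [2.0, 4.0, 1.0]})
enc = (toy.groupby(['date_block_num', 'item_id'])['item_cnt_month']
          .mean().reset_index()
          .rename(columns={'item_cnt_month': 'item_target_enc'}))
print(pd.merge(toy, enc, on=['date_block_num', 'item_id'], how='left'))
# -> both block-20 rows of item 100 get item_target_enc = 3.0; item 101 gets 1.0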
Extra interaction features
# For new items add avg category sales for last 3 months
item_id_target_mean = df[df['item_first_interaction'] == 1].groupby(['date_block_num', 'item_category_code'])['item_cnt_month'].mean().reset_index().rename(columns={"item_cnt_month": "new_item_cat_avg"}, errors="raise")
df = pd.merge(df, item_id_target_mean, on=['date_block_num', 'item_category_code'], how='left')
df['new_item_cat_avg'] = (df['new_item_cat_avg']
                          .fillna(0)
                          .astype(np.float16))
df = lag_feature(df, [1, 2, 3], 'new_item_cat_avg')
df.drop(['new_item_cat_avg'], axis=1, inplace=True)

# For new items add avg category sales in a separate store for last 3 months
item_id_target_mean = df[df['item_first_interaction'] == 1].groupby(['date_block_num', 'item_category_code', 'shop_id'])['item_cnt_month'].mean().reset_index().rename(columns={"item_cnt_month": "new_item_shop_cat_avg"}, errors="raise")
df = pd.merge(df, item_id_target_mean, on=['date_block_num', 'item_category_code', 'shop_id'], how='left')
df['new_item_shop_cat_avg'] = (df['new_item_shop_cat_avg']
                               .fillna(0)
                               .astype(np.float16))
df = lag_feature(df, [1, 2, 3], 'new_item_shop_cat_avg')
df.drop(['new_item_shop_cat_avg'], axis=1, inplace=True)
_____no_output_____
MIT
{{cookiecutter.project_slug}}/notebooks/02_sj_salse_predict_ml.ipynb
juforg/cookiecutter-ds-py3tkinter
Add sales for the last three months for a similar item (the item with id = item_id - 1; a somewhat tricky feature, but it improved the score significantly).
def lag_feature_adv(df, lags, col):
    tmp = df[['date_block_num', 'shop_id', 'item_id', col]]
    for i in lags:
        shifted = tmp.copy()
        shifted.columns = ['date_block_num', 'shop_id', 'item_id', col + '_lag_' + str(i) + '_adv']
        shifted['date_block_num'] += i
        shifted['item_id'] -= 1
        df = pd.merge(df, shifted, on=['date_block_num', 'shop_id', 'item_id'], how='left')
        df[col + '_lag_' + str(i) + '_adv'] = df[col + '_lag_' + str(i) + '_adv'].astype('float16')
    return df

df = lag_feature_adv(df, [1, 2, 3], 'item_cnt_month')
_____no_output_____
MIT
{{cookiecutter.project_slug}}/notebooks/02_sj_salse_predict_ml.ipynb
juforg/cookiecutter-ds-py3tkinter
Remove data for the first three months
df.fillna(0, inplace=True)
df = df[(df['date_block_num'] > 2)]
df.head()
df.columns

# Save dataset
df.drop(['ID'], axis=1, inplace=True, errors='ignore')
df.to_pickle('../output/models/sales_df.pkl')
_____no_output_____
MIT
{{cookiecutter.project_slug}}/notebooks/02_sj_salse_predict_ml.ipynb
juforg/cookiecutter-ds-py3tkinter
Train model
df = pd.read_pickle('../output/models/sales_df.pkl')
df.info()

X_train = df[df.date_block_num < 33].drop(['item_cnt_month'], axis=1)
Y_train = df[df.date_block_num < 33]['item_cnt_month']
X_valid = df[df.date_block_num == 33].drop(['item_cnt_month'], axis=1)
Y_valid = df[df.date_block_num == 33]['item_cnt_month']
X_test = df[df.date_block_num == 34].drop(['item_cnt_month'], axis=1)

del df

feature_name = X_train.columns.tolist()

params = {
    'objective': 'mse',
    'metric': 'rmse',
    'num_leaves': 2 ** 7 - 1,
    'learning_rate': 0.005,
    'feature_fraction': 0.75,
    'bagging_fraction': 0.75,
    'bagging_freq': 5,
    'seed': 1,
    'verbose': 1
}

feature_name_indexes = [
    'country_part',
    'item_category_common',
    'item_category_code',
    'city_code',
]

lgb_train = lgb.Dataset(X_train[feature_name], Y_train)
lgb_eval = lgb.Dataset(X_valid[feature_name], Y_valid, reference=lgb_train)

evals_result = {}
gbm = lgb.train(
    params,
    lgb_train,
    num_boost_round=3000,
    valid_sets=(lgb_train, lgb_eval),
    feature_name=feature_name,
    categorical_feature=feature_name_indexes,
    verbose_eval=5,
    evals_result=evals_result,
    early_stopping_rounds=100)

lgb.plot_importance(
    gbm,
    max_num_features=50,
    importance_type='gain',
    figsize=(12, 8));
_____no_output_____
MIT
{{cookiecutter.project_slug}}/notebooks/02_sj_salse_predict_ml.ipynb
juforg/cookiecutter-ds-py3tkinter
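A minimal follow-up check (a sketch added here, not part of the original notebook) that reuses gbm, X_valid, Y_valid and feature_name from the cell above: recompute the month-33 RMSE on predictions clipped to the [0, 20] range the competition scores.

# Sketch: explicit validation RMSE on clipped predictions.
import numpy as np
from sklearn.metrics import mean_squared_error

valid_pred = gbm.predict(X_valid[feature_name], num_iteration=gbm.best_iteration).clip(0, 20)
rmse = np.sqrt(mean_squared_error(Y_valid.clip(0, 20), valid_pred))
print(f'validation RMSE: {rmse:.4f}')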
Stacking didn't work for me. I tried two approaches:

1. XGBoost + CatBoost + LightGBM at the first level and LinearRegression/LightGBM at the second level
2. LinearRegression + LightGBM + RandomForest at the first level and LinearRegression/LightGBM at the second level

A rough sketch of the second scheme appears after the submission cell below.
test = pd.read_csv('../data/0_raw/sales/test.csv.gz')

Y_test = gbm.predict(X_test[feature_name]).clip(0, 20)

submission = pd.DataFrame({
    "ID": test.index,
    "item_cnt_month": Y_test
})
submission.to_csv('../output/gbm_submission.csv', index=False)
_____no_output_____
MIT
{{cookiecutter.project_slug}}/notebooks/02_sj_salse_predict_ml.ipynb
juforg/cookiecutter-ds-py3tkinter
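For reference, here is a rough sketch of the second stacking scheme listed above, using month 33 as the hold-out for the second level. It is only an illustration of the idea under the variable names from the training cell (X_train, Y_train, X_valid, Y_valid), not the exact code that was tried.

# Sketch of two-level stacking: first-level models fit on months < 33,
# their month-33 predictions become meta-features for a linear second level.
import numpy as np
from sklearn.linear_model import LinearRegression
from lightgbm import LGBMRegressor

# Guard against any inf left over from the price-ratio features and keep
# sklearn happy with float32 instead of float16.
X_tr = X_train.replace([np.inf, -np.inf], 0).astype(np.float32)
X_va = X_valid.replace([np.inf, -np.inf], 0).astype(np.float32)

level1 = [LinearRegression(), LGBMRegressor(n_estimators=300, learning_rate=0.05)]
meta = np.column_stack([m.fit(X_tr, Y_train).predict(X_va) for m in level1])
level2 = LinearRegression().fit(meta, Y_valid)
print(level2.coef_, level2.intercept_)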
Displaying Surfaces

py3Dmol supports the following surface types:

* VDW - van der Waals surface
* MS - molecular surface
* SES - solvent excluded surface
* SAS - solvent accessible surface

A minimal example follows the import below.
import py3Dmol
_____no_output_____
Apache-2.0
1-3D-visualization/4-Surfaces.ipynb
NicholasAKovacs/mmtf-workshop
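As a minimal illustration of the surface types listed above (this example is not from the original notebook, and 1CRN is used only as a small test structure), a van der Waals surface can be added over a cartoon representation like this:

# Sketch: add a VDW surface to a small structure fetched by PDB id.
viewer = py3Dmol.view(query='pdb:1CRN')
viewer.setStyle({'cartoon': {'color': 'spectrum'}})
viewer.addSurface(py3Dmol.VDW, {'opacity': 0.7, 'color': 'white'})
viewer.show()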
Add surface

In the structure below (HLA complex with antigen peptide pVR), we add a solvent-excluded surface (SES) to the heavy chain to highlight the binding pocket for the antigen peptide (rendered as spheres).
viewer = py3Dmol.view(query='pdb:5XS3')

heavychain = {'chain': 'A'}
lightchain = {'chain': 'B'}
antigen = {'chain': 'C'}

viewer.setStyle(heavychain, {'cartoon': {'color': 'blue'}})
viewer.setStyle(lightchain, {'cartoon': {'color': 'yellow'}})
viewer.setStyle(antigen, {'sphere': {'colorscheme': 'orangeCarbon'}})

viewer.addSurface(py3Dmol.SES, {'opacity': 0.9, 'color': 'lightblue'}, heavychain)
viewer.show()
_____no_output_____
Apache-2.0
1-3D-visualization/4-Surfaces.ipynb
NicholasAKovacs/mmtf-workshop
NETWORK = "https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/ffhq.pkl" STEPS = 300 FPS = 30 FREEZE_STEPS = 30
_____no_output_____
MIT
StyleGAN2.ipynb
patprem/FaceMorphing
Upload Starting Image

Choose your starting image.
import os
from google.colab import files

uploaded = files.upload()

if len(uploaded) != 1:
    print("Upload exactly 1 file for source.")
else:
    for k, v in uploaded.items():
        _, ext = os.path.splitext(k)
        os.remove(k)
        SOURCE_NAME = f"source{ext}"
        open(SOURCE_NAME, 'wb').write(v)
_____no_output_____
MIT
StyleGAN2.ipynb
patprem/FaceMorphing
Also, choose your ending image.
uploaded = files.upload()

if len(uploaded) != 1:
    print("Upload exactly 1 file for target.")
else:
    for k, v in uploaded.items():
        _, ext = os.path.splitext(k)
        os.remove(k)
        TARGET_NAME = f"target{ext}"
        open(TARGET_NAME, 'wb').write(v)
_____no_output_____
MIT
StyleGAN2.ipynb
patprem/FaceMorphing
Install Software

Some software must be installed into Colab for this notebook to work. We are specifically using these technologies:

* [Training Generative Adversarial Networks with Limited Data](https://arxiv.org/abs/2006.06676) Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, Timo Aila
* [One millisecond face alignment with an ensemble of regression trees](https://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Kazemi_One_Millisecond_Face_2014_CVPR_paper.pdf) Vahid Kazemi, Josephine Sullivan
!wget http://dlib.net/files/shape_predictor_5_face_landmarks.dat.bz2
!bzip2 -d shape_predictor_5_face_landmarks.dat.bz2

import sys

!git clone https://github.com/NVlabs/stylegan2-ada-pytorch.git
!pip install ninja

sys.path.insert(0, "/content/stylegan2-ada-pytorch")
fatal: destination path 'stylegan2-ada-pytorch' already exists and is not an empty directory. Requirement already satisfied: ninja in /usr/local/lib/python3.7/dist-packages (1.10.2.3)
MIT
StyleGAN2.ipynb
patprem/FaceMorphing
Preprocess Images for Best StyleGAN Results

The following are helper functions for the preprocessing.
import cv2
import numpy as np
from PIL import Image
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_5_face_landmarks.dat')

def find_eyes(img):
    # Detect a single face and return the (x, y) centers of the two eyes.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    rects = detector(gray, 0)

    if len(rects) == 0:
        raise ValueError("No faces detected")
    elif len(rects) > 1:
        raise ValueError("Multiple faces detected")

    shape = predictor(gray, rects[0])
    features = []
    for i in range(0, 5):
        features.append((i, (shape.part(i).x, shape.part(i).y)))

    return (int(features[3][1][0] + features[2][1][0]) // 2, \
            int(features[3][1][1] + features[2][1][1]) // 2), \
           (int(features[1][1][0] + features[0][1][0]) // 2, \
            int(features[1][1][1] + features[0][1][1]) // 2)

def crop_stylegan(img):
    left_eye, right_eye = find_eyes(img)
    # Rescale so the horizontal eye-to-eye distance becomes 255 pixels.
    d = abs(right_eye[0] - left_eye[0])
    z = 255 / d
    ar = img.shape[0] / img.shape[1]
    w = img.shape[1] * z
    img2 = cv2.resize(img, (int(w), int(w * ar)))

    # Pad generously so the fixed-size crop never runs off the image.
    bordersize = 1024
    img3 = cv2.copyMakeBorder(
        img2,
        top=bordersize,
        bottom=bordersize,
        left=bordersize,
        right=bordersize,
        borderType=cv2.BORDER_REPLICATE)

    left_eye2, right_eye2 = find_eyes(img3)

    # Take a 1024x1024 window that places the left eye at (385, 490).
    crop1 = left_eye2[0] - 385
    crop0 = left_eye2[1] - 490
    return img3[crop0:crop0 + 1024, crop1:crop1 + 1024]
_____no_output_____
MIT
StyleGAN2.ipynb
patprem/FaceMorphing
The following will preprocess and crop your images. If you receive an error indicating multiple faces were found, try to crop your image better or obscure the background. If the program does not see a face, attempt to obtain a clearer, higher-resolution image.
from matplotlib import pyplot as plt
import cv2

image_source = cv2.imread(SOURCE_NAME)
if image_source is None:
    raise ValueError("Source image not found")

image_target = cv2.imread(TARGET_NAME)
if image_target is None:
    raise ValueError("Target image not found")

cropped_source = crop_stylegan(image_source)
cropped_target = crop_stylegan(image_target)

img = cv2.cvtColor(cropped_source, cv2.COLOR_BGR2RGB)
plt.imshow(img)
plt.title('source')
plt.show()

img = cv2.cvtColor(cropped_target, cv2.COLOR_BGR2RGB)
plt.imshow(img)
plt.title('target')
plt.show()

cv2.imwrite("cropped_source.png", cropped_source)
cv2.imwrite("cropped_target.png", cropped_target)

#print(find_eyes(cropped_source))
#print(find_eyes(cropped_target))
_____no_output_____
MIT
StyleGAN2.ipynb
patprem/FaceMorphing
Convert Source to a GAN

First, we convert the source to a GAN latent vector. This process will take several minutes.
cmd = f"python /content/stylegan2-ada-pytorch/projector.py --save-video 0 --num-steps 1000 --outdir=out_source --target=cropped_source.png --network={NETWORK}" !{cmd}
Loading networks from "https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/ffhq.pkl"... Computing W midpoint and stddev using 10000 samples... Setting up PyTorch plugin "bias_act_plugin"... Done. Setting up PyTorch plugin "upfirdn2d_plugin"... Done. step 1/1000: dist 0.64 loss 24569.53 step 2/1000: dist 0.52 loss 27642.74 step 3/1000: dist 0.57 loss 27167.59 step 4/1000: dist 0.57 loss 26253.69 step 5/1000: dist 0.59 loss 24958.55 step 6/1000: dist 0.50 loss 23356.06 step 7/1000: dist 0.55 loss 21513.72 step 8/1000: dist 0.45 loss 19485.98 step 9/1000: dist 0.59 loss 17342.24 step 10/1000: dist 0.55 loss 15145.47 step 11/1000: dist 0.49 loss 12946.06 step 12/1000: dist 0.53 loss 10817.87 step 13/1000: dist 0.52 loss 8803.69 step 14/1000: dist 0.50 loss 6948.97 step 15/1000: dist 0.55 loss 5315.48 step 16/1000: dist 0.49 loss 3971.71 step 17/1000: dist 0.51 loss 2944.83 step 18/1000: dist 0.53 loss 2212.46 step 19/1000: dist 0.43 loss 1761.67 step 20/1000: dist 0.44 loss 1569.43 step 21/1000: dist 0.42 loss 1600.47 step 22/1000: dist 0.41 loss 1789.50 step 23/1000: dist 0.41 loss 2053.34 step 24/1000: dist 0.45 loss 2325.79 step 25/1000: dist 0.39 loss 2539.81 step 26/1000: dist 0.43 loss 2634.70 step 27/1000: dist 0.45 loss 2600.18 step 28/1000: dist 0.44 loss 2475.98 step 29/1000: dist 0.38 loss 2313.33 step 30/1000: dist 0.43 loss 2118.93 step 31/1000: dist 0.38 loss 1879.80 step 32/1000: dist 0.39 loss 1625.18 step 33/1000: dist 0.41 loss 1387.66 step 34/1000: dist 0.42 loss 1185.58 step 35/1000: dist 0.40 loss 1027.94 step 36/1000: dist 0.43 loss 909.22 step 37/1000: dist 0.38 loss 829.03 step 38/1000: dist 0.38 loss 808.39 step 39/1000: dist 0.39 loss 816.71 step 40/1000: dist 0.36 loss 828.89 step 41/1000: dist 0.37 loss 824.38 step 42/1000: dist 0.38 loss 764.19 step 43/1000: dist 0.35 loss 660.69 step 44/1000: dist 0.34 loss 546.33 step 45/1000: dist 0.36 loss 413.25 step 46/1000: dist 0.38 loss 317.19 step 47/1000: dist 0.37 loss 283.69 step 48/1000: dist 0.38 loss 248.79 step 49/1000: dist 0.35 loss 281.98 step 50/1000: dist 0.35 loss 320.02 step 51/1000: dist 0.37 loss 388.44 step 52/1000: dist 0.36 loss 405.21 step 53/1000: dist 0.32 loss 394.16 step 54/1000: dist 0.40 loss 350.83 step 55/1000: dist 0.34 loss 276.97 step 56/1000: dist 0.34 loss 199.58 step 57/1000: dist 0.33 loss 127.22 step 58/1000: dist 0.34 loss 80.25 step 59/1000: dist 0.34 loss 52.98 step 60/1000: dist 0.31 loss 55.57 step 61/1000: dist 0.34 loss 60.26 step 62/1000: dist 0.37 loss 85.02 step 63/1000: dist 0.34 loss 109.96 step 64/1000: dist 0.33 loss 128.13 step 65/1000: dist 0.34 loss 127.04 step 66/1000: dist 0.31 loss 119.92 step 67/1000: dist 0.33 loss 96.65 step 68/1000: dist 0.34 loss 74.48 step 69/1000: dist 0.37 loss 46.98 step 70/1000: dist 0.32 loss 28.45 step 71/1000: dist 0.38 loss 18.21 step 72/1000: dist 0.32 loss 16.81 step 73/1000: dist 0.35 loss 18.24 step 74/1000: dist 0.31 loss 27.94 step 75/1000: dist 0.30 loss 32.35 step 76/1000: dist 0.33 loss 35.55 step 77/1000: dist 0.35 loss 34.99 step 78/1000: dist 0.32 loss 31.92 step 79/1000: dist 0.34 loss 26.74 step 80/1000: dist 0.33 loss 22.05 step 81/1000: dist 0.32 loss 15.75 step 82/1000: dist 0.32 loss 14.46 step 83/1000: dist 0.30 loss 15.93 step 84/1000: dist 0.34 loss 16.64 step 85/1000: dist 0.32 loss 14.85 step 86/1000: dist 0.31 loss 12.58 step 87/1000: dist 0.31 loss 11.42 step 88/1000: dist 0.29 loss 10.57 step 89/1000: dist 0.34 loss 9.21 step 90/1000: dist 0.32 loss 7.58 step 91/1000: dist 0.32 loss 6.95 step 
92/1000: dist 0.32 loss 5.87 step 93/1000: dist 0.32 loss 5.37 step 94/1000: dist 0.31 loss 6.09 step 95/1000: dist 0.31 loss 8.19 step 96/1000: dist 0.33 loss 9.66 step 97/1000: dist 0.30 loss 10.27 step 98/1000: dist 0.32 loss 9.74 step 99/1000: dist 0.29 loss 9.90 step 100/1000: dist 0.29 loss 11.77 step 101/1000: dist 0.31 loss 14.67 step 102/1000: dist 0.29 loss 12.99 step 103/1000: dist 0.32 loss 6.41 step 104/1000: dist 0.28 loss 6.58 step 105/1000: dist 0.30 loss 14.45 step 106/1000: dist 0.31 loss 16.89 step 107/1000: dist 0.30 loss 11.51 step 108/1000: dist 0.30 loss 9.70 step 109/1000: dist 0.29 loss 10.27 step 110/1000: dist 0.31 loss 7.18 step 111/1000: dist 0.31 loss 6.35 step 112/1000: dist 0.34 loss 9.71 step 113/1000: dist 0.29 loss 8.66 step 114/1000: dist 0.29 loss 6.15 step 115/1000: dist 0.30 loss 9.49 step 116/1000: dist 0.30 loss 12.30 step 117/1000: dist 0.31 loss 9.64 step 118/1000: dist 0.28 loss 6.67 step 119/1000: dist 0.29 loss 5.59 step 120/1000: dist 0.28 loss 4.67 step 121/1000: dist 0.30 loss 5.22 step 122/1000: dist 0.30 loss 6.40 step 123/1000: dist 0.28 loss 6.17 step 124/1000: dist 0.28 loss 7.35 step 125/1000: dist 0.32 loss 10.93 step 126/1000: dist 0.30 loss 12.18 step 127/1000: dist 0.32 loss 8.82 step 128/1000: dist 0.29 loss 4.46 step 129/1000: dist 0.29 loss 3.04 step 130/1000: dist 0.29 loss 4.23 step 131/1000: dist 0.30 loss 5.47 step 132/1000: dist 0.32 loss 4.74 step 133/1000: dist 0.29 loss 3.27 step 134/1000: dist 0.29 loss 3.45 step 135/1000: dist 0.28 loss 4.13 step 136/1000: dist 0.29 loss 3.15 step 137/1000: dist 0.28 loss 1.94 step 138/1000: dist 0.32 loss 2.75 step 139/1000: dist 0.28 loss 3.94 step 140/1000: dist 0.30 loss 3.72 step 141/1000: dist 0.29 loss 4.03 step 142/1000: dist 0.28 loss 7.67 step 143/1000: dist 0.30 loss 13.63 step 144/1000: dist 0.31 loss 17.92 step 145/1000: dist 0.35 loss 16.38 step 146/1000: dist 0.30 loss 9.34 step 147/1000: dist 0.30 loss 4.54 step 148/1000: dist 0.28 loss 5.87 step 149/1000: dist 0.29 loss 8.01 step 150/1000: dist 0.29 loss 6.58 step 151/1000: dist 0.27 loss 6.16 step 152/1000: dist 0.30 loss 10.04 step 153/1000: dist 0.27 loss 13.59 step 154/1000: dist 0.29 loss 16.03 step 155/1000: dist 0.28 loss 20.94 step 156/1000: dist 0.28 loss 21.56 step 157/1000: dist 0.28 loss 11.10 step 158/1000: dist 0.29 loss 2.93 step 159/1000: dist 0.28 loss 7.48 step 160/1000: dist 0.27 loss 12.92 step 161/1000: dist 0.29 loss 10.25 step 162/1000: dist 0.29 loss 9.80 step 163/1000: dist 0.27 loss 15.27 step 164/1000: dist 0.27 loss 16.10 step 165/1000: dist 0.28 loss 11.64 step 166/1000: dist 0.29 loss 10.11 step 167/1000: dist 0.29 loss 9.98 step 168/1000: dist 0.28 loss 7.51 step 169/1000: dist 0.28 loss 6.89 step 170/1000: dist 0.26 loss 7.22 step 171/1000: dist 0.28 loss 4.37 step 172/1000: dist 0.27 loss 3.35 step 173/1000: dist 0.28 loss 6.82 step 174/1000: dist 0.27 loss 7.90 step 175/1000: dist 0.27 loss 5.80 step 176/1000: dist 0.29 loss 7.36 step 177/1000: dist 0.26 loss 11.93 step 178/1000: dist 0.27 loss 13.67 step 179/1000: dist 0.27 loss 11.41 step 180/1000: dist 0.28 loss 7.97 step 181/1000: dist 0.27 loss 6.89 step 182/1000: dist 0.28 loss 10.19 step 183/1000: dist 0.27 loss 14.75 step 184/1000: dist 0.27 loss 14.63 step 185/1000: dist 0.27 loss 9.03 step 186/1000: dist 0.27 loss 4.84 step 187/1000: dist 0.26 loss 6.00 step 188/1000: dist 0.27 loss 9.26 step 189/1000: dist 0.29 loss 10.86 step 190/1000: dist 0.28 loss 10.24 step 191/1000: dist 0.26 loss 8.39 step 192/1000: dist 0.28 loss 
6.99 step 193/1000: dist 0.27 loss 7.95 step 194/1000: dist 0.26 loss 11.12 step 195/1000: dist 0.26 loss 13.72 step 196/1000: dist 0.27 loss 14.93 step 197/1000: dist 0.27 loss 15.84 step 198/1000: dist 0.27 loss 14.96 step 199/1000: dist 0.26 loss 10.89 step 200/1000: dist 0.27 loss 9.39 step 201/1000: dist 0.26 loss 14.98 step 202/1000: dist 0.29 loss 21.36 step 203/1000: dist 0.27 loss 18.34 step 204/1000: dist 0.27 loss 8.61 step 205/1000: dist 0.29 loss 7.13 step 206/1000: dist 0.27 loss 17.44 step 207/1000: dist 0.26 loss 29.47 step 208/1000: dist 0.27 loss 35.20 step 209/1000: dist 0.27 loss 28.47 step 210/1000: dist 0.26 loss 14.23 step 211/1000: dist 0.28 loss 14.35 step 212/1000: dist 0.26 loss 21.21 step 213/1000: dist 0.28 loss 12.26 step 214/1000: dist 0.26 loss 4.31 step 215/1000: dist 0.25 loss 13.05 step 216/1000: dist 0.27 loss 12.83 step 217/1000: dist 0.27 loss 4.95 step 218/1000: dist 0.26 loss 10.89 step 219/1000: dist 0.25 loss 12.28 step 220/1000: dist 0.25 loss 5.34 step 221/1000: dist 0.26 loss 10.46 step 222/1000: dist 0.26 loss 14.74 step 223/1000: dist 0.25 loss 10.68 step 224/1000: dist 0.26 loss 10.15 step 225/1000: dist 0.26 loss 7.35 step 226/1000: dist 0.26 loss 4.16 step 227/1000: dist 0.27 loss 7.75 step 228/1000: dist 0.26 loss 8.76 step 229/1000: dist 0.26 loss 7.65 step 230/1000: dist 0.27 loss 7.31 step 231/1000: dist 0.28 loss 2.95 step 232/1000: dist 0.25 loss 1.14 step 233/1000: dist 0.25 loss 3.84 step 234/1000: dist 0.26 loss 3.98 step 235/1000: dist 0.26 loss 3.64 step 236/1000: dist 0.26 loss 3.83 step 237/1000: dist 0.26 loss 3.02 step 238/1000: dist 0.26 loss 4.02 step 239/1000: dist 0.27 loss 5.47 step 240/1000: dist 0.25 loss 6.20 step 241/1000: dist 0.25 loss 8.73 step 242/1000: dist 0.25 loss 11.06 step 243/1000: dist 0.27 loss 10.66 step 244/1000: dist 0.26 loss 7.53 step 245/1000: dist 0.26 loss 2.97 step 246/1000: dist 0.25 loss 1.28 step 247/1000: dist 0.26 loss 3.80 step 248/1000: dist 0.26 loss 5.61 step 249/1000: dist 0.25 loss 3.95 step 250/1000: dist 0.25 loss 1.53 step 251/1000: dist 0.25 loss 1.28 step 252/1000: dist 0.26 loss 2.98 step 253/1000: dist 0.27 loss 3.58 step 254/1000: dist 0.25 loss 1.83 step 255/1000: dist 0.26 loss 0.80 step 256/1000: dist 0.25 loss 1.72 step 257/1000: dist 0.25 loss 2.22 step 258/1000: dist 0.25 loss 1.33 step 259/1000: dist 0.25 loss 0.77 step 260/1000: dist 0.25 loss 1.49 step 261/1000: dist 0.25 loss 2.48 step 262/1000: dist 0.25 loss 2.85 step 263/1000: dist 0.27 loss 3.60 step 264/1000: dist 0.25 loss 6.46 step 265/1000: dist 0.25 loss 11.65 step 266/1000: dist 0.25 loss 17.50 step 267/1000: dist 0.25 loss 21.21 step 268/1000: dist 0.25 loss 19.16 step 269/1000: dist 0.28 loss 13.71 step 270/1000: dist 0.25 loss 10.98 step 271/1000: dist 0.25 loss 10.90 step 272/1000: dist 0.25 loss 8.70 step 273/1000: dist 0.25 loss 4.63 step 274/1000: dist 0.25 loss 4.78 step 275/1000: dist 0.24 loss 8.88 step 276/1000: dist 0.24 loss 8.90 step 277/1000: dist 0.25 loss 5.05 step 278/1000: dist 0.24 loss 6.89 step 279/1000: dist 0.24 loss 15.41 step 280/1000: dist 0.24 loss 20.94 step 281/1000: dist 0.25 loss 18.48 step 282/1000: dist 0.24 loss 11.80 step 283/1000: dist 0.24 loss 6.38 step 284/1000: dist 0.24 loss 6.47 step 285/1000: dist 0.24 loss 11.40 step 286/1000: dist 0.24 loss 15.73 step 287/1000: dist 0.25 loss 16.75 step 288/1000: dist 0.24 loss 20.77 step 289/1000: dist 0.24 loss 28.41 step 290/1000: dist 0.24 loss 28.15 step 291/1000: dist 0.24 loss 19.34 step 292/1000: dist 0.25 loss 18.30 
step 293/1000: dist 0.25 loss 26.93 step 294/1000: dist 0.25 loss 28.72 step 295/1000: dist 0.24 loss 19.25 step 296/1000: dist 0.24 loss 11.74 step 297/1000: dist 0.24 loss 11.70 step 298/1000: dist 0.26 loss 11.16 step 299/1000: dist 0.24 loss 8.96 step 300/1000: dist 0.24 loss 10.46 step 301/1000: dist 0.26 loss 11.82 step 302/1000: dist 0.24 loss 8.48 step 303/1000: dist 0.26 loss 7.42 step 304/1000: dist 0.25 loss 13.78 step 305/1000: dist 0.26 loss 20.67 step 306/1000: dist 0.26 loss 21.33 step 307/1000: dist 0.24 loss 16.80 step 308/1000: dist 0.24 loss 10.06 step 309/1000: dist 0.24 loss 5.55 step 310/1000: dist 0.26 loss 6.88 step 311/1000: dist 0.25 loss 10.07 step 312/1000: dist 0.27 loss 8.14 step 313/1000: dist 0.25 loss 3.85 step 314/1000: dist 0.25 loss 4.87 step 315/1000: dist 0.25 loss 7.51 step 316/1000: dist 0.25 loss 5.37 step 317/1000: dist 0.24 loss 3.86 step 318/1000: dist 0.26 loss 8.04 step 319/1000: dist 0.26 loss 12.12 step 320/1000: dist 0.25 loss 14.70 step 321/1000: dist 0.25 loss 22.49 step 322/1000: dist 0.24 loss 32.50 step 323/1000: dist 0.25 loss 33.77 step 324/1000: dist 0.26 loss 25.98 step 325/1000: dist 0.25 loss 17.55 step 326/1000: dist 0.27 loss 12.76 step 327/1000: dist 0.25 loss 12.36 step 328/1000: dist 0.25 loss 16.27 step 329/1000: dist 0.25 loss 20.26 step 330/1000: dist 0.25 loss 21.06 step 331/1000: dist 0.25 loss 19.85 step 332/1000: dist 0.26 loss 16.32 step 333/1000: dist 0.25 loss 10.43 step 334/1000: dist 0.24 loss 6.92 step 335/1000: dist 0.24 loss 8.13 step 336/1000: dist 0.25 loss 10.03 step 337/1000: dist 0.24 loss 8.85 step 338/1000: dist 0.23 loss 6.06 step 339/1000: dist 0.24 loss 5.11 step 340/1000: dist 0.23 loss 6.57 step 341/1000: dist 0.24 loss 8.19 step 342/1000: dist 0.24 loss 8.89 step 343/1000: dist 0.25 loss 11.45 step 344/1000: dist 0.24 loss 18.44 step 345/1000: dist 0.24 loss 26.13 step 346/1000: dist 0.23 loss 26.16 step 347/1000: dist 0.25 loss 16.98 step 348/1000: dist 0.25 loss 10.33 step 349/1000: dist 0.25 loss 13.76 step 350/1000: dist 0.25 loss 16.96 step 351/1000: dist 0.23 loss 10.23 step 352/1000: dist 0.23 loss 2.40 step 353/1000: dist 0.24 loss 4.89 step 354/1000: dist 0.23 loss 9.91 step 355/1000: dist 0.24 loss 7.70 step 356/1000: dist 0.23 loss 5.10 step 357/1000: dist 0.23 loss 7.23 step 358/1000: dist 0.23 loss 7.69 step 359/1000: dist 0.24 loss 7.68 step 360/1000: dist 0.23 loss 14.25 step 361/1000: dist 0.23 loss 22.76 step 362/1000: dist 0.23 loss 27.02 step 363/1000: dist 0.23 loss 27.44 step 364/1000: dist 0.23 loss 21.31 step 365/1000: dist 0.23 loss 9.12 step 366/1000: dist 0.24 loss 4.08 step 367/1000: dist 0.24 loss 10.17 step 368/1000: dist 0.25 loss 14.86 step 369/1000: dist 0.24 loss 12.47 step 370/1000: dist 0.24 loss 11.04 step 371/1000: dist 0.24 loss 15.59 step 372/1000: dist 0.24 loss 21.70 step 373/1000: dist 0.23 loss 21.08 step 374/1000: dist 0.23 loss 11.89 step 375/1000: dist 0.23 loss 3.62 step 376/1000: dist 0.24 loss 5.20 step 377/1000: dist 0.24 loss 10.61 step 378/1000: dist 0.24 loss 10.26 step 379/1000: dist 0.24 loss 5.68 step 380/1000: dist 0.23 loss 4.73 step 381/1000: dist 0.23 loss 8.90 step 382/1000: dist 0.23 loss 13.03 step 383/1000: dist 0.22 loss 14.77 step 384/1000: dist 0.23 loss 16.90 step 385/1000: dist 0.24 loss 18.41 step 386/1000: dist 0.23 loss 12.95 step 387/1000: dist 0.22 loss 3.64 step 388/1000: dist 0.23 loss 1.90 step 389/1000: dist 0.23 loss 8.10 step 390/1000: dist 0.23 loss 10.58 step 391/1000: dist 0.23 loss 5.27 step 392/1000: dist 0.23 loss 
1.42 step 393/1000: dist 0.23 loss 3.97 step 394/1000: dist 0.23 loss 6.74 step 395/1000: dist 0.23 loss 5.49 step 396/1000: dist 0.23 loss 4.20 step 397/1000: dist 0.23 loss 7.03 step 398/1000: dist 0.23 loss 12.62 step 399/1000: dist 0.23 loss 17.72 step 400/1000: dist 0.23 loss 20.13 step 401/1000: dist 0.22 loss 16.85 step 402/1000: dist 0.22 loss 8.06 step 403/1000: dist 0.22 loss 2.19 step 404/1000: dist 0.22 loss 4.97 step 405/1000: dist 0.22 loss 9.83 step 406/1000: dist 0.23 loss 7.65 step 407/1000: dist 0.22 loss 2.01 step 408/1000: dist 0.22 loss 2.44 step 409/1000: dist 0.23 loss 6.09 step 410/1000: dist 0.22 loss 4.63 step 411/1000: dist 0.22 loss 1.21 step 412/1000: dist 0.22 loss 2.40 step 413/1000: dist 0.22 loss 4.24 step 414/1000: dist 0.22 loss 2.42 step 415/1000: dist 0.22 loss 1.49 step 416/1000: dist 0.21 loss 3.41 step 417/1000: dist 0.22 loss 4.07 step 418/1000: dist 0.23 loss 4.14 step 419/1000: dist 0.22 loss 7.32 step 420/1000: dist 0.22 loss 12.23 step 421/1000: dist 0.22 loss 17.08 step 422/1000: dist 0.22 loss 22.48 step 423/1000: dist 0.22 loss 23.91 step 424/1000: dist 0.22 loss 17.03 step 425/1000: dist 0.22 loss 7.85 step 426/1000: dist 0.21 loss 4.78 step 427/1000: dist 0.22 loss 6.84 step 428/1000: dist 0.22 loss 8.27 step 429/1000: dist 0.22 loss 7.21 step 430/1000: dist 0.22 loss 5.56 step 431/1000: dist 0.22 loss 4.85 step 432/1000: dist 0.21 loss 4.97 step 433/1000: dist 0.22 loss 5.12 step 434/1000: dist 0.22 loss 5.80 step 435/1000: dist 0.22 loss 7.97 step 436/1000: dist 0.22 loss 10.74 step 437/1000: dist 0.22 loss 13.37 step 438/1000: dist 0.22 loss 16.01 step 439/1000: dist 0.21 loss 15.78 step 440/1000: dist 0.22 loss 9.38 step 441/1000: dist 0.22 loss 1.87 step 442/1000: dist 0.22 loss 1.77 step 443/1000: dist 0.21 loss 7.18 step 444/1000: dist 0.22 loss 8.50 step 445/1000: dist 0.21 loss 3.64 step 446/1000: dist 0.21 loss 0.61 step 447/1000: dist 0.21 loss 3.14 step 448/1000: dist 0.21 loss 5.25 step 449/1000: dist 0.22 loss 2.99 step 450/1000: dist 0.21 loss 0.63 step 451/1000: dist 0.22 loss 1.87 step 452/1000: dist 0.21 loss 3.36 step 453/1000: dist 0.22 loss 2.02 step 454/1000: dist 0.22 loss 0.49 step 455/1000: dist 0.21 loss 1.35 step 456/1000: dist 0.22 loss 2.30 step 457/1000: dist 0.22 loss 1.29 step 458/1000: dist 0.21 loss 0.38 step 459/1000: dist 0.21 loss 1.11 step 460/1000: dist 0.21 loss 1.64 step 461/1000: dist 0.21 loss 0.89 step 462/1000: dist 0.21 loss 0.50 step 463/1000: dist 0.21 loss 1.24 step 464/1000: dist 0.21 loss 1.74 step 465/1000: dist 0.21 loss 1.74 step 466/1000: dist 0.21 loss 2.68 step 467/1000: dist 0.21 loss 5.13 step 468/1000: dist 0.22 loss 8.59 step 469/1000: dist 0.21 loss 13.25 step 470/1000: dist 0.21 loss 17.97 step 471/1000: dist 0.21 loss 18.72 step 472/1000: dist 0.21 loss 12.27 step 473/1000: dist 0.22 loss 5.68 step 474/1000: dist 0.21 loss 8.15 step 475/1000: dist 0.21 loss 18.08 step 476/1000: dist 0.22 loss 25.61 step 477/1000: dist 0.21 loss 26.45 step 478/1000: dist 0.21 loss 25.22 step 479/1000: dist 0.22 loss 24.59 step 480/1000: dist 0.22 loss 22.39 step 481/1000: dist 0.21 loss 22.72 step 482/1000: dist 0.21 loss 28.97 step 483/1000: dist 0.22 loss 31.27 step 484/1000: dist 0.22 loss 19.48 step 485/1000: dist 0.22 loss 8.35 step 486/1000: dist 0.22 loss 13.96 step 487/1000: dist 0.21 loss 26.12 step 488/1000: dist 0.22 loss 29.77 step 489/1000: dist 0.21 loss 31.39 step 490/1000: dist 0.21 loss 32.33 step 491/1000: dist 0.21 loss 19.30 step 492/1000: dist 0.21 loss 4.10 step 493/1000: 
dist 0.22 loss 10.64 step 494/1000: dist 0.21 loss 22.35 step 495/1000: dist 0.21 loss 13.20 step 496/1000: dist 0.21 loss 2.36 step 497/1000: dist 0.21 loss 11.19 step 498/1000: dist 0.21 loss 19.00 step 499/1000: dist 0.21 loss 14.68 step 500/1000: dist 0.21 loss 19.29 step 501/1000: dist 0.21 loss 35.39 step 502/1000: dist 0.21 loss 41.24 step 503/1000: dist 0.21 loss 30.73 step 504/1000: dist 0.21 loss 16.24 step 505/1000: dist 0.21 loss 8.37 step 506/1000: dist 0.21 loss 12.74 step 507/1000: dist 0.21 loss 22.70 step 508/1000: dist 0.21 loss 23.55 step 509/1000: dist 0.21 loss 17.83 step 510/1000: dist 0.21 loss 21.63 step 511/1000: dist 0.21 loss 28.18 step 512/1000: dist 0.21 loss 18.78 step 513/1000: dist 0.21 loss 3.13 step 514/1000: dist 0.21 loss 4.73 step 515/1000: dist 0.21 loss 15.44 step 516/1000: dist 0.20 loss 13.41 step 517/1000: dist 0.20 loss 4.28 step 518/1000: dist 0.20 loss 4.65 step 519/1000: dist 0.21 loss 10.01 step 520/1000: dist 0.21 loss 9.90 step 521/1000: dist 0.21 loss 8.84 step 522/1000: dist 0.21 loss 13.07 step 523/1000: dist 0.21 loss 19.42 step 524/1000: dist 0.20 loss 22.70 step 525/1000: dist 0.20 loss 19.35 step 526/1000: dist 0.21 loss 10.37 step 527/1000: dist 0.21 loss 4.25 step 528/1000: dist 0.20 loss 6.68 step 529/1000: dist 0.21 loss 12.51 step 530/1000: dist 0.20 loss 14.87 step 531/1000: dist 0.20 loss 13.81 step 532/1000: dist 0.21 loss 15.03 step 533/1000: dist 0.20 loss 19.86 step 534/1000: dist 0.21 loss 20.04 step 535/1000: dist 0.21 loss 10.33 step 536/1000: dist 0.20 loss 1.39 step 537/1000: dist 0.21 loss 4.11 step 538/1000: dist 0.20 loss 10.67 step 539/1000: dist 0.20 loss 9.11 step 540/1000: dist 0.20 loss 3.17 step 541/1000: dist 0.21 loss 2.40 step 542/1000: dist 0.21 loss 5.16 step 543/1000: dist 0.21 loss 5.18 step 544/1000: dist 0.21 loss 3.03 step 545/1000: dist 0.20 loss 2.24 step 546/1000: dist 0.20 loss 2.83 step 547/1000: dist 0.20 loss 3.01 step 548/1000: dist 0.20 loss 2.39 step 549/1000: dist 0.20 loss 1.78 step 550/1000: dist 0.20 loss 1.90 step 551/1000: dist 0.20 loss 2.44 step 552/1000: dist 0.20 loss 2.59 step 553/1000: dist 0.20 loss 2.85 step 554/1000: dist 0.20 loss 4.64 step 555/1000: dist 0.20 loss 8.37 step 556/1000: dist 0.20 loss 14.01 step 557/1000: dist 0.20 loss 21.97 step 558/1000: dist 0.20 loss 29.33 step 559/1000: dist 0.20 loss 28.73 step 560/1000: dist 0.20 loss 20.77 step 561/1000: dist 0.20 loss 18.42 step 562/1000: dist 0.20 loss 23.37 step 563/1000: dist 0.20 loss 20.45 step 564/1000: dist 0.21 loss 7.26 step 565/1000: dist 0.20 loss 3.21 step 566/1000: dist 0.20 loss 13.95 step 567/1000: dist 0.20 loss 18.83 step 568/1000: dist 0.20 loss 11.43 step 569/1000: dist 0.20 loss 11.67 step 570/1000: dist 0.21 loss 22.04 step 571/1000: dist 0.20 loss 22.34 step 572/1000: dist 0.20 loss 11.08 step 573/1000: dist 0.20 loss 4.80 step 574/1000: dist 0.20 loss 5.54 step 575/1000: dist 0.20 loss 6.82 step 576/1000: dist 0.20 loss 8.49 step 577/1000: dist 0.20 loss 8.09 step 578/1000: dist 0.20 loss 4.01 step 579/1000: dist 0.21 loss 2.28 step 580/1000: dist 0.20 loss 4.52 step 581/1000: dist 0.20 loss 5.59 step 582/1000: dist 0.20 loss 4.01 step 583/1000: dist 0.20 loss 2.31 step 584/1000: dist 0.20 loss 2.80 step 585/1000: dist 0.20 loss 5.70 step 586/1000: dist 0.20 loss 7.95 step 587/1000: dist 0.20 loss 10.06 step 588/1000: dist 0.20 loss 17.79 step 589/1000: dist 0.20 loss 29.40 step 590/1000: dist 0.20 loss 31.06 step 591/1000: dist 0.20 loss 17.57 step 592/1000: dist 0.20 loss 3.14 step 593/1000: 
dist 0.20 loss 4.75 step 594/1000: dist 0.20 loss 14.58 step 595/1000: dist 0.20 loss 12.48 step 596/1000: dist 0.20 loss 2.05 step 597/1000: dist 0.20 loss 3.60 step 598/1000: dist 0.20 loss 10.30 step 599/1000: dist 0.20 loss 5.32 step 600/1000: dist 0.20 loss 0.58 step 601/1000: dist 0.20 loss 5.65 step 602/1000: dist 0.19 loss 5.91 step 603/1000: dist 0.20 loss 1.06 step 604/1000: dist 0.20 loss 3.21 step 605/1000: dist 0.20 loss 5.28 step 606/1000: dist 0.20 loss 2.65 step 607/1000: dist 0.20 loss 4.39 step 608/1000: dist 0.20 loss 8.00 step 609/1000: dist 0.20 loss 9.54 step 610/1000: dist 0.20 loss 15.93 step 611/1000: dist 0.20 loss 25.47 step 612/1000: dist 0.20 loss 31.15 step 613/1000: dist 0.20 loss 30.34 step 614/1000: dist 0.20 loss 20.05 step 615/1000: dist 0.20 loss 6.91 step 616/1000: dist 0.19 loss 5.53 step 617/1000: dist 0.20 loss 11.91 step 618/1000: dist 0.20 loss 12.20 step 619/1000: dist 0.20 loss 9.48 step 620/1000: dist 0.20 loss 13.73 step 621/1000: dist 0.20 loss 22.44 step 622/1000: dist 0.20 loss 28.32 step 623/1000: dist 0.19 loss 30.48 step 624/1000: dist 0.19 loss 29.10 step 625/1000: dist 0.19 loss 21.16 step 626/1000: dist 0.19 loss 14.49 step 627/1000: dist 0.19 loss 24.16 step 628/1000: dist 0.20 loss 44.09 step 629/1000: dist 0.19 loss 50.18 step 630/1000: dist 0.19 loss 40.78 step 631/1000: dist 0.19 loss 35.55 step 632/1000: dist 0.19 loss 31.29 step 633/1000: dist 0.19 loss 18.04 step 634/1000: dist 0.19 loss 12.32 step 635/1000: dist 0.19 loss 19.60 step 636/1000: dist 0.19 loss 19.46 step 637/1000: dist 0.19 loss 9.39 step 638/1000: dist 0.19 loss 8.80 step 639/1000: dist 0.19 loss 13.70 step 640/1000: dist 0.19 loss 9.23 step 641/1000: dist 0.19 loss 5.33 step 642/1000: dist 0.19 loss 9.08 step 643/1000: dist 0.20 loss 7.76 step 644/1000: dist 0.19 loss 3.66 step 645/1000: dist 0.20 loss 6.59 step 646/1000: dist 0.20 loss 7.39 step 647/1000: dist 0.19 loss 4.09 step 648/1000: dist 0.19 loss 6.68 step 649/1000: dist 0.19 loss 9.72 step 650/1000: dist 0.20 loss 9.29 step 651/1000: dist 0.19 loss 13.37 step 652/1000: dist 0.19 loss 17.01 step 653/1000: dist 0.19 loss 14.20 step 654/1000: dist 0.19 loss 11.40 step 655/1000: dist 0.19 loss 10.01 step 656/1000: dist 0.19 loss 10.03 step 657/1000: dist 0.19 loss 16.46 step 658/1000: dist 0.19 loss 22.85 step 659/1000: dist 0.19 loss 22.69 step 660/1000: dist 0.19 loss 24.12 step 661/1000: dist 0.19 loss 31.11 step 662/1000: dist 0.19 loss 36.68 step 663/1000: dist 0.19 loss 34.13 step 664/1000: dist 0.19 loss 22.20 step 665/1000: dist 0.19 loss 12.63 step 666/1000: dist 0.19 loss 18.53 step 667/1000: dist 0.19 loss 32.65 step 668/1000: dist 0.19 loss 36.18 step 669/1000: dist 0.19 loss 24.75 step 670/1000: dist 0.19 loss 10.51 step 671/1000: dist 0.20 loss 7.93 step 672/1000: dist 0.19 loss 14.91 step 673/1000: dist 0.19 loss 18.84 step 674/1000: dist 0.19 loss 18.05 step 675/1000: dist 0.20 loss 20.94 step 676/1000: dist 0.19 loss 29.30 step 677/1000: dist 0.19 loss 33.39 step 678/1000: dist 0.19 loss 24.18 step 679/1000: dist 0.19 loss 8.93 step 680/1000: dist 0.19 loss 5.25 step 681/1000: dist 0.19 loss 13.43 step 682/1000: dist 0.19 loss 15.88 step 683/1000: dist 0.19 loss 7.32 step 684/1000: dist 0.19 loss 3.30 step 685/1000: dist 0.19 loss 8.71 step 686/1000: dist 0.19 loss 9.38 step 687/1000: dist 0.19 loss 3.24 step 688/1000: dist 0.19 loss 3.12 step 689/1000: dist 0.19 loss 6.93 step 690/1000: dist 0.19 loss 4.45 step 691/1000: dist 0.19 loss 1.48 step 692/1000: dist 0.19 loss 3.86 step 
693/1000: dist 0.19 loss 4.11 step 694/1000: dist 0.19 loss 1.57 step 695/1000: dist 0.19 loss 2.23 step 696/1000: dist 0.19 loss 2.94 step 697/1000: dist 0.19 loss 1.43 step 698/1000: dist 0.19 loss 1.67 step 699/1000: dist 0.19 loss 2.20 step 700/1000: dist 0.19 loss 0.99 step 701/1000: dist 0.19 loss 1.19 step 702/1000: dist 0.19 loss 1.86 step 703/1000: dist 0.19 loss 0.78 step 704/1000: dist 0.19 loss 0.68 step 705/1000: dist 0.19 loss 1.50 step 706/1000: dist 0.19 loss 0.80 step 707/1000: dist 0.19 loss 0.38 step 708/1000: dist 0.19 loss 1.06 step 709/1000: dist 0.19 loss 0.81 step 710/1000: dist 0.19 loss 0.34 step 711/1000: dist 0.19 loss 0.68 step 712/1000: dist 0.19 loss 0.66 step 713/1000: dist 0.19 loss 0.40 step 714/1000: dist 0.19 loss 0.54 step 715/1000: dist 0.19 loss 0.47 step 716/1000: dist 0.19 loss 0.34 step 717/1000: dist 0.19 loss 0.51 step 718/1000: dist 0.19 loss 0.42 step 719/1000: dist 0.19 loss 0.28 step 720/1000: dist 0.19 loss 0.45 step 721/1000: dist 0.19 loss 0.49 step 722/1000: dist 0.19 loss 0.43 step 723/1000: dist 0.19 loss 0.70 step 724/1000: dist 0.19 loss 1.09 step 725/1000: dist 0.19 loss 1.68 step 726/1000: dist 0.19 loss 3.08 step 727/1000: dist 0.19 loss 5.64 step 728/1000: dist 0.19 loss 9.91 step 729/1000: dist 0.19 loss 16.09 step 730/1000: dist 0.19 loss 22.22 step 731/1000: dist 0.19 loss 22.00 step 732/1000: dist 0.19 loss 12.77 step 733/1000: dist 0.19 loss 2.04 step 734/1000: dist 0.19 loss 1.52 step 735/1000: dist 0.19 loss 8.65 step 736/1000: dist 0.19 loss 10.33 step 737/1000: dist 0.20 loss 3.74 step 738/1000: dist 0.19 loss 0.40 step 739/1000: dist 0.20 loss 4.81 step 740/1000: dist 0.20 loss 6.79 step 741/1000: dist 0.19 loss 2.75 step 742/1000: dist 0.20 loss 2.11 step 743/1000: dist 0.19 loss 6.83 step 744/1000: dist 0.20 loss 9.30 step 745/1000: dist 0.19 loss 10.75 step 746/1000: dist 0.19 loss 17.31 step 747/1000: dist 0.20 loss 23.07 step 748/1000: dist 0.19 loss 18.51 step 749/1000: dist 0.19 loss 7.66 step 750/1000: dist 0.19 loss 2.05 step 751/1000: dist 0.19 loss 4.31 step 752/1000: dist 0.19 loss 9.06 step 753/1000: dist 0.19 loss 9.68 step 754/1000: dist 0.19 loss 4.25 step 755/1000: dist 0.19 loss 0.59 step 756/1000: dist 0.19 loss 4.03 step 757/1000: dist 0.19 loss 6.96 step 758/1000: dist 0.19 loss 3.32 step 759/1000: dist 0.19 loss 0.21 step 760/1000: dist 0.19 loss 2.69 step 761/1000: dist 0.19 loss 4.42 step 762/1000: dist 0.19 loss 1.90 step 763/1000: dist 0.19 loss 0.44 step 764/1000: dist 0.19 loss 2.07 step 765/1000: dist 0.19 loss 2.65 step 766/1000: dist 0.19 loss 1.15 step 767/1000: dist 0.19 loss 0.61 step 768/1000: dist 0.19 loss 1.60 step 769/1000: dist 0.19 loss 1.80 step 770/1000: dist 0.19 loss 0.93 step 771/1000: dist 0.18 loss 0.92 step 772/1000: dist 0.19 loss 1.89 step 773/1000: dist 0.19 loss 2.19 step 774/1000: dist 0.19 loss 2.12 step 775/1000: dist 0.19 loss 3.23 step 776/1000: dist 0.18 loss 4.86 step 777/1000: dist 0.19 loss 5.75 step 778/1000: dist 0.18 loss 6.38 step 779/1000: dist 0.18 loss 6.87 step 780/1000: dist 0.18 loss 5.61 step 781/1000: dist 0.18 loss 2.92 step 782/1000: dist 0.18 loss 0.97 step 783/1000: dist 0.18 loss 0.59 step 784/1000: dist 0.18 loss 1.16 step 785/1000: dist 0.18 loss 2.17 step 786/1000: dist 0.18 loss 2.62 step 787/1000: dist 0.18 loss 1.67 step 788/1000: dist 0.18 loss 0.43 step 789/1000: dist 0.18 loss 0.35 step 790/1000: dist 0.18 loss 1.05 step 791/1000: dist 0.18 loss 1.36 step 792/1000: dist 0.18 loss 1.00 step 793/1000: dist 0.18 loss 0.44 step 794/1000: 
dist 0.18 loss 0.29 step 795/1000: dist 0.18 loss 0.65 step 796/1000: dist 0.18 loss 0.87 step 797/1000: dist 0.19 loss 0.55 step 798/1000: dist 0.18 loss 0.21 step 799/1000: dist 0.18 loss 0.35 step 800/1000: dist 0.19 loss 0.60 step 801/1000: dist 0.18 loss 0.46 step 802/1000: dist 0.18 loss 0.23 step 803/1000: dist 0.18 loss 0.27 step 804/1000: dist 0.18 loss 0.40 step 805/1000: dist 0.18 loss 0.37 step 806/1000: dist 0.18 loss 0.24 step 807/1000: dist 0.18 loss 0.22 step 808/1000: dist 0.18 loss 0.32 step 809/1000: dist 0.18 loss 0.31 step 810/1000: dist 0.18 loss 0.20 step 811/1000: dist 0.18 loss 0.22 step 812/1000: dist 0.18 loss 0.29 step 813/1000: dist 0.18 loss 0.24 step 814/1000: dist 0.18 loss 0.19 step 815/1000: dist 0.18 loss 0.23 step 816/1000: dist 0.18 loss 0.24 step 817/1000: dist 0.19 loss 0.20 step 818/1000: dist 0.18 loss 0.20 step 819/1000: dist 0.19 loss 0.22 step 820/1000: dist 0.18 loss 0.21 step 821/1000: dist 0.18 loss 0.19 step 822/1000: dist 0.18 loss 0.20 step 823/1000: dist 0.18 loss 0.21 step 824/1000: dist 0.18 loss 0.19 step 825/1000: dist 0.18 loss 0.19 step 826/1000: dist 0.18 loss 0.20 step 827/1000: dist 0.18 loss 0.19 step 828/1000: dist 0.18 loss 0.19 step 829/1000: dist 0.18 loss 0.20 step 830/1000: dist 0.18 loss 0.19 step 831/1000: dist 0.18 loss 0.19 step 832/1000: dist 0.18 loss 0.19 step 833/1000: dist 0.18 loss 0.18 step 834/1000: dist 0.18 loss 0.19 step 835/1000: dist 0.18 loss 0.19 step 836/1000: dist 0.18 loss 0.18 step 837/1000: dist 0.18 loss 0.18 step 838/1000: dist 0.18 loss 0.19 step 839/1000: dist 0.18 loss 0.18 step 840/1000: dist 0.18 loss 0.18 step 841/1000: dist 0.18 loss 0.18 step 842/1000: dist 0.18 loss 0.18 step 843/1000: dist 0.18 loss 0.18 step 844/1000: dist 0.18 loss 0.18 step 845/1000: dist 0.18 loss 0.18 step 846/1000: dist 0.18 loss 0.18 step 847/1000: dist 0.18 loss 0.18 step 848/1000: dist 0.18 loss 0.18 step 849/1000: dist 0.18 loss 0.18 step 850/1000: dist 0.18 loss 0.18 step 851/1000: dist 0.18 loss 0.18 step 852/1000: dist 0.18 loss 0.18 step 853/1000: dist 0.18 loss 0.18 step 854/1000: dist 0.18 loss 0.18 step 855/1000: dist 0.18 loss 0.18 step 856/1000: dist 0.18 loss 0.18 step 857/1000: dist 0.18 loss 0.18 step 858/1000: dist 0.18 loss 0.18 step 859/1000: dist 0.18 loss 0.18 step 860/1000: dist 0.18 loss 0.18 step 861/1000: dist 0.18 loss 0.18 step 862/1000: dist 0.18 loss 0.18 step 863/1000: dist 0.18 loss 0.18 step 864/1000: dist 0.18 loss 0.18 step 865/1000: dist 0.18 loss 0.18 step 866/1000: dist 0.18 loss 0.18 step 867/1000: dist 0.18 loss 0.18 step 868/1000: dist 0.18 loss 0.18 step 869/1000: dist 0.18 loss 0.18 step 870/1000: dist 0.18 loss 0.18 step 871/1000: dist 0.18 loss 0.18 step 872/1000: dist 0.18 loss 0.18 step 873/1000: dist 0.18 loss 0.18 step 874/1000: dist 0.18 loss 0.18 step 875/1000: dist 0.18 loss 0.18 step 876/1000: dist 0.18 loss 0.18 step 877/1000: dist 0.18 loss 0.18 step 878/1000: dist 0.18 loss 0.18 step 879/1000: dist 0.18 loss 0.18 step 880/1000: dist 0.18 loss 0.18 step 881/1000: dist 0.18 loss 0.18 step 882/1000: dist 0.18 loss 0.18 step 883/1000: dist 0.18 loss 0.18 step 884/1000: dist 0.18 loss 0.18 step 885/1000: dist 0.18 loss 0.18 step 886/1000: dist 0.18 loss 0.18 step 887/1000: dist 0.18 loss 0.18 step 888/1000: dist 0.18 loss 0.18 step 889/1000: dist 0.18 loss 0.18 step 890/1000: dist 0.18 loss 0.18 step 891/1000: dist 0.18 loss 0.18 step 892/1000: dist 0.18 loss 0.18 step 893/1000: dist 0.18 loss 0.18 step 894/1000: dist 0.18 loss 0.18 step 895/1000: dist 0.18 loss 0.18 
step 896/1000: dist 0.18 loss 0.18 step 897/1000: dist 0.18 loss 0.18 step 898/1000: dist 0.18 loss 0.18 step 899/1000: dist 0.18 loss 0.18 step 900/1000: dist 0.18 loss 0.18 step 901/1000: dist 0.18 loss 0.18 step 902/1000: dist 0.18 loss 0.18 step 903/1000: dist 0.18 loss 0.18 step 904/1000: dist 0.18 loss 0.18 step 905/1000: dist 0.18 loss 0.18 step 906/1000: dist 0.18 loss 0.18 step 907/1000: dist 0.18 loss 0.18 step 908/1000: dist 0.18 loss 0.18 step 909/1000: dist 0.18 loss 0.18 step 910/1000: dist 0.18 loss 0.18 step 911/1000: dist 0.18 loss 0.18 step 912/1000: dist 0.18 loss 0.18 step 913/1000: dist 0.18 loss 0.18 step 914/1000: dist 0.18 loss 0.18 step 915/1000: dist 0.18 loss 0.18 step 916/1000: dist 0.18 loss 0.18 step 917/1000: dist 0.18 loss 0.18 step 918/1000: dist 0.18 loss 0.18 step 919/1000: dist 0.18 loss 0.18 step 920/1000: dist 0.18 loss 0.18 step 921/1000: dist 0.18 loss 0.18 step 922/1000: dist 0.18 loss 0.18 step 923/1000: dist 0.18 loss 0.18 step 924/1000: dist 0.18 loss 0.18 step 925/1000: dist 0.18 loss 0.18 step 926/1000: dist 0.18 loss 0.18 step 927/1000: dist 0.18 loss 0.18 step 928/1000: dist 0.18 loss 0.18 step 929/1000: dist 0.18 loss 0.18 step 930/1000: dist 0.18 loss 0.18 step 931/1000: dist 0.18 loss 0.18 step 932/1000: dist 0.18 loss 0.18 step 933/1000: dist 0.18 loss 0.18 step 934/1000: dist 0.18 loss 0.18 step 935/1000: dist 0.18 loss 0.18 step 936/1000: dist 0.18 loss 0.18 step 937/1000: dist 0.18 loss 0.18 step 938/1000: dist 0.18 loss 0.18 step 939/1000: dist 0.18 loss 0.18 step 940/1000: dist 0.18 loss 0.18 step 941/1000: dist 0.18 loss 0.18 step 942/1000: dist 0.18 loss 0.18 step 943/1000: dist 0.18 loss 0.18 step 944/1000: dist 0.18 loss 0.18 step 945/1000: dist 0.18 loss 0.18 step 946/1000: dist 0.18 loss 0.18 step 947/1000: dist 0.18 loss 0.18 step 948/1000: dist 0.18 loss 0.18 step 949/1000: dist 0.18 loss 0.18 step 950/1000: dist 0.18 loss 0.18 step 951/1000: dist 0.18 loss 0.18 step 952/1000: dist 0.18 loss 0.18 step 953/1000: dist 0.18 loss 0.18 step 954/1000: dist 0.18 loss 0.18 step 955/1000: dist 0.18 loss 0.18 step 956/1000: dist 0.18 loss 0.18 step 957/1000: dist 0.18 loss 0.18 step 958/1000: dist 0.18 loss 0.18 step 959/1000: dist 0.18 loss 0.18 step 960/1000: dist 0.18 loss 0.18 step 961/1000: dist 0.18 loss 0.18 step 962/1000: dist 0.18 loss 0.18 step 963/1000: dist 0.18 loss 0.18 step 964/1000: dist 0.18 loss 0.18 step 965/1000: dist 0.18 loss 0.18 step 966/1000: dist 0.18 loss 0.18 step 967/1000: dist 0.18 loss 0.18 step 968/1000: dist 0.18 loss 0.18 step 969/1000: dist 0.18 loss 0.18 step 970/1000: dist 0.18 loss 0.18 step 971/1000: dist 0.18 loss 0.18 step 972/1000: dist 0.18 loss 0.18 step 973/1000: dist 0.18 loss 0.18 step 974/1000: dist 0.18 loss 0.18 step 975/1000: dist 0.18 loss 0.18 step 976/1000: dist 0.18 loss 0.18 step 977/1000: dist 0.18 loss 0.18 step 978/1000: dist 0.18 loss 0.18 step 979/1000: dist 0.18 loss 0.18 step 980/1000: dist 0.18 loss 0.18 step 981/1000: dist 0.18 loss 0.18 step 982/1000: dist 0.18 loss 0.18 step 983/1000: dist 0.18 loss 0.18 step 984/1000: dist 0.18 loss 0.18 step 985/1000: dist 0.18 loss 0.18 step 986/1000: dist 0.18 loss 0.18 step 987/1000: dist 0.18 loss 0.18 step 988/1000: dist 0.18 loss 0.18 step 989/1000: dist 0.18 loss 0.18 step 990/1000: dist 0.18 loss 0.18 step 991/1000: dist 0.18 loss 0.18 step 992/1000: dist 0.18 loss 0.18 step 993/1000: dist 0.18 loss 0.18 step 994/1000: dist 0.18 loss 0.18 step 995/1000: dist 0.18 loss 0.18 step 996/1000: dist 0.18 loss 0.18 step 997/1000: dist 
0.18 loss 0.18 step 998/1000: dist 0.18 loss 0.18 step 999/1000: dist 0.18 loss 0.18 step 1000/1000: dist 0.18 loss 0.18 Elapsed: 735.9 s
MIT
StyleGAN2.ipynb
patprem/FaceMorphing
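Assuming the default behaviour of projector.py in stylegan2-ada-pytorch, the optimized latent is written to projected_w.npz (key 'w') inside the output directory. A quick, optional check that the source latent has the expected W+ shape, which the later morphing steps interpolate toward the target's latent:

# Sketch (assumes projector.py's default output file and key):
import numpy as np

w_source = np.load('out_source/projected_w.npz')['w']
print(w_source.shape)  # expected (1, 18, 512) for the 1024x1024 FFHQ network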
Convert Target to a GAN

Next, we convert the target to a GAN latent vector. This process will also take several minutes.
cmd = f"python /content/stylegan2-ada-pytorch/projector.py --save-video 0 --num-steps 1000 --outdir=out_target --target=cropped_target.png --network={NETWORK}" !{cmd}
Loading networks from "https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/ffhq.pkl"...
Computing W midpoint and stddev using 10000 samples...
Setting up PyTorch plugin "bias_act_plugin"... Done.
Setting up PyTorch plugin "upfirdn2d_plugin"... Done.
step 1/1000: dist 0.55 loss 24569.43
step 2/1000: dist 0.59 loss 27642.82
step 3/1000: dist 0.53 loss 27167.56
[... per-step projection log truncated: dist falls from ~0.55 to 0.09 and loss from ~24569 to 0.09 over the 1000 steps ...]
step 998/1000: dist 0.09 loss 0.09
step 999/1000: dist 0.09 loss 0.09
step 1000/1000: dist 0.09 loss 0.09
Elapsed: 733.0 s
MIT
StyleGAN2.ipynb
patprem/FaceMorphing
With the conversion complete, let's have a look at the two GAN projections.
import cv2
import matplotlib.pyplot as plt

# Display the GAN projection of the source image
img_gan_source = cv2.imread('/content/out_source/proj.png')
img = cv2.cvtColor(img_gan_source, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; convert for matplotlib
plt.imshow(img)
plt.title('source-gan')
plt.show()

# Display the GAN projection of the target image
img_gan_target = cv2.imread('/content/out_target/proj.png')
img = cv2.cvtColor(img_gan_target, cv2.COLOR_BGR2RGB)
plt.imshow(img)
plt.title('target-gan')
plt.show()
_____no_output_____
MIT
StyleGAN2.ipynb
patprem/FaceMorphing
Build the Video
The following code builds a transition video between the two latent vectors previously obtained.
import torch
import dnnlib
import legacy
import PIL.Image
import numpy as np
import imageio
from tqdm.notebook import tqdm

# Load the two projected latent vectors (w) produced by the projection step above
lvec1 = np.load('/content/out_source/projected_w.npz')['w']
lvec2 = np.load('/content/out_target/projected_w.npz')['w']

network_pkl = "https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/ffhq.pkl"
device = torch.device('cuda')
with dnnlib.util.open_url(network_pkl) as fp:
    G = legacy.load_network_pkl(fp)['G_ema'].requires_grad_(False).to(device)  # type: ignore

# Walk from the source latent to the target latent in STEPS equal increments
diff = lvec2 - lvec1
step = diff / STEPS
current = lvec1.copy()
target_uint8 = np.array([1024, 1024, 3], dtype=np.uint8)

video = imageio.get_writer('/content/movie.mp4', mode='I', fps=FPS, codec='libx264', bitrate='16M')

for j in tqdm(range(STEPS)):
    z = torch.from_numpy(current).to(device)
    synth_image = G.synthesis(z, noise_mode='const')
    synth_image = (synth_image + 1) * (255/2)
    synth_image = synth_image.permute(0, 2, 3, 1).clamp(0, 255).to(torch.uint8)[0].cpu().numpy()

    # Hold the first and last frames for FREEZE_STEPS frames so the endpoints linger
    repeat = FREEZE_STEPS if j == 0 or j == (STEPS - 1) else 1
    for i in range(repeat):
        video.append_data(synth_image)

    current = current + step

video.close()
_____no_output_____
MIT
StyleGAN2.ipynb
patprem/FaceMorphing
Download your Video
If you made it through all of these steps, you are now ready to download your video.
from google.colab import files

files.download("movie.mp4")
_____no_output_____
MIT
StyleGAN2.ipynb
patprem/FaceMorphing
k-Nearest Neighbor (kNN) exercise
*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*
The kNN classifier consists of two stages:
- During training, the classifier takes the training data and simply remembers it
- During testing, kNN classifies every test image by comparing to all training images and transferring the labels of the k most similar training examples
- The value of k is cross-validated
In this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code.
# Run some setup code for this notebook.

import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt

# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2

# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)

# As a sanity check, we print out the size of the training and test data.
print 'Training data shape: ', X_train.shape
print 'Training labels shape: ', y_train.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape

# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
    idxs = np.flatnonzero(y_train == y)
    idxs = np.random.choice(idxs, samples_per_class, replace=False)
    for i, idx in enumerate(idxs):
        plt_idx = i * num_classes + y + 1
        plt.subplot(samples_per_class, num_classes, plt_idx)
        plt.imshow(X_train[idx].astype('uint8'))
        plt.axis('off')
        if i == 0:
            plt.title(cls)
plt.show()

# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]

# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print X_train.shape, X_test.shape

from cs231n.classifiers import KNearestNeighbor

# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
_____no_output_____
MIT
assignment1/knn.ipynb
meijun/cs231n-assignment
We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps:
1. First we must compute the distances between all test examples and all train examples.
2. Given these distances, for each test example we find the k nearest examples and have them vote for the label.
Let's begin with computing the distance matrix between all training and test examples. For example, if there are **Ntr** training examples and **Nte** test examples, this stage should result in a **Nte x Ntr** matrix where each element (i,j) is the distance between the i-th test and j-th train example.
First, open `cs231n/classifiers/k_nearest_neighbor.py` and implement the function `compute_distances_two_loops` that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.
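For orientation, here is a minimal sketch of what the two-loop version can look like (this is not necessarily the reference solution; in the assignment it is a method that reads the stored training set from `self.X_train`, so the standalone signature below is illustrative):

import numpy as np

def compute_distances_two_loops(X_test, X_train):
    # Fill the (num_test, num_train) matrix one Euclidean (L2) distance at a time.
    num_test, num_train = X_test.shape[0], X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    for i in range(num_test):
        for j in range(num_train):
            dists[i, j] = np.sqrt(np.sum((X_test[i] - X_train[j]) ** 2))
    return dists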
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.

# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print dists.shape

# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
_____no_output_____
MIT
assignment1/knn.ipynb
meijun/cs231n-assignment
**Inline Question 1:** Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)
- What in the data is the cause behind the distinctly bright rows?
- What causes the columns?
**Your Answer**:
- Bright rows: the corresponding test example has a large distance to all training examples;
- Bright columns: the corresponding training example has a large distance to all test examples.
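The next cell relies on `predict_labels`, which for each test row takes the k smallest entries of `dists` and lets their training labels vote. A minimal sketch of that voting step (illustrative, written as a standalone function rather than the classifier method used in the assignment):

import numpy as np

def predict_labels(dists, y_train, k=1):
    # dists: (num_test, num_train) distances; y_train: (num_train,) integer labels
    num_test = dists.shape[0]
    y_pred = np.zeros(num_test, dtype=y_train.dtype)
    for i in range(num_test):
        closest_y = y_train[np.argsort(dists[i])[:k]]   # labels of the k nearest training points
        y_pred[i] = np.bincount(closest_y).argmax()     # majority vote; ties resolve to the smaller label
    return y_pred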
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)

# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
Got 137 / 500 correct => accuracy: 0.274000
MIT
assignment1/knn.ipynb
meijun/cs231n-assignment
You should expect to see approximately `27%` accuracy. Now let's try out a larger `k`, say `k = 5`:
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
Got 139 / 500 correct => accuracy: 0.278000
MIT
assignment1/knn.ipynb
meijun/cs231n-assignment
You should expect to see a slightly better performance than with `k = 1`.
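The next cell benchmarks a one-loop and a fully vectorized (no-loop) version of the distance computation. A common way to write the no-loop version (a sketch, not necessarily the implementation in this repository) expands the squared L2 distance, ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y, so all pairs come out of one matrix product:

import numpy as np

def compute_distances_no_loops(X_test, X_train):
    # Pairwise L2 distances without explicit Python loops.
    test_sq = np.sum(X_test ** 2, axis=1)[:, np.newaxis]    # (num_test, 1)
    train_sq = np.sum(X_train ** 2, axis=1)[np.newaxis, :]  # (1, num_train)
    cross = X_test.dot(X_train.T)                           # (num_test, num_train)
    sq_dists = np.maximum(test_sq + train_sq - 2 * cross, 0)  # clamp small negatives from round-off
    return np.sqrt(sq_dists)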
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)

# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
    print 'Good! The distance matrices are the same'
else:
    print 'Uh-oh! The distance matrices are different'

# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)

# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
    print 'Good! The distance matrices are the same'
else:
    print 'Uh-oh! The distance matrices are different'

# Let's compare how fast the implementations are
def time_function(f, *args):
    """
    Call a function f with args and return the time (in seconds) that it took to execute.
    """
    import time
    tic = time.time()
    f(*args)
    toc = time.time()
    return toc - tic

two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print 'Two loop version took %f seconds' % two_loop_time

one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print 'One loop version took %f seconds' % one_loop_time

no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print 'No loop version took %f seconds' % no_loop_time

# you should see significantly faster performance with the fully vectorized implementation
Two loop version took 25.608644 seconds One loop version took 49.357512 seconds No loop version took 0.393901 seconds
MIT
assignment1/knn.ipynb
meijun/cs231n-assignment
Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
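The implementation below builds the folds with numpy's `array_split`; a quick illustration of its behavior on toy data (the values here are arbitrary):

import numpy as np

folds = np.array_split(np.arange(10), 3)
print(folds)  # [array([0, 1, 2, 3]), array([4, 5, 6]), array([7, 8, 9])]
# Every element lands in exactly one fold, and fold sizes differ by at most one.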
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]

X_train_folds = []
y_train_folds = []
################################################################################
# TODO:                                                                        #
# Split up the training data into folds. After splitting, X_train_folds and   #
# y_train_folds should each be lists of length num_folds, where               #
# y_train_folds[i] is the label vector for the points in X_train_folds[i].    #
# Hint: Look up the numpy array_split function.                               #
################################################################################
X_train_folds = np.array_split(X_train, num_folds)
y_train_folds = np.array_split(y_train, num_folds)
################################################################################
#                               END OF YOUR CODE                              #
################################################################################

# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}

################################################################################
# TODO:                                                                        #
# Perform k-fold cross validation to find the best value of k. For each       #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times,  #
# where in each case you use all but one of the folds as training data and the#
# last fold as a validation set. Store the accuracies for all fold and all    #
# values of k in the k_to_accuracies dictionary.                              #
################################################################################
for k in k_choices:
    k_to_accuracies[k] = []
    print k
    for v_id in xrange(num_folds):
        X_train_k = np.concatenate([xs for xs_id, xs in enumerate(X_train_folds) if xs_id != v_id])
        y_train_k = np.concatenate([ys for ys_id, ys in enumerate(y_train_folds) if ys_id != v_id])
        classifier_k = KNearestNeighbor()
        classifier_k.train(X_train_k, y_train_k)
        dists_k = classifier_k.compute_distances_no_loops(X_train_folds[v_id])
        y_test_pred_k = classifier_k.predict_labels(dists_k, k=k)
        num_correct_k = np.sum(y_test_pred_k == y_train_folds[v_id])
        accuracy_k = float(num_correct_k) / X_train_folds[v_id].shape[0]
        k_to_accuracies[k].append(accuracy_k)
    print k_to_accuracies[k]
################################################################################
#                               END OF YOUR CODE                              #
################################################################################

# Print out the computed accuracies
for k in sorted(k_to_accuracies):
    for accuracy in k_to_accuracies[k]:
        print 'k = %d, accuracy = %f' % (k, accuracy)

# plot the raw observations
for k in k_choices:
    accuracies = k_to_accuracies[k]
    plt.scatter([k] * len(accuracies), accuracies)

# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()

# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 10

classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)

# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
Got 141 / 500 correct => accuracy: 0.282000
MIT
assignment1/knn.ipynb
meijun/cs231n-assignment
Function call parameter rules and unpacking
Python functions can declare parameters in the following four forms:
1. Without default values: `def func(a): pass`
2. With default values: `def func(a, b = 1): pass`
3. Arbitrary positional parameters: `def func(a, b = 1, *c): pass`
4. Arbitrary keyword parameters: `def func(a, b = 1, *c, **d): pass`
When calling a function, arguments can be passed in two ways:
1. Positional (non-keyword) arguments: `func("G", 20)`
2. Keyword arguments: `func(a = "G", b = 20)` (the order of keyword arguments does not matter: `func(b = 20, a = "G")`)
Of course, the two can be mixed: `func("G", b = 20)`, but the most important rule is that **positional arguments cannot appear after keyword arguments**:
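For reference, a quick runnable illustration of the valid call forms listed above (the function here is a throwaway example, not part of the original cells):

def func(a, b = 1):
    return (a, b)

print(func("G", 20))          # both positional -> ('G', 20)
print(func(a = "G", b = 20))  # both keyword -> ('G', 20)
print(func(b = 20, a = "G"))  # keyword order does not matter -> ('G', 20)
print(func("G", b = 20))      # mixed: positional first, then keyword -> ('G', 20)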
def func(a, b = 1):
    pass

func(a = "G", 20)  # SyntaxError
_____no_output_____
MIT
Notebooks/Arguments-and-Unpacking.ipynb
gtavasoli/PyTips
Another rule is that positional arguments are assigned first, so a keyword argument cannot re-assign a parameter that has already been filled by position:
def func(a, b = 1):
    pass

func(20, a = "G")  # TypeError: func() got multiple values for argument 'a'
_____no_output_____
MIT
Notebooks/Arguments-and-Unpacking.ipynb
gtavasoli/PyTips
**The safest way is to use keyword arguments for everything.**
Arbitrary Parameters
Arbitrary parameters accept any number of arguments: the form `*a` collects any number of **positional arguments**, and `**d` collects any number of **keyword arguments**:
def concat(*lst, sep = "/"):
    return sep.join((str(i) for i in lst))

print(concat("G", 20, "@", "Hz", sep = ""))
G20@Hz
MIT
Notebooks/Arguments-and-Unpacking.ipynb
gtavasoli/PyTips
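The cell above only exercises `*lst`; here is a small illustrative example of the `**d` form collecting keyword arguments (the function name and arguments are made up for this sketch):

def describe(**attrs):
    # attrs is an ordinary dict holding every keyword argument passed in
    return ", ".join("{}={}".format(k, v) for k, v in attrs.items())

print(describe(name = "G", value = 20))  # name=G, value=20 (insertion order on Python 3.7+)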
The syntax of the above `def concat(*lst, sep = "/")` was proposed in [PEP 3102](https://www.python.org/dev/peps/pep-3102/) and implemented in **Python 3.0**. Parameters declared after `*lst` are keyword-only: `sep` must be passed explicitly by name and cannot be filled by position:
print(concat("G", 20, "-")) # Not G-20
G/20/-
MIT
Notebooks/Arguments-and-Unpacking.ipynb
gtavasoli/PyTips
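For contrast with the output above (this cell is not part of the original excerpt), passing `sep` by keyword gives the intended separator:

print(concat("G", 20, sep = "-"))  # G-20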