# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Convolutional Neural Networks: Step by Step
# ### 1 - Zero-Padding
#
# Zero-padding adds zeros around the border of an image:
#
# <img src="images/PAD.png" style="width:600px;height:400px;">
# <caption><center> <u> <font color='purple'> **Figure 1** </u><font color='purple'> : **Zero-Padding**<br> Image (3 channels, RGB) with a padding of 2. </center></caption>
#
# The main benefits of padding are the following:
#
# - It allows you to use a CONV layer without necessarily shrinking the height and width of the volumes. This is important for building deeper networks, since otherwise the height/width would shrink as you go to deeper layers. An important special case is the "same" convolution, in which the height/width is exactly preserved after one layer.
#
# - It helps us keep more of the information at the border of an image. Without padding, very few values at the next layer would be affected by pixels at the edges of an image.
#
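# As a quick illustration, zero-padding can be done with a single call to `np.pad`. This is a minimal sketch assuming the input batch has shape (m, n_H, n_W, n_C); the helper name `zero_pad` is just a convention here:
#
# ```python
# import numpy as np
#
# def zero_pad(X, pad):
#     """Pad the height and width of X (shape (m, n_H, n_W, n_C)) with `pad` zeros on each side."""
#     return np.pad(X, ((0, 0), (pad, pad), (pad, pad), (0, 0)),
#                   mode='constant', constant_values=0)
#
# x = np.random.randn(4, 3, 3, 2)
# print(zero_pad(x, 2).shape)   # (4, 7, 7, 2)
# ```
#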
# ### 2 - Single step of convolution
#
# In this part, you will implement a single step of convolution, in which you apply the filter to a single position of the input. This will be used to build a convolutional unit, which:
#
# - Takes an input volume
# - Applies a filter at every position of the input
# - Outputs another volume (usually of different size)
#
# <img src="images/Convolution_schematic.gif" style="width:500px;height:300px;">
# <caption><center> <u> <font color='purple'> **Figure 2** </u><font color='purple'> : **Convolution operation**<br> with a filter of 2x2 and a stride of 1 (stride = amount you move the window each time you slide) </center></caption>
#
# In a computer vision application, each value in the matrix on the left corresponds to a single pixel value, and we convolve a 3x3 filter with the image by multiplying its values element-wise with the original matrix, then summing them up and adding a bias. In this first step of the exercise, you will implement a single step of convolution, corresponding to applying a filter to just one of the positions to get a single real-valued output.
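#
# A minimal sketch of this single step (the helper name `conv_single_step` and the argument shapes are assumptions; `a_slice_prev` and `W` both have shape (f, f, n_C_prev), and `b` holds a single bias value):
#
# ```python
# import numpy as np
#
# def conv_single_step(a_slice_prev, W, b):
#     """Apply one filter W (plus bias b) to a single slice of the input."""
#     s = a_slice_prev * W      # element-wise product
#     Z = np.sum(s)             # sum over all entries of the product
#     return Z + float(b)       # add the (scalar) bias
# ```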
# ### 3 - Convolutional Neural Networks - Forward pass
#
# In the forward pass, you will take many filters and convolve them on the input. Each 'convolution' gives you a 2D matrix output. You will then stack these outputs to get a 3D volume:
#
# 1. To select a 2x2 slice at the upper left corner of a matrix "a_prev" (shape (5,5,3)), you would do:
# ```python
# a_slice_prev = a_prev[0:2,0:2,:]
# ```
# This will be useful when you define `a_slice_prev` below, using the `start/end` indexes you will compute.
# 2. To define a_slice you will need to first define its corners `vert_start`, `vert_end`, `horiz_start` and `horiz_end`. The figure below may be helpful for finding out how each of the corners can be defined using h, w, f and s in the code below.
#
# <img src="images/vert_horiz_kiank.png" style="width:400px;height:300px;">
# <caption><center> <u> <font color='purple'> **Figure 3** </u><font color='purple'> : **Definition of a slice using vertical and horizontal start/end (with a 2x2 filter)** <br> This figure shows only a single channel. </center></caption>
#
#
# **Reminder**:
# The formulas relating the output shape of the convolution to the input shape are:
# $$ n_H = \lfloor \frac{n_{H_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$
# $$ n_W = \lfloor \frac{n_{W_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$
# $$ n_C = \text{number of filters used in the convolution}$$
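#
# Putting the hints and shape formulas above together, a minimal, un-vectorized sketch of the forward pass could look like the following. The function signature, the `hparameters` keys ("pad", "stride") and the bias shape (1, 1, 1, n_C) are assumptions, not a reference implementation:
#
# ```python
# import numpy as np
#
# def conv_forward(A_prev, W, b, hparameters):
#     (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
#     f, n_C = W.shape[0], W.shape[3]
#     stride, pad = hparameters["stride"], hparameters["pad"]
#
#     n_H = (n_H_prev - f + 2 * pad) // stride + 1
#     n_W = (n_W_prev - f + 2 * pad) // stride + 1
#     Z = np.zeros((m, n_H, n_W, n_C))
#
#     A_prev_pad = np.pad(A_prev, ((0, 0), (pad, pad), (pad, pad), (0, 0)), mode='constant')
#     for i in range(m):                         # loop over the batch
#         a_prev_pad = A_prev_pad[i]
#         for h in range(n_H):                   # vertical axis of the output
#             for w in range(n_W):               # horizontal axis of the output
#                 for c in range(n_C):           # loop over the filters
#                     vert_start, vert_end = h * stride, h * stride + f
#                     horiz_start, horiz_end = w * stride, w * stride + f
#                     a_slice_prev = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
#                     Z[i, h, w, c] = np.sum(a_slice_prev * W[:, :, :, c]) + float(b[0, 0, 0, c])
#     return Z
# ```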
# ## 4 - Pooling layer
#
# The pooling (POOL) layer reduces the height and width of the input. It helps reduce computation, as well as helps make feature detectors more invariant to their position in the input. The two types of pooling layers are:
#
# - Max-pooling layer: slides an ($f, f$) window over the input and stores the max value of the window in the output.
#
# - Average-pooling layer: slides an ($f, f$) window over the input and stores the average value of the window in the output.
#
# <table>
# <td>
# <img src="images/max_pool1.png" style="width:500px;height:300px;">
# <td>
#
# <td>
# <img src="images/a_pool.png" style="width:500px;height:300px;">
# <td>
# </table>
#
# These pooling layers have no parameters for backpropagation to train. However, they have hyperparameters such as the window size $f$. This specifies the height and width of the fxf window you would compute a max or average over.
#
# ### 4.1 - Forward Pooling
# Now, you are going to implement MAX-POOL and AVG-POOL, in the same function.
#
# **Exercise**: Implement the forward pass of the pooling layer. Follow the hints in the comments below.
#
# **Reminder**:
# As there's no padding, the formulas binding the output shape of the pooling to the input shape are:
# $$ n_H = \lfloor \frac{n_{H_{prev}} - f}{stride} \rfloor +1 $$
# $$ n_W = \lfloor \frac{n_{W_{prev}} - f}{stride} \rfloor +1 $$
# $$ n_C = n_{C_{prev}}$$
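#
# A minimal sketch of such a combined pooling function (the `mode` argument and the `hparameters` keys "f" and "stride" are assumptions):
#
# ```python
# import numpy as np
#
# def pool_forward(A_prev, hparameters, mode="max"):
#     (m, n_H_prev, n_W_prev, n_C) = A_prev.shape
#     f, stride = hparameters["f"], hparameters["stride"]
#
#     n_H = (n_H_prev - f) // stride + 1
#     n_W = (n_W_prev - f) // stride + 1
#     A = np.zeros((m, n_H, n_W, n_C))
#
#     for i in range(m):
#         for h in range(n_H):
#             for w in range(n_W):
#                 for c in range(n_C):
#                     vert_start, horiz_start = h * stride, w * stride
#                     window = A_prev[i, vert_start:vert_start + f, horiz_start:horiz_start + f, c]
#                     A[i, h, w, c] = window.max() if mode == "max" else window.mean()
#     return A
# ```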
# ## 5 - Backpropagation in convolutional neural networks (OPTIONAL / UNGRADED)
#
# In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers don't need to bother with the details of the backward pass. The backward pass for convolutional networks is complicated. If you wish however, you can work through this optional portion of the notebook to get a sense of what backprop in a convolutional network looks like.
#
# When you implemented a simple (fully connected) neural network in an earlier course, you used backpropagation to compute the derivatives with respect to the cost in order to update the parameters. Similarly, in convolutional neural networks you calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are not trivial and we did not derive them in lecture, but we briefly present them below.
#
# ### 5.1 - Convolutional layer backward pass
#
# Let's start by implementing the backward pass for a CONV layer.
#
# #### 5.1.1 - Computing dA:
# This is the formula for computing $dA$ with respect to the cost for a certain filter $W_c$ and a given training example:
#
# $$ dA += \sum _{h=0} ^{n_H} \sum_{w=0} ^{n_W} W_c \times dZ_{hw} \tag{1}$$
#
# Where $W_c$ is a filter and $dZ_{hw}$ is a scalar corresponding to the gradient of the cost with respect to the output of the conv layer Z at the hth row and wth column (corresponding to the dot product taken at the ith stride left and jth stride down). Note that each time, we multiply the same filter $W_c$ by a different dZ when updating dA. We do so mainly because when computing the forward propagation, each filter is dotted and summed with a different a_slice. Therefore when computing the backprop for dA, we are just adding the gradients of all the a_slices.
#
# In code, inside the appropriate for-loops, this formula translates into:
# ```python
# da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]
# ```
#
# #### 5.1.2 - Computing dW:
# This is the formula for computing $dW_c$ ($dW_c$ is the derivative of one filter) with respect to the loss:
#
# $$ dW_c += \sum _{h=0} ^{n_H} \sum_{w=0} ^ {n_W} a_{slice} \times dZ_{hw} \tag{2}$$
#
# Where $a_{slice}$ corresponds to the slice which was used to generate the activation $Z_{ij}$. Hence, this ends up giving us the gradient for $W$ with respect to that slice. Since it is the same $W$, we will just add up all such gradients to get $dW$.
#
# In code, inside the appropriate for-loops, this formula translates into:
# ```python
# dW[:,:,:,c] += a_slice * dZ[i, h, w, c]
# ```
#
# #### 5.1.3 - Computing db:
#
# This is the formula for computing $db$ with respect to the cost for a certain filter $W_c$:
#
# $$ db = \sum_h \sum_w dZ_{hw} \tag{3}$$
#
# As you have previously seen in basic neural networks, db is computed by summing $dZ$. In this case, you are just summing over all the gradients of the conv output (Z) with respect to the cost.
#
# In code, inside the appropriate for-loops, this formula translates into:
# ```python
# db[:,:,:,c] += dZ[i, h, w, c]
# ```
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #### Removing Data Part II
#
# So, you now have seen how we can fit a model by dropping rows with missing values. This is great in that sklearn doesn't break! However, this means future observations will not obtain a prediction if they have missing values in any of the columns.
#
# In this notebook, you will answer a few questions about what happened in the last screencast, and take a few additional steps.
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error
import seaborn as sns
import RemovingData as t
# %matplotlib inline
df = pd.read_csv('./survey_results_public.csv')
#Subset to only quantitative vars
num_vars = df[['Salary', 'CareerSatisfaction', 'HoursPerWeek', 'JobSatisfaction', 'StackOverflowSatisfaction']]
num_vars.head()
# -
# #### Question 1
#
# **1.** What proportion of individuals in the dataset reported a salary?
# +
prop_sals = df.Salary.notnull().sum()/df.shape[0] # Proportion of individuals in the dataset with salary reported
prop_sals
# -
t.prop_sals_test(prop_sals) #test
# #### Question 2
#
# **2.** Remove the rows associated with nan values in Salary (only Salary) from the dataframe **num_vars**. Store the dataframe with these rows removed in **sal_rm**.
# +
sal_rm = num_vars[num_vars.Salary.notnull()]# dataframe with rows for nan Salaries removed
sal_rm.head()
# -
t.sal_rm_test(sal_rm) #test
# #### Question 3
#
# **3.** Using **sal_rm**, let **X** be a dataframe (matrix) of all of the numeric feature variables. Then, let **y** be the response vector you would like to predict (Salary). Run the cell below once you have split the data, and use the result of the code to assign the correct letter to **question3_solution**.
sal_rm[['Salary']]
# +
X = sal_rm.loc[:, sal_rm.columns != 'Salary'] #Create X using explanatory variables from sal_rm
y = sal_rm[['Salary']]#Create y using the response variable of Salary
# Split data into training and test data, and fit a linear model
X_train, X_test, y_train, y_test = train_test_split(X, y , test_size=.30, random_state=42)
lm_model = LinearRegression(normalize=True)
# If our model works, it should just fit our model to the data. Otherwise, it will let us know.
try:
lm_model.fit(X_train, y_train)
except:
print("Oh no! It doesn't work!!!")
# -
lm_model.fit(X_train, y_train)
# +
a = 'Python just likes to break sometimes for no reason at all.'
b = 'It worked, because Python is magic.'
c = 'It broke because we still have missing values in X'
question3_solution = c#Letter here
#test
t.question3_check(question3_solution)
# -
# #### Question 4
#
# **4.** Remove the rows associated with nan values in any column from **num_vars** (this was the removal process used in the screencast). Store the dataframe with these rows removed in **all_rm**.
# +
all_rm = num_vars.dropna(how='any') # dataframe with rows containing any nan values removed
all_rm.head()
# -
t.all_rm_test(all_rm) #test
# #### Question 5
#
# **5.** Using **all_rm**, let **X_2** be a dataframe (matrix) of all of the numeric feature variables. Then, let **y_2** be the response vector you would like to predict (Salary). Run the cell below once you have split the data, and use the result of the code to assign the correct letter to **question5_solution**.
# +
X_2 = all_rm.loc[:, all_rm.columns != 'Salary'] #Create X_2 using explanatory variables from all_rm
y_2 = all_rm[['Salary']] #Create y_2 using the response variable of Salary from all_rm
# Split data into training and test data, and fit a linear model
X_2_train, X_2_test, y_2_train, y_2_test = train_test_split(X_2, y_2 , test_size=.30, random_state=42)
lm_2_model = LinearRegression(normalize=True)
# If our model works, it should just fit our model to the data. Otherwise, it will let us know.
try:
lm_2_model.fit(X_2_train, y_2_train)
except:
print("Oh no! It doesn't work!!!")
# +
a = 'Python just likes to break sometimes for no reason at all.'
b = 'It worked, because Python is magic.'
c = 'It broke because we still have missing values in X'
question5_solution = b#Letter here
#test
t.question5_check(question5_solution)
# -
# #### Question 6
#
# **6.** Now, use **lm_2_model** to predict the **y_2_test** response values, and obtain an r-squared value for how well the predicted values compare to the actual test values.
# +
y_test_preds = lm_2_model.predict(X_2_test)# Predictions here using X_2 and lm_2_model
r2_test = r2_score(y_2_test, y_test_preds) # Rsquared here for comparing test and preds from lm_2_model
# Print r2 to see result
r2_test
# -
t.r2_test_check(r2_test)
# #### Question 7
#
# **7.** Use what you have learned **from the second model you fit** (and as many cells as you need to find the answers) to complete the dictionary with the variables that link to the corresponding descriptions.
X_2_test.shape
# +
a = 5009
b = 'Other'
c = 645
d = 'We still want to predict their salary'
e = 'We do not care to predict their salary'
f = False
g = True
question7_solution = {'The number of reported salaries in the original dataset': a,
'The number of test salaries predicted using our model': c,
'If an individual does not rate stackoverflow, but has a salary': d,
'If an individual does not have a a job satisfaction, but has a salary': d,
'Our model predicts salaries for the two individuals described above.': f}
#Check your answers against the solution - you should be told you were right if your answers are correct!
t.question7_check(question7_solution)
# +
#Cell for work
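# Hypothetical scratch work (not part of the original solution) for checking the Question 7 values:
print(df.Salary.notnull().sum())   # number of reported salaries in the original dataset
print(y_test_preds.shape[0])       # number of test salaries predicted using lm_2_model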
# +
#Cell for work
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# ## Regular Expressions
#
#
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Introductory example - standardizing street names
# + slideshow={"slide_type": "subslide"}
string = '100 NORTH MAIN ROAD'
string.replace("ROAD", "RD.")
# + slideshow={"slide_type": "subslide"}
string = '100 NORTH BROAD ROAD' # problem
string.replace("ROAD", "RD.")
# + slideshow={"slide_type": "subslide"}
string[:-4] + string[-4:].replace('ROAD', 'RD.') # cumbersome and overly specific solution
# + slideshow={"slide_type": "subslide"}
import re
re.sub('ROAD$', 'RD.', string) # regular expression
# + [markdown] slideshow={"slide_type": "subslide"}
# [Regular expressions](https://de.wikipedia.org/wiki/Regul%C3%A4rer_Ausdruck) specify sets of character strings that can be identified through various operations. For data scraping, regular expressions are very helpful, e.g., for extracting relevant text from web pages or PDFs.
#
# > Some people, when confronted with a problem, think “I know, I'll use regular expressions.” Now they have two problems.
#
# > *<NAME>*
# + [markdown] slideshow={"slide_type": "subslide"}
# ### `re`
#
# The [re](https://docs.python.org/3/library/re.html) package for regular expressions is part of Python's standard library.
#
# + slideshow={"slide_type": "subslide"}
import re
pattern = 'a'
string = 'Spam, Eggs and Bacon'
# + slideshow={"slide_type": "subslide"}
print(re.match(pattern, string)) # searches at the start of the string
# + slideshow={"slide_type": "subslide"}
print(re.search(pattern, string)) # finds the first occurrence in the string
# + [markdown] slideshow={"slide_type": "subslide"}
# As a compiled object:
# + slideshow={"slide_type": "fragment"}
pattern = re.compile('a')
print(pattern.search(string)) # search only
# + slideshow={"slide_type": "fragment"}
print(pattern.search(string).group()) # search and print the match via group()
# + [markdown] slideshow={"slide_type": "subslide"}
# ### `re.findall`
# Finds all occurrences in a string and returns them as a *list of strings*.
# + slideshow={"slide_type": "fragment"}
print(string)
# + slideshow={"slide_type": "fragment"}
print(re.findall('a', string))
# + slideshow={"slide_type": "fragment"}
print(re.findall(' ', string))
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Special characters:
#
# 1. `.` (dot) is the most general regular expression. It matches any single character in the string.
# 2. `^` (caret) denotes the start of a string.
# 3. `$` (dollar) denotes the position before the newline (`\n`) or the end of the string in `MULTILINE` mode.
# + slideshow={"slide_type": "subslide"}
print(string)
# + slideshow={"slide_type": "subslide"}
print(re.search('.a.', string).group()) # first match
# + slideshow={"slide_type": "subslide"}
print(re.findall('.a.', string)) # all matches
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Concatenation
#
# Specifies characters in a fixed order. The ordering constraint can be lifted by specifying a set instead: `[]`.
# + slideshow={"slide_type": "fragment"}
print(re.search('AND', 'AND DNA XYZ').group())
# + slideshow={"slide_type": "fragment"}
print(re.findall('[AND]', 'AND DNA XYZ'))
# + slideshow={"slide_type": "fragment"}
print(string)
print(re.findall('[amb]', string))
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Alternation
#
# Matches any of several alternative regular expressions, specified with the `|` operator.
# + slideshow={"slide_type": "fragment"}
print(re.findall('AND|DNA|RNA', 'AND DNA XYZ'))
# + [markdown] slideshow={"slide_type": "subslide"}
# ### More special characters
# The following characters have special meanings in regular expressions:
#
# Character | Meaning
# -|-
# `.`| Any character. With `DOTALL`, also the newline (`\n`)
# `^`| Start of the string. With `MULTILINE`, also after every `\n`
# `$`| End of the string. With `MULTILINE`, also before every `\n`
# `\`| Escapes special characters or denotes a character class
# `[]`| Defines a set of characters
# `()`| Defines groups
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Repetitions
# Specifies the number of repetitions of the preceding regular expression. The following repetitions are possible:
#
# Syntax | Meaning
# -|-
# `*` | 0 or more repetitions
# `+` | 1 or more repetitions
# `{m}` | Exactly `m` repetitions
# `{m,n}` | From `m` up to and including `n` repetitions
#
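# + [markdown] slideshow={"slide_type": "subslide"}
# A short illustration of the quantifiers from the table (the example strings below are made up for demonstration):
# + slideshow={"slide_type": "subslide"}
print(re.findall(r'g+', 'Spam, Eggs and Bacon'.lower()))      # one or more 'g'
print(re.findall(r'\d{2}', 'year 2023, month 07'))            # exactly two digits
print(re.findall(r'\d{2,4}', 'year 2023, month 07'))          # two to four digits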
# + slideshow={"slide_type": "subslide"}
peter = '''The screen is filled by the face of PETER PARKER, a 17 year
old boy. High school must not be any fun for Petttter, he's one
hundred percent nerd- skinny, zitty, glasses. His face is just
frozen there, a cringing expression on it, which strikes us odd
until we realize the image is freeze framed.'''
# + slideshow={"slide_type": "subslide"}
peter
# + [markdown] slideshow={"slide_type": "subslide"}
#
# Repetitions are *greedy* by default, i.e., they consume as much of the string as possible. This behavior can be switched off by placing a `?` after the repetition.
# + slideshow={"slide_type": "subslide"}
print(re.findall('s.*n', peter)) # greedy
# + slideshow={"slide_type": "subslide"}
print(re.findall('s.*?n', peter)) # non-greedy
# + [markdown] slideshow={"slide_type": "subslide"}
# The additional flag `re.DOTALL` makes `.` also match `\n`:
# + slideshow={"slide_type": "subslide"}
print(re.findall('s.*?n', peter, re.DOTALL))
# + slideshow={"slide_type": "subslide"}
re.findall('\.', peter) # search for a literal period by escaping the special character "."
# + [markdown] slideshow={"slide_type": "subslide"}
# In general, the regex combination `.*?` can be used to match any character (`.`) any number of times (`*`), but only until the next pattern is found for the first time (`?`).
# + slideshow={"slide_type": "fragment"}
string = 'eeeAaZyyyyyyPeeAAeeeZeeeeyy'
print(re.findall('A.*Z', string)) # greedy
# + slideshow={"slide_type": "fragment"}
print(re.findall('A.*?Z', string)) # non-greedy
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Specifying sets
#
# Syntax | Equivalent | Meaning
# -|-|-
# `\d` | `[0-9]` | Digits
# `\D` | `[^0-9]` | Anything that is not a digit
# `\s` | `[ \t\n\r\f\v]` | Any whitespace character
# `\S` | `[^ \t\n\r\f\v]` | Anything that is not whitespace
# `\w` | `[a-zA-Z0-9_]` | Alphanumeric characters and the underscore
# `\W` | `[^a-zA-Z0-9_]` | Anything that is not an alphanumeric character or underscore
# + slideshow={"slide_type": "subslide"}
print(peter)
# + slideshow={"slide_type": "subslide"}
re.sub('\s', '_', peter) # substitution
# + slideshow={"slide_type": "subslide"}
re.findall('\d', peter) # all digits
# + slideshow={"slide_type": "fragment"}
re.findall('\d{2}', peter) # two consecutive digits
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Look-arounds
#
# Look-arounds make it possible to check what comes before or after a string without extracting it. Basic syntax: `(?`...`)`
#
# Syntax | Meaning
# -|-
# `(?=`...`)` | *positive lookahead*
# `(?!`...`)` | *negative lookahead*
# `(?<=`...`)` | *positive lookbehind*
# `(?<!`...`)` | *negative lookbehind*
# + slideshow={"slide_type": "subslide"}
string = 'bacon, eggs & spam'
re.findall('(?<=eggs).*', string) # positive lookbehind: all characters after "eggs"
# + slideshow={"slide_type": "subslide"}
string = "1pt 7px 3em 4px"
re.findall("\d+(?!px)", string) # negative look ahead: zahlen die nicht vor "px" stehen
# + [markdown] slideshow={"slide_type": "subslide"}
# Further tutorials and information on regular expressions:
#
# [DiveIntoPython Tutorial](https://diveintopython3.net/regular-expressions.html)
#
# [PyDocs RegEx HowTo](https://docs.python.org/3/howto/regex.html)
#
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Exercise 1
#
# Write a regular expression that extracts all words from a string into a list.
# + slideshow={"slide_type": "fragment"}
beispiel_string = '''The screen is filled by the face of <NAME>, a 17 year
old boy. High school must not be any fun for Petttter, he's one
hundred percent nerd- skinny, zitty, glasses. His face is just
frozen there, a cringing expression on it, which strikes us odd
until we realize the image is freeze framed.'''
# Code for Exercise 1
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Exercise 2
#
# Use regular expressions to filter a list of URLs for images (.jpg, .jpeg, .png).
# + slideshow={"slide_type": "subslide"}
beispiel_links = [
'https://www.uni-bamberg.de/ma-politik/schwerpunkte/computational-social-sciences/',
'https://www.uni-bamberg.de/fileadmin/_processed_/f/c/csm_Schmid_Finzel_2c34cb23de.jpg',
'https://www.uni-bamberg.de/fileadmin/uni/verwaltung/presse/042_MARKETING/0421_Corporate_Design/Logos-extern/weltoffene-hochschule/Logo-EN-170.png',
'https://www.uni-bamberg.de/fileadmin/_processed_/e/d/csm_2020-04-30_Homeschooling_web_4cf4ce1ad8.jpeg',
'https://www.uni-bamberg.de/soziologie/lehrstuehle-und-professuren/']
# Code for Exercise 2
# + [markdown] slideshow={"slide_type": "subslide"}
# <br>
# <br>
#
#
# ___
#
#
# **Contact: <NAME>** (Website: www.carstenschwemmer.com, Email: <EMAIL>)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: "py37\u3000"
# language: python
# name: py37
# ---
# # 'River meanders and the theory of minimum variance'
# # and 'Up a lazy river'
#
# The issue is not how the channel guides the river but how the river carves the channel. Rivers meander even when they carry no sediment, and even when they have no banks (Hayes, 2006)!
#
# ### Reference
# - <NAME>. (1951). Most frequent particle paths in a plane. Eos, Transactions American Geophysical Union, 32(2), 222-226.
# - <NAME>. (1964). Most frequent random walks. Schenectady, N.Y.: General Electric
# - <NAME>., & <NAME>. (1966). River meanders. Scientific American, 214(6), 60-73.
# - <NAME>., & <NAME>. (1970). River meanders and the theory of minimum variance. In Rivers and river terraces (pp. 238-263). Palgrave Macmillan, London.
# - <NAME>. (2006). Computing science: Up a lazy river. American Scientist, 94(6), 490-494.
#
# ### Researcher
# - <NAME>
# - [<NAME>](http://www.nasonline.org/member-directory/deceased-members/53482.html) (1907-1982, Geomorphologist and Hydrologist)
# - [<NAME>](https://en.wikipedia.org/wiki/Luna_Leopold) (1915-2006, Geomorphologist and Hydrologist) [The Virtual Luna Leopold Project](https://eps.berkeley.edu/people/lunaleopold/)
# - [<NAME>](https://en.wikipedia.org/wiki/Brian_Hayes_(scientist)) (Scientist, columnist and author)
import numpy as np
import matplotlib.pyplot as plt
# ### Theoretical meander in plan view
# It has been suggested that meanders are caused by such processes as (Langbein&Leopold, 1970):
# - regular erosion and deposition;
# - secondary circulation in the cross-sectional plane;
# - seiche effect analogous to lake seiches.
#
# The planimetric geometry of a meander is that of a random walk whose most frequent form minimizes the sum of the squares of the changes in direction in each successive unit length. The direction angles are then sine functions of channel distance. **The sine-generated curve** has been defined as follows:
#
# $\phi = \omega \sin\left(\frac{s}{M} 2\pi\right)$
#
# Where $\phi$ equals the direction at location s, $M$ is the total path distance along a meander, and $\omega$ is the maximum angle the path makes from the mean downvalley direction.
#
# Among all the ways of bending and folding this segment from $a$ to $b$ with length $M$, **the sine-generated curve** has three properties:
# - It is the path of minimal **bending stress** (The bending stress of a river is the work or energy that has to be expended to make its path deviate from a straight line. At each point along the route, the bending stress is proportional to the square of the curvature at that point.),
# - it is the path of minimal **variance in direction**,
# - and it is the path representing the most likely **random walk**.
#
# Since theory and observation indicate meanders achieve the minimum variance postulated, it follows that for channels in which alternating pools and riffles occur, meandering is the most probable form of channel geometry and thus is more stable geometry than a straight or non-meandering alignment.
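#
# As an illustrative check (not from the original papers), the quantity being minimized can be evaluated directly for a discretized path: the sum of squared changes in direction between successive unit lengths. The parameter values below mirror those defined in the next cell:
# +
_omega, _M, _N = 110, 100, 150                    # assumed values, matching the cell below
_s = np.linspace(0, _M, _N)
_phi = _omega * np.sin(_s / _M * 2 * np.pi)       # sine-generated direction angles
print('Sum of squared direction changes:', np.sum(np.diff(_phi) ** 2))
# -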
# omega = 2.2/3.14*180
omega = 110 # [degree], maximum angle a path makes with mean downpath direction
x0, y0 = 0.,0. # the coordinate of the beginning point
M = 100 # fixed total distance along path
N = 150 # the number of the unit distance along path
s = np.linspace(M*-0.25,M*1.25,N) # the unit distance along path
# +
# Sine-generated curve defined by omega and path distance
phi = omega*np.sin(s/M*2*np.pi)
# Coordinates (x,y) of a point of the river path from (<NAME>, 1951)
x = np.zeros_like(s)
y = np.zeros_like(s)
x[0],y[0]= x0,y0
for i in range(1,len(s)):
ds = s[i]-s[i-1]
dx = ds*np.cos(phi[i-1]/180*np.pi)
dy = ds*np.sin(phi[i-1]/180*np.pi)
x[i] = x[i-1]+dx
y[i] = y[i-1]+dy
# +
# Fig.2 in (Langbein&Leopold, 1970)
fig = plt.figure(figsize=(15,5))
ax = fig.subplots(1,2)
ax[0].set(xlabel='x', ylabel='y')
ax[0].set_title('Theoretical meander river in plan view with $\omega$=%s$\degree$'%omega)
ax[0].plot(x,y)
ax[0].scatter(x[0], y[0],color='r',s=20)
ax[0].text(x[0], y[0], '$a$', fontsize=12)
ax[0].scatter(x[-1], y[-1],color='r',s=20)
ax[0].text(x[-1], y[-1], '$b$', fontsize=12)
ax[1].set(xlabel='Distance along channel (ratio)', ylabel='Deviation angle (degree)')
ax[1].set_title('Its deviation angle as a function of distance along the channel path')
ax[1].set(xticks=np.arange(-0.25,1.25+0.1,0.25))
ax[1].plot(s/M,phi,'k')
ax[1].scatter(s[0]/M,phi[0],color='r',s=20)
ax[1].text(s[0]/M,phi[0]+2, '$a$', fontsize=12)
ax[1].scatter(s[-1]/M,phi[-1],color='r',s=20)
ax[1].text(s[-1]/M,phi[-1]+2, '$b$', fontsize=12)
plt.subplots_adjust(wspace =0.25, hspace =0)
#fname_save = 'Theoretical meander river with w='+str(omega)+'degree.png'
#plt.savefig(fname_save,dpi=300)
# -
# ### Sinuosity
# The sinuosity, $k$, equals the average of the values of $cos\phi$ over the range from $\phi=0$ to $\phi=\omega$. Thus a relationship can be defined between k and $\omega$. An approximate algebraic expression is: $\omega (radians) = 2.2\sqrt{\frac{k-1}{k}}$, or: $\omega = 125^{\circ}\sqrt{\frac{k-1}{k}}$
#k = 4.84/(4.84-(omega/180*np.pi)**2)
k = 1/(1-(omega/125)**2)
print('k is %s'%k)
# ### Azimuth
azi = np.zeros_like(phi)
for i in range(0,len(phi)):
azi[i] = float((-phi[i] + 90.0) % 360.0)
# +
# Fig.2 in (Langbein&Leopold, 1970)
fig = plt.figure(figsize=(15,5))
ax = fig.subplots(1,2)
ax[0].set(xlabel='x', ylabel='y')
ax[0].set_title('Theoretical meander river in plan view with $\omega$=%s$\degree$'%omega)
ax[0].plot(x,y)
ax[0].scatter(x[0], y[0],color='r',s=20)
ax[0].text(x[0], y[0], '$a$', fontsize=12)
ax[0].scatter(x[-1], y[-1],color='r',s=20)
ax[0].text(x[-1], y[-1], '$b$', fontsize=12)
ax[1].set(xlabel='Distance along channel (ratio)', ylabel='Angle (degree)')
ax[1].set_title('Its deviation angle and azimuth')
ax[1].set(xticks=np.arange(-0.25,1.25+0.1,0.25))
ax[1].plot(s/M,phi,'k',label='deviation')
ax[1].plot(s/M,azi,'r',label='azimuth')
ax[1].scatter(s[0]/M,phi[0],color='r',s=20)
ax[1].text(s[0]/M,phi[0]+2, '$a$', fontsize=12)
ax[1].scatter(s[-1]/M,phi[-1],color='r',s=20)
ax[1].text(s[-1]/M,phi[-1]+2, '$b$', fontsize=12)
plt.legend(loc = 'lower right',prop = {'size':8})
plt.subplots_adjust(wspace =0.25, hspace =0)
fname_save = 'Theoretical meander river with w='+str(omega)+'degree_azimuth.png'
plt.savefig(fname_save,dpi=300)
# +
# fig = plt.figure(figsize=(7,5))
# ax = plt.subplot(111)
# ax.set(xlabel='x', ylabel='y')
# ax.set_title('Theoretical meander river in plan view with $\omega$=%s$\degree$'%omega)
# ax.plot(x,y)
# ax.scatter(x[0], y[0],color='r',s=20)
# ax.text(x[0], y[0], '$a$', fontsize=12)
# ax.scatter(x[-1], y[-1],color='r',s=20)
# ax.text(x[-1], y[-1], '$b$', fontsize=12)
# ax2 = ax.twinx()
# ax2.plot(x,azi,color='red',label='Azimuth_neighbor point')
# # ax2.plot(x,azi2,color ='blue',label='Azimuth_gradeint')
# # ax2.plot(x,azi3,color ='yellow',label='Azimuth_gradeint topopy')
# ax2.set_ylabel('Azimuth')
# plt.legend(loc = 'lower right',prop = {'size':8})
# -
from qixiang import functions as fnqx
river_xy = np.array((x, y)).T
azi_np = fnqx.cal_azi_river(river_xy) # neighbor points
# +
# dx = np.gradient(x)
# dy = np.gradient(y)
# azi_gra = np.zeros(len(dx))
# for i in range(0,len(dx)):
# azi_gra[i] = fnqx.cal_azi(0,0,dx[i],dy[i])
# +
fig = plt.figure(figsize=(14,10))
ax = plt.subplot(111)
ax.set(xlabel='Distance along channel (ratio)', ylabel='Angle (degree)')
ax.set_title('Its deviation angle as a function of distance along the channel path')
ax.set(xticks=np.arange(-0.25,1.25+0.1,0.25))
#ax.plot(s/M,phi,'k',label='deviation')
ax.plot(s/M,azi,'r--',label='azimuth')
ax.plot(s/M,azi_np,'b',label='azimuth_neighbor points')
#ax.plot(s/M,azi2,'y',label='gradient')
# ax.scatter(s[0]/M,phi[0],color='r',s=20)
# ax.text(s[0]/M,phi[0]+2, '$a$', fontsize=12)
# ax.scatter(s[-1]/M,phi[-1],color='r',s=20)
# ax.text(s[-1]/M,phi[-1]+2, '$b$', fontsize=12)
plt.legend(loc = 'lower right',prop = {'size':8})
# -
(azi[:-2]-azi_np[:-2]).max()
azi.mean()
# ### Characteristic scale
#
# **The river can't think globally; it can only act locally.** What are the forces at each point along the river channel that create and maintain that shape (Hayes, 2006)?
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # "A Comparison of TF-IDF, Word2Vec, and Transfer Learning for Text Classification"
#
# - toc: true
# - author: <NAME>
# - comments: true
# - categories: [tf-idf, word2vec, neural networks, text classification, natural language processing]
#hide
import warnings
warnings.filterwarnings('ignore')
# Text Classification is the assignment of a particular label to a text with respect to its content. In modern Natural Language Processing (NLP), there are many different algorithms and techniques used to gain significant accuracy in text classification tasks.
#
# In this notebook, we will cover three of the most popular methods for text classification: TF-IDF, Word2Vec, and transfer learning. For each of the three methods, we will also show their effectiveness based on the amount of preprocessing that is done to the text beforehand, leaving us with a total of nine measurements at the end.
#
# We will see that transfer learning is by far the superior method for the task in terms of ease of use and accuracy.
#
# The data that we will be using comes from [Kaggle's "Real or Not? NLP with Disaster Tweets"](https://www.kaggle.com/c/nlp-getting-started) competition, where the user is tasked with predicting which tweets are about real disasters, and which ones are not.
#
# In the competition, leaderboard position is based on the model's F1 score. Therefore, for clarity, we will provide both the accuracy and F1 score for each output below.
#
# To begin, let's start with some data analysis and augmentation:
# > Tip: You can open this blog post as a notebook in Google Colab using the corresponding badge under the title. This way you can follow along and experiment with the code if you'd like!
# ## Data Analysis, Augmentation, and Splitting
# ### Light Analysis
#collapse
import pandas as pd
# First, let's take a look at the training data we're given:
data = pd.read_csv('./data/disaster_tweets/train.csv')
data.head()
# To find out a bit more information about the data, we can use the `.info()` and `.nunique()` methods on our DataFrame:
data.info()
data.nunique()
# Interesting! It looks like some of the tweets (110 of them, to be precise) are the same.
# ### Data Augmentation
# #### Cleaning
#collapse
import re
import spacy
# As mentioned above, I will incorporate different methods of preprocessing to our data to see if such changes have a positive or negative effect on our evaluation metrics. The three differently processed data I'll be using are:
#
# 1. Unprocessed - the data as it is given to us.
# 2. "Simply" cleaned - the data without any hashtags, @-symbols, website links, or punctuation.
# 3. SpaCy cleaned - the data lemmatized and without any stop words according to SpaCy's pretrained English language model (which we'll get to in a moment).
#
# The unprocessed data is already done for us in the `text` column of our DataFrame.
#
# Moving on to the second preprocessing method, "simply" cleaned data. By "simply" I mean cleaned explicitly by me using [regular expressions](https://docs.python.org/3/library/re.html) with prior assumptions about the data. For the data we're using here, we have a bunch of tweets. Therefore, it makes sense to me to remove things like hashtags, @-symbols, and websites, since those don't intuitively seem like they contribute to a tweet's disaster level (though this isn't necessarily true, just an assumption!).
#
# To achieve this "simple" cleaning of the data, we can use the following three functions I've created:
# +
def remove_at_hash(sent):
""" Returns a string with @-symbols and hashtags removed. """
return re.sub(r'@|#', r'', sent.lower())
def remove_sites(sent):
""" Returns a string with any websites starting with 'http.' removed. """
return re.sub(r'http.*', r'', sent.lower())
def remove_punct(sent):
""" Returns a string with only English unicode word characters ([a-zA-Z0-9_]). """
return ' '.join(re.findall(r'\w+', sent.lower()))
# -
# Now we can create a new column in our `data` DataFrame that represents the "simply" cleaned tweets. I'll call this column `text_simple`.
data['text_simple'] = data['text'].apply(lambda x: remove_punct(remove_sites(remove_at_hash(x))))
data.head()
# Moving now to the last preprocessing method: SpaCy. [SpaCy](https://spacy.io/) is a great, open-source software library for NLP. It includes varying, pretrained language models of a number of different sizes for a number of different languages, allowing you to quickly perform routine NLP tasks. Here, we're going to use SpaCy to [lemmatize](https://nlp.stanford.edu/IR-book/html/htmledition/stemming-and-lemmatization-1.html) each tweet in the data and remove any [stop words](https://en.wikipedia.org/wiki/Stop_word).
#
# Below, we need to first load in SpaCy's (full) English model (note that, for speed, I disable some features that we won't need here). Then, create a function that will give us a string lemmatized by SpaCy.
# +
nlp = spacy.load('en', disable=['ner', 'parser'])
def spacy_cleaning(doc):
""" Returns a string that has been lemmatized and rid of stop words via SpaCy. """
doc = nlp(doc.lower())
text = [token.lemma_ for token in doc if not token.is_stop]
return ' '.join(text)
# -
# Using our new function, we can again create a new column in our `data` DataFrame with the SpaCy-cleaned tweets. I'll call this column `text_spacy`.
data['text_spacy'] = data['text'].apply(lambda x: spacy_cleaning(x))
data.head()
# #### n-grams
#collapse
from gensim.models.phrases import Phrases, Phraser
# > Important: I'll only be applying what we learn in the n-gram section to the Word2Vec model. If you'd like to skip this section, and come back when you get to Word2Vec, feel free to do so.
# An [n-gram](https://en.wikipedia.org/wiki/N-gram#:~:text=In%20the%20fields%20of%20computational,a%20text%20or%20speech%20corpus.) is a contiguous sequence of *n* items from a given sample of text or speech. This turns out to be quite useful in NLP. Consider the phrase "New York Times". When all three words are together, the phrase is understood to mean the widely spread news source based in New York of the same moniker. However, if we split the words up (while maintaining original order), we get: "New York", "York Times", "New", "York", and "Times". These separate words and phrases can occur in many contexts other than those in which the full phrase "New York Times" is found, skewing the phrase's true meaning in the data. N-gram models allow us to concatenate these commonly occurring multi-word phrases in our data, allowing their true meaning to shine through.
#
# Thankfully, we can use the `Phraser` and `Phrases` classes provided by [gensim](https://radimrehurek.com/gensim/) in order to easily find n-grams in our data.
#
# Let's start by getting trigrams found in the unprocessed data.
#
# First, we extract the tweets and split them by whitespace characters.
text = [re.split('\s+', tweet) for tweet in data['text']]
# Then, we find bigrams throughout our data. Here we use a parameter of `min_count=30` for our `Phrases` class. This ensures that only bigrams that occur at least 30 times in the data are kept. Many combinations of words occur side by side only a few times, and don't contribute much additional knowledge to our model, so this is important.
bigram_phrases = Phrases(text, min_count=30)
bigram = Phraser(bigram_phrases)
bigram_text = bigram[text]
# Next, we can use the bigrams we just made to search for trigrams in the exact same way.
#
# > Note: This is repeatable! Keep going to find n-grams of size 5 if you wanted!
trigram_phrases = Phrases(bigram_text, min_count=30)
trigram = Phraser(trigram_phrases)
trigram_text = trigram[bigram_text]
# That's it! Now we can pop this list back into our `data` DataFrame to be used later.
data['text_trigram'] = [' '.join(tweet) for tweet in trigram_text]
data.head()
# Great work! Now let's do the same for the `text_simple` and `text_spacy` columns.
# +
#collapse
text_simple = [re.split('\s+', tweet) for tweet in data['text_simple']]
bigram_phrases = Phrases(text_simple, min_count=30)
bigram = Phraser(bigram_phrases)
bigram_text_simple = bigram[text_simple]
trigram_phrases = Phrases(bigram_text_simple, min_count=30)
trigram = Phraser(trigram_phrases)
trigram_text_simple = trigram[bigram_text_simple]
data['text_trigram_simple'] = [' '.join(tweet) for tweet in trigram_text_simple]
# +
#collapse
text_spacy = [re.split('\s+', tweet) for tweet in data['text_spacy']]
bigram_phrases = Phrases(text_spacy, min_count=30)
bigram = Phraser(bigram_phrases)
bigram_text_spacy = bigram[text_spacy]
trigram_phrases = Phrases(bigram_text_spacy, min_count=30)
trigram = Phraser(trigram_phrases)
trigram_text_spacy = trigram[bigram_text_spacy]
data['text_trigram_spacy'] = [' '.join(tweet) for tweet in trigram_text_spacy]
# -
data.head()
# Fantastic! We've found all of the trigrams and bigrams in each of our three datasets that occur more than 30 times. This data will prove to be very useful when we reach Word2Vec.
# Now that we've got the three separately preprocessed sets of tweets in neat columns in our dataset, it's time to split our data into training and validation data and begin our testing!
# ### Splitting
#collapse
from sklearn.model_selection import train_test_split
# In order to properly test our data, we'll need to split it into training and validation sets. To do this, we simply pass our `data` DataFrame to sklearn's `train_test_split`. We reset the index of each newly-created DataFrame to avoid complications with indexing later on. Then, we check the shapes to make sure everything adds up.
# +
train, valid = train_test_split(data, random_state=24)
train = train.reset_index()
valid = valid.reset_index()
train.shape, valid.shape, data.shape
# -
# Things are looking good! One last preprocessing step is in order, and that is dividing our newly-created `train` data by their target labels, thereby giving us two new DataFrames representing disaster tweets and non-disaster tweets.
#
# When we call `.nunique()` on both `disasters` and `not_disasters`, we can see that the unique number of `target`s in each DataFrame is 1, indicating we split the data properly.
# +
disasters = train[train['target'] == 1].reset_index()
not_disasters = train[train['target'] == 0].reset_index()
disasters.nunique(), not_disasters.nunique()
# -
# Awesome! We're all set and we can begin to train our models.
#
# Let's start with TF-IDF.
# ## TF-IDF
#collapse
from collections import defaultdict
from gensim.corpora import Dictionary
from gensim.models import TfidfModel
from gensim.similarities import MatrixSimilarity
import numpy as np
from sklearn.metrics import accuracy_score, f1_score
# TF-IDF is an incredible, straightforward way to analyze document similarity. It involves no fancy machine learning, just the term frequency across documents! For this reason, we will begin with trying to use TF-IDF to determine if a tweet is about a disaster or not.
#
# From [tfidf.com](http://www.tfidf.com/):
# > Tf-idf stands for *term frequency-inverse document frequency*, and the tf-idf weight is a weight often used in information retrieval and text mining. This weight is a statistical measure used to evaluate how important a word is to a document in a collection or corpus. The importance increases proportionally to the number of times a word appears in the document but is offset by the frequency of the word in the corpus.
#
# You can learn more about the mathematical foundations of TF-IDF [here](https://rare-technologies.com/pivoted-document-length-normalisation/).
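#
# As a toy illustration of the weight itself (gensim's default global weight uses a base-2 logarithm of the inverse document frequency; the numbers below are made up):
# +
tf = 3                                  # the term occurs 3 times in a document
n_docs, df_term = 100, 10               # corpus size and the term's document frequency
print(tf * np.log2(n_docs / df_term))   # un-normalized tf-idf weight, about 9.97
# -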
#
# We'll start by analyzing the unprocessed tweets.
# ### TF-IDF with Unprocessed Tweets
# In order to calculate the similarity between two tweets (namely, a tweet in the validation set with a tweet in the training set) without having to do all the math out ourselves, we'll use [gensim](https://radimrehurek.com/gensim/), a free Python library that provides a lot of great NLP functionality.
#
# Gensim requires a list of *texts* in a list of *documents*. For us, that's a list of *words in a tweet* in a list of *tweets*. So let's make that now.
#
# > Note: We're using the unprocessed tweets in the `text` column of our data this time around. We'll be using the other two preprocessed tweets in a bit!
# +
disaster_tweets = disasters['text'].tolist()
not_disaster_tweets = not_disasters['text'].tolist()
disaster_tweets_split = [
[word for word in tweet.split()]
for tweet in disaster_tweets
]
not_disaster_tweets_split = [
[word for word in tweet.split()]
for tweet in not_disaster_tweets
]
# -
# Thinking about a TF-IDF model, words that only occur once throughout the entire corpus will not provide any noteworthy advantage to the model. Therefore, in the next step, we remove words that only occur once from `disaster_tweets_split` and `not_disaster_tweets_split`.
# +
disaster_tweets_word_frequency = defaultdict(int)
for tweet in disaster_tweets_split:
for word in tweet:
disaster_tweets_word_frequency[word] += 1
not_disaster_tweets_word_frequency = defaultdict(int)
for tweet in not_disaster_tweets_split:
for word in tweet:
not_disaster_tweets_word_frequency[word] += 1
disaster_tweets_split = [
[word for word in tweet if disaster_tweets_word_frequency[word] > 1]
for tweet in disaster_tweets_split
]
not_disaster_tweets_split = [
[word for word in tweet if not_disaster_tweets_word_frequency[word] > 1]
for tweet in not_disaster_tweets_split
]
# -
# Next, we create a Dictionary object with gensim, which is a mapping between words and their integer ids. With this Dictionary object we can create a "corpus" for disaster tweets and non-disaster tweets by converting each document (i.e., tweet) in each set to a Bag of Words format (that is, a list of `(token_id, token_count)` tuples).
# +
disaster_tweets_dct = Dictionary(disaster_tweets_split)
not_disaster_tweets_dct = Dictionary(not_disaster_tweets_split)
disaster_tweets_corpus = [disaster_tweets_dct.doc2bow(tweet) for tweet in disaster_tweets_split]
not_disaster_tweets_corpus = [not_disaster_tweets_dct.doc2bow(tweet) for tweet in not_disaster_tweets_split]
# -
# Fit TF-IDF models for our two sets of tweets.
disaster_tweets_tfidf = TfidfModel(disaster_tweets_corpus)
not_disaster_tweets_tfidf = TfidfModel(not_disaster_tweets_corpus)
# Apply the models to our corpora to get vectors for each tweet.
disaster_tweets_tfidf_vectors = disaster_tweets_tfidf[disaster_tweets_corpus]
not_disaster_tweets_tfidf_vectors = not_disaster_tweets_tfidf[not_disaster_tweets_corpus]
# Create variable which we can index into using another vector to compute similarity.
disaster_tweets_similarity = MatrixSimilarity(disaster_tweets_tfidf_vectors)
not_disaster_tweets_similarity = MatrixSimilarity(not_disaster_tweets_tfidf_vectors)
# Now we can compare each tweet in the validation set to each set of tweets (disaster and non-disaster) in the training set. Whichever set contains a greater number of "similar enough" tweets (to be determined by a threshold) determines how the validation tweet will be labeled.
#
# First, configure the validation tweets in the same way that we did for the training tweets:
# +
valid_tweets = valid['text'].tolist()
valid_tweets_split = [
[word for word in tweet.split()]
for tweet in valid_tweets
]
valid_tweets_word_frequency = defaultdict(int)
for tweet in valid_tweets_split:
for word in tweet:
valid_tweets_word_frequency[word] += 1
valid_tweets_split = [
[word for word in tweet if valid_tweets_word_frequency[word] > 1]
for tweet in valid_tweets_split
]
# -
# We now have all the information we need to make our predictions! We can store our predictions in the `valid` DataFrame. This will make for easier access when comparing target to prediction.
#
# To do that, we need to initialize a new column in the DataFrame, let's call it `prediction`:
valid['prediction'] = np.zeros(len(valid)).astype('int')
# In order to make predictions using the model we just created, we have to compare each tweet in the validation data with each tweet in both the `disasters` DataFrame and the `not_disasters` DataFrame.
#
# Therefore, for each tweet, we:
# 1. Turn it into a BoW according to each set of tweets' Dictionary object.
# 2. Get a vector for it using each set's TF-IDF model.
# 3. Compare its vector with each set's full set of tweets using the MatrixSimilarity object we created earlier.
# 4. Tally up the total number of disaster and non-disaster tweets whose cosine similarity is greater than 0.1.
# 5. If the disaster tally is greater than the non-disaster tally, we change the value of the prediction column for this tweet in the `valid` DataFrame to 1 (otherwise, it stays 0, indicating a non-disastrous guess).
#
# This is exemplified below:
for row in range(len(valid)):
tweet = valid_tweets_split[row]
tweet_bow_with_disasters_dct = disaster_tweets_dct.doc2bow(tweet)
tweet_bow_with_not_disasters_dct = not_disaster_tweets_dct.doc2bow(tweet)
tweet_tfidf_vector_with_disasters_tfidf = disaster_tweets_tfidf[tweet_bow_with_disasters_dct]
tweet_tfidf_vector_with_not_disasters_tfidf = not_disaster_tweets_tfidf[tweet_bow_with_not_disasters_dct]
disaster_similarity_vector = disaster_tweets_similarity[tweet_tfidf_vector_with_disasters_tfidf]
not_disaster_similarity_vector = not_disaster_tweets_similarity[tweet_tfidf_vector_with_not_disasters_tfidf]
disaster_tally = np.where(disaster_similarity_vector > 0.1)[0].size # np.where() returns a tuple, so we have to index into [0] to get what we want
not_disaster_tally = np.where(not_disaster_similarity_vector > 0.1)[0].size
if disaster_tally > not_disaster_tally:
valid.loc[row, 'prediction'] = 1
# If all went well, we should be able to see our predictions in the `valid` DataFrame...
valid.head()
# Look at that! Seems we've made some predictions! But how well did we do?
#
# Let's take a look at both the accuracy and F1 score:
accuracy = accuracy_score(valid['target'], valid['prediction'])
F1 = f1_score(valid['target'], valid['prediction'])
accuracy, F1
# `64.08%` accuracy! That's not too shabby for just looking at word frequencies...
#
# But what happens if we calculate tweet similarities using TF-IDF again, but this time using the preprocessed data that we prepared in the last section?
#
# Let's start by seeing how our scores improve with the "simply" cleaned tweets.
# ### TF-IDF with "Simple" Tweets
# Before we go any further, we'll need to get rid of the predictions we just made in `valid`.
valid = valid.drop(columns=['prediction'])
valid.head()
# The process this time around will, in fact, be exactly the same as last time! The only change we need to make is that we are indexing into the `text_simple` column of the `disasters` and `not_disasters` DataFrames.
#
# Since the procedure is the same, let's skip to the metrics! (You can still expand the code below if you need a closer look.)
# +
#collapse
disaster_tweets = disasters['text_simple'].tolist()
not_disaster_tweets = not_disasters['text_simple'].tolist()
disaster_tweets_split = [
[word for word in tweet.split()]
for tweet in disaster_tweets
]
not_disaster_tweets_split = [
[word for word in tweet.split()]
for tweet in not_disaster_tweets
]
disaster_tweets_word_frequency = defaultdict(int)
for tweet in disaster_tweets_split:
for word in tweet:
disaster_tweets_word_frequency[word] += 1
not_disaster_tweets_word_frequency = defaultdict(int)
for tweet in not_disaster_tweets_split:
for word in tweet:
not_disaster_tweets_word_frequency[word] += 1
disaster_tweets_split = [
[word for word in tweet if disaster_tweets_word_frequency[word] > 1]
for tweet in disaster_tweets_split
]
not_disaster_tweets_split = [
[word for word in tweet if not_disaster_tweets_word_frequency[word] > 1]
for tweet in not_disaster_tweets_split
]
disaster_tweets_dct = Dictionary(disaster_tweets_split)
not_disaster_tweets_dct = Dictionary(not_disaster_tweets_split)
disaster_tweets_corpus = [disaster_tweets_dct.doc2bow(tweet) for tweet in disaster_tweets_split]
not_disaster_tweets_corpus = [not_disaster_tweets_dct.doc2bow(tweet) for tweet in not_disaster_tweets_split]
disaster_tweets_tfidf = TfidfModel(disaster_tweets_corpus)
not_disaster_tweets_tfidf = TfidfModel(not_disaster_tweets_corpus)
disaster_tweets_tfidf_vectors = disaster_tweets_tfidf[disaster_tweets_corpus]
not_disaster_tweets_tfidf_vectors = not_disaster_tweets_tfidf[not_disaster_tweets_corpus]
disaster_tweets_similarity = MatrixSimilarity(disaster_tweets_tfidf_vectors)
not_disaster_tweets_similarity = MatrixSimilarity(not_disaster_tweets_tfidf_vectors)
valid_tweets = valid['text_simple'].tolist()
valid_tweets_split = [
[word for word in tweet.split()]
for tweet in valid_tweets
]
valid_tweets_word_frequency = defaultdict(int)
for tweet in valid_tweets_split:
for word in tweet:
valid_tweets_word_frequency[word] += 1
valid_tweets_split = [
[word for word in tweet if valid_tweets_word_frequency[word] > 1]
for tweet in valid_tweets_split
]
valid['prediction'] = np.zeros(len(valid)).astype('int')
for row in range(len(valid)):
tweet = valid_tweets_split[row]
tweet_bow_with_disasters_dct = disaster_tweets_dct.doc2bow(tweet)
tweet_bow_with_not_disasters_dct = not_disaster_tweets_dct.doc2bow(tweet)
tweet_tfidf_vector_with_disasters_tfidf = disaster_tweets_tfidf[tweet_bow_with_disasters_dct]
tweet_tfidf_vector_with_not_disasters_tfidf = not_disaster_tweets_tfidf[tweet_bow_with_not_disasters_dct]
disaster_similarity_vector = disaster_tweets_similarity[tweet_tfidf_vector_with_disasters_tfidf]
not_disaster_similarity_vector = not_disaster_tweets_similarity[tweet_tfidf_vector_with_not_disasters_tfidf]
disaster_tally = np.where(disaster_similarity_vector > 0.1)[0].size # np.where() returns a tuple, so we have to index into [0] to get what we want
not_disaster_tally = np.where(not_disaster_similarity_vector > 0.1)[0].size
if disaster_tally > not_disaster_tally:
valid.loc[row, 'prediction'] = 1
# -
accuracy = accuracy_score(valid['target'], valid['prediction'])
F1 = f1_score(valid['target'], valid['prediction'])
accuracy, F1
# `66.60%` accuracy; we've gotten better! Notice that our F1 score has gone up also, from `0.53` to `0.57`.
#
# For the last of the TF-IDF similarities, let's see how things go if we use the tweets that were preprocessed with SpaCy:
# ### TF-IDF with SpaCy Tweets
# Same process as before, let's clear the old predictions from `valid` and skip to the metrics!
valid = valid.drop(columns=['prediction'])
# +
#collapse
disaster_tweets = disasters['text_spacy'].tolist()
not_disaster_tweets = not_disasters['text_spacy'].tolist()
disaster_tweets_split = [
[word for word in tweet.split()]
for tweet in disaster_tweets
]
not_disaster_tweets_split = [
[word for word in tweet.split()]
for tweet in not_disaster_tweets
]
disaster_tweets_word_frequency = defaultdict(int)
for tweet in disaster_tweets_split:
for word in tweet:
disaster_tweets_word_frequency[word] += 1
not_disaster_tweets_word_frequency = defaultdict(int)
for tweet in not_disaster_tweets_split:
for word in tweet:
not_disaster_tweets_word_frequency[word] += 1
disaster_tweets_split = [
[word for word in tweet if disaster_tweets_word_frequency[word] > 1]
for tweet in disaster_tweets_split
]
not_disaster_tweets_split = [
[word for word in tweet if not_disaster_tweets_word_frequency[word] > 1]
for tweet in not_disaster_tweets_split
]
disaster_tweets_dct = Dictionary(disaster_tweets_split)
not_disaster_tweets_dct = Dictionary(not_disaster_tweets_split)
disaster_tweets_corpus = [disaster_tweets_dct.doc2bow(tweet) for tweet in disaster_tweets_split]
not_disaster_tweets_corpus = [not_disaster_tweets_dct.doc2bow(tweet) for tweet in not_disaster_tweets_split]
disaster_tweets_tfidf = TfidfModel(disaster_tweets_corpus)
not_disaster_tweets_tfidf = TfidfModel(not_disaster_tweets_corpus)
disaster_tweets_tfidf_vectors = disaster_tweets_tfidf[disaster_tweets_corpus]
not_disaster_tweets_tfidf_vectors = not_disaster_tweets_tfidf[not_disaster_tweets_corpus]
disaster_tweets_similarity = MatrixSimilarity(disaster_tweets_tfidf_vectors)
not_disaster_tweets_similarity = MatrixSimilarity(not_disaster_tweets_tfidf_vectors)
valid_tweets = valid['text_spacy'].tolist()
valid_tweets_split = [
[word for word in tweet.split()]
for tweet in valid_tweets
]
valid_tweets_word_frequency = defaultdict(int)
for tweet in valid_tweets_split:
for word in tweet:
valid_tweets_word_frequency[word] += 1
valid_tweets_split = [
[word for word in tweet if valid_tweets_word_frequency[word] > 1]
for tweet in valid_tweets_split
]
valid['prediction'] = np.zeros(len(valid)).astype('int')
for row in range(len(valid)):
tweet = valid_tweets_split[row]
tweet_bow_with_disasters_dct = disaster_tweets_dct.doc2bow(tweet)
tweet_bow_with_not_disasters_dct = not_disaster_tweets_dct.doc2bow(tweet)
tweet_tfidf_vector_with_disasters_tfidf = disaster_tweets_tfidf[tweet_bow_with_disasters_dct]
tweet_tfidf_vector_with_not_disasters_tfidf = not_disaster_tweets_tfidf[tweet_bow_with_not_disasters_dct]
disaster_similarity_vector = disaster_tweets_similarity[tweet_tfidf_vector_with_disasters_tfidf]
not_disaster_similarity_vector = not_disaster_tweets_similarity[tweet_tfidf_vector_with_not_disasters_tfidf]
disaster_tally = np.where(disaster_similarity_vector > 0.1)[0].size # np.where() returns a tuple, so we have to index into [0] to get what we want
not_disaster_tally = np.where(not_disaster_similarity_vector > 0.1)[0].size
if disaster_tally > not_disaster_tally:
valid.loc[row, 'prediction'] = 1
# -
accuracy = accuracy_score(valid['target'], valid['prediction'])
F1 = f1_score(valid['target'], valid['prediction'])
accuracy, F1
# With SpaCy lemmatization and removal of stop words, we've actually gotten the worst results of the three datasets, with an accuracy of `64.02%` and an F1 score of `0.49`.
#
# So it seems that of the three preprocessing techniques used in a TF-IDF model, in this case, "simple" cleaning worked the best, with an accuracy of `66.60%` and an F1 score of `0.57`.
#
# Let's now move forward with Word2Vec.
# ## Word2Vec
#collapse
from gensim.models import Word2Vec
from gensim.models.phrases import Phrases, Phraser
import time
# The second method for text classification that we'll use is **word vectors**.
#
# Word vectors were first introduced by Mikolov et al.[[1]](https://arxiv.org/pdf/1301.3781.pdf)[[2]](https://arxiv.org/pdf/1310.4546.pdf) and provide highly accurate results on word similarity tasks at relatively low computational cost. You can think of a word vector as a fixed-length, 1-dimensional array of numbers learned by a neural network. Word similarity is then measured by the [cosine distance](https://en.wikipedia.org/wiki/Cosine_similarity) between two vectors.
#
# Word vectors, interestingly, encode linguistic regularities and patterns, and many of these patterns can be expressed as linear translations. For example, `vector(king) - vector(man) + vector(woman)` is going to be very close to `vector(queen)`. This is surprising!
#
# Let's see how word vectors do at predicting disaster tweets.
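# > Note: The analogy above is easy to check yourself with a set of pretrained vectors. The cell below is an illustrative aside (it isn't used anywhere else in this notebook) and assumes you're happy to let gensim download the pretrained `glove-wiki-gigaword-50` vectors the first time it runs.
# +
#collapse
import gensim.downloader as api
pretrained_wv = api.load('glove-wiki-gigaword-50')  # small set of pretrained 50-dimensional GloVe vectors
print(pretrained_wv.similarity('king', 'queen'))  # cosine similarity between two word vectors
print(pretrained_wv.most_similar(positive=['king', 'woman'], negative=['man'], topn=3))  # 'queen' should rank near the top
# -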
# #### Word2Vec Preprocessing
# We'll be using gensim's Word2Vec module, which processes text using a `min_count` parameter. This parameter keeps only the words in the input that occur at least `min_count` times. This will cause problems later on when trying to classify the tweets in the validation set, because some of their words will have occurred fewer than `min_count` times and will therefore throw an "out-of-vocabulary" (OOV) error.
#
# In order to remedy this, we have two options:
# 1. Train the Word2Vec model and then remove the words from the validation tweets that are not in the trained vocabulary.
# 2. Preemptively change the words in our corpus that occur less than the expected `min_count` number of times with some sort of "unknown" character.
#
# Both of these methods alter the original tweet that we'll be classifying, but the latter option seems to adhere more closely to the original meaning of the tweet. If we drop words, we could end up with an entirely new sentence with an entirely new grammatical structure and meaning. Whereas if we replace the words that occur fewer than `min_count` times with an unknown character, the original grammatical structure of each sentence is kept intact, creating a closer tie to the tweet's original meaning.
#
# To do this efficiently, I've created a function `replace_unknowns()` that replaces the words in a text which occur less than a specified `min_count` number of times with `'UNK'`. We can use this to alter the preprocessed columns that we made earlier and store them in our original `data` DataFrame.
#collapse
def replace_unknowns(search_texts, min_count):
"""
Replaces words that occur less than a certain number of times
in a string or list of strings with 'UNK'.
Parameters
----------
search_texts : list
A list of input strings to iterate over.
min_count : int
        An integer specifying the minimum number of times a word must occur
        in search_texts for it not to be replaced with 'UNK'.
Returns
-------
list
        List of search_texts with words that occur fewer than min_count
        times replaced with 'UNK'.
"""
    # Lowercase and tokenize all tweets.
    # This makes sense because we'd never want to
    # treat an 'a' differently from an 'A'.
    # (Capitalization is just an orthographical convention.)
    texts = [
        re.split(r'\s+', text.lower())
        for text in search_texts
    ]
    # Create a dictionary that stores the count of each
    # word in our uncleaned tweets. We can insert new words
    # into the dict or add to their count if they're already in it.
    vocab_counts = defaultdict(int)
    for text in texts:
        for word in text:
            vocab_counts[word] += 1
    # Build the set of words that occur more than the desired
    # threshold (min_count) number of times. Using a set keeps the
    # membership checks in the loop below fast.
    vocab = set()
    for word in vocab_counts.keys():
        if vocab_counts[word] > min_count:
            vocab.add(word)
    # Now go through each tweet and replace every word that is not
    # in the retained vocabulary with 'UNK', then rejoin the tokens
    # into a single string. The caller can assign the returned list
    # to a new column of the original DataFrame.
    out = []
    for text in texts:
        text_replaced = []
        for word in text:
            if word in vocab:
                text_replaced.append(word)
            else:
                text_replaced.append('UNK')
        text_replaced = ' '.join(text_replaced)
        out.append(text_replaced)
    return out
# Below, we'll use `min_count=5` as one of our [hyperparameters](https://en.wikipedia.org/wiki/Hyperparameter_(machine_learning)) in our Word2Vec model, so let's replace all of the words in all three of our preprocessed tweet columns (`text_trigram`, `text_trigram_simple`, and `text_trigram_spacy`) in each DataFrame with `'UNK'`.
#
# > Important: We're using the tweets with the n-grams that we built in the Data Augmentation section for our Word2Vec model. If you skipped it, go back now!
#
# > Note: Normally this would happen during the initial preprocessing stage, allowing us to only need to call `replace_unknowns()` on our initial `data` DataFrame. Because we're calling `replace_unknowns()` after we've already split our data into training and validation sets, we need to call the function on all of the DataFrames we've already created.
#collapse
data['text_count_5'] = replace_unknowns(data['text_trigram'], 5)
data['text_simple_5'] = replace_unknowns(data['text_trigram_simple'], 5)
data['text_spacy_5'] = replace_unknowns(data['text_trigram_spacy'], 5)
data.head()
# +
#collapse
valid['text_count_5'] = replace_unknowns(valid['text_trigram'], 5)
valid['text_simple_5'] = replace_unknowns(valid['text_trigram_simple'], 5)
valid['text_spacy_5'] = replace_unknowns(valid['text_trigram_spacy'], 5)
disasters['text_count_5'] = replace_unknowns(disasters['text_trigram'], 5)
disasters['text_simple_5'] = replace_unknowns(disasters['text_trigram_simple'], 5)
disasters['text_spacy_5'] = replace_unknowns(disasters['text_trigram_spacy'], 5)
not_disasters['text_count_5'] = replace_unknowns(not_disasters['text_trigram'], 5)
not_disasters['text_simple_5'] = replace_unknowns(not_disasters['text_trigram_simple'], 5)
not_disasters['text_spacy_5'] = replace_unknowns(not_disasters['text_trigram_spacy'], 5)
# -
# Great, now our data is set up and ready to be used with a Word2Vec model!
# ### Word2Vec with Unprocessed Tweets
# First and foremost, let's get rid of the `valid['prediction']` column that we made using TF-IDF.
valid = valid.drop(columns=['prediction'])
# Initialize our Word2Vec model.
# > Note: I'm splitting up the training of the model into three steps. See [this notebook](https://www.kaggle.com/pierremegret/gensim-word2vec-tutorial/comments) for more details on why (and Word2Vec in general).
model = Word2Vec(min_count=5, sample=1e-3, workers=4, seed=24)
# Build the vocab for our model.
#
# The `.build_vocab()` method expects an iterable of a list of strings as its input, so first we split our tweets to adhere to that. Notice that we're looping through all of the tweets in our original `data` DataFrame rather than the `train` DataFrame we created. This is because we need the vocabulary of *all* tweets (in both the training and validation data) in order to properly compare tweets in the training data to tweets in the validation data. If we just built our model on the training data, many of the words in the validation tweets would throw OOV errors!
# +
tweets = [
[wd for wd in tweet.split(' ')]
for tweet in data['text_count_5']
]
model.build_vocab(tweets)
# -
# Now we can train the model over 30 epochs (cycles).
model.train(tweets, total_examples=model.corpus_count, epochs=30)
# Now we normalize the vectors in the vocabulary for consistency.
# > Important: You wouldn't do this if you were going to train further down the line. See [this notebook](https://github.com/RaRe-Technologies/gensim/blob/develop/docs/notebooks/online_w2v_tutorial.ipynb) for more information about expanding your model's vocabulary.
model.wv.init_sims(replace=True)
# Now we can make our predictions.
#
# Just as when we were doing TF-IDF, we need to initialize a `prediction` column in the `valid` DataFrame to store our predictions.
valid['prediction'] = np.zeros(len(valid)).astype('int')
# Similar to how we predicted whether a tweet was a disaster or not with TF-IDF, we have to compare each tweet in the validation data with each tweet in both the `disasters` DataFrame and the `not_disasters` DataFrame.
#
# So, this time, for each tweet, we:
# 1. Split the validation tweet on all whitespace characters.
# 2. Calculate the similarity between the validation tweet and each disaster and non-disaster tweet (also split on whitespace characters).
# 3. If the similarity between the two tweets is greater than 0.7, add to that tweet set's tally.
# 4. If the disaster tally is greater than the non-disaster tally, we change the value of the prediction column for the validation tweet to 1 (otherwise, it remains 0, indicating a non-disastrous guess).
#
# This is exemplified below:
#
# > Note: This model takes a little bit of time to train. It took almost 16 minutes on my machine.
# +
start_time = time.time()
for valid_row in range(len(valid)):
valid_tweet = valid.loc[valid_row, 'text_count_5']
tokenized_valid_tweet = re.split('\s+', valid_tweet) # split on all whitespace characters
disaster_count = 0
not_disaster_count = 0
# we can just reuse "disasters" and
# "not_disasters" from earlier!
for disaster_row in range(len(disasters)):
disaster_tweet = disasters.loc[disaster_row, 'text_count_5']
tokenized_disaster_tweet = re.split('\s+', disaster_tweet)
if model.wv.n_similarity(tokenized_valid_tweet, tokenized_disaster_tweet) > 0.7:
disaster_count += 1
for not_disaster_row in range(len(not_disasters)):
not_disaster_tweet = not_disasters.loc[not_disaster_row, 'text_count_5']
tokenized_not_disaster_tweet = re.split('\s+', not_disaster_tweet)
if model.wv.n_similarity(tokenized_valid_tweet, tokenized_not_disaster_tweet) > 0.7:
not_disaster_count += 1
if disaster_count > not_disaster_count:
valid.loc[valid_row, 'prediction'] = 1
end_time = time.time()
print(f'Runtime: {(end_time - start_time) / 60.0} mins')
# -
# Now let's take another look at the `valid` DataFrame to see if we've got some predictions...
valid.head()
# Seems to have worked!
#
# Now let's find out the accuracy and F1 score of our Word2Vec model using the unprocessed tweet data.
accuracy = accuracy_score(valid['target'], valid['prediction'])
F1 = f1_score(valid['target'], valid['prediction'])
accuracy, F1
# `63.39%` accuracy! That's about the same as the TF-IDF model. The F1 score on the other hand... yikes! `0.28`. Horrible!
#
# > Note: There's a bit of randomness involved when making predictions with Word2Vec and transfer learning (coming up). For this reason, if you run this notebook, your metrics may be a little different than what is shown here.
#
# Can we improve that with either of the preprocessed tweets?
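# > Note: On the Word2Vec side, most of that randomness comes from training with multiple worker threads (and from Python's string-hash randomization). If you ever need a fully reproducible run, the usual advice (e.g., in gensim's FAQ) is to train with a single worker thread and launch Python with a fixed `PYTHONHASHSEED`, at the cost of slower training. The cell below is just an illustrative sketch of that setup; it isn't used in the experiments here.
#collapse
reproducible_model = Word2Vec(min_count=5, sample=1e-3, workers=1, seed=24)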
# ### Word2Vec with "Simple" Tweets
# Once again, we clear out the predictions we've just made from `valid`.
valid = valid.drop(columns=['prediction'])
# Just like with TF-IDF (seeing a trend here?), the process this time around will be exactly the same as before. The only change we need to make is to index into the `text_simple_5` column of the `data`, `valid`, `disasters`, and `not_disasters` DataFrames.
#
# Since the procedures are the same, let's skip to the metrics! (You can still expand the code below if you need a closer look.)
# +
#collapse
start_time = time.time()
model = Word2Vec(min_count=5, sample=1e-3, workers=4, seed=24)
tweets = [
[wd for wd in tweet.split(' ')]
for tweet in data['text_simple_5']
]
model.build_vocab(tweets)
model.train(tweets, total_examples=model.corpus_count, epochs=30)
model.wv.init_sims(replace=True)
valid['prediction'] = np.zeros(len(valid)).astype('int')
for valid_row in range(len(valid)):
valid_tweet = valid.loc[valid_row, 'text_simple_5']
tokenized_valid_tweet = re.split('\s+', valid_tweet) # split on all whitespace characters
disaster_count = 0
not_disaster_count = 0
# we can just reuse "disasters" and
# "not_disasters" from earlier!
for disaster_row in range(len(disasters)):
disaster_tweet = disasters.loc[disaster_row, 'text_simple_5']
tokenized_disaster_tweet = re.split('\s+', disaster_tweet)
if model.wv.n_similarity(tokenized_valid_tweet, tokenized_disaster_tweet) > 0.7:
disaster_count += 1
for not_disaster_row in range(len(not_disasters)):
not_disaster_tweet = not_disasters.loc[not_disaster_row, 'text_simple_5']
tokenized_not_disaster_tweet = re.split('\s+', not_disaster_tweet)
if model.wv.n_similarity(tokenized_valid_tweet, tokenized_not_disaster_tweet) > 0.7:
not_disaster_count += 1
if disaster_count > not_disaster_count:
valid.loc[valid_row, 'prediction'] = 1
end_time = time.time()
print(f'Runtime: {(end_time - start_time) / 60.0} mins')
# -
accuracy = accuracy_score(valid['target'], valid['prediction'])
F1 = f1_score(valid['target'], valid['prediction'])
accuracy, F1
# Quite an improvement! Our accuracy and F1 score went up to `67.28%` and `0.43`, respectively.
#
# Now let's see how the SpaCy tweets perform in our Word2Vec model.
# ### Word2Vec with SpaCy Tweets
# Same process as before, let's clear the old predictions from `valid` and skip to the metrics!
valid = valid.drop(columns=['prediction'])
# +
#collapse
start_time = time.time()
model = Word2Vec(min_count=5, sample=1e-3, workers=4, seed=24)
tweets = [
[wd for wd in tweet.split(' ')]
for tweet in data['text_spacy_5']
]
model.build_vocab(tweets)
model.train(tweets, total_examples=model.corpus_count, epochs=30)
model.wv.init_sims(replace=True)
valid['prediction'] = np.zeros(len(valid)).astype('int')
for valid_row in range(len(valid)):
valid_tweet = valid.loc[valid_row, 'text_spacy_5']
tokenized_valid_tweet = re.split('\s+', valid_tweet) # split on all whitespace characters
disaster_count = 0
not_disaster_count = 0
# we can just reuse "disasters" and
# "not_disasters" from earlier!
for disaster_row in range(len(disasters)):
disaster_tweet = disasters.loc[disaster_row, 'text_spacy_5']
tokenized_disaster_tweet = re.split('\s+', disaster_tweet)
if model.wv.n_similarity(tokenized_valid_tweet, tokenized_disaster_tweet) > 0.7:
disaster_count += 1
for not_disaster_row in range(len(not_disasters)):
not_disaster_tweet = not_disasters.loc[not_disaster_row, 'text_spacy_5']
tokenized_not_disaster_tweet = re.split('\s+', not_disaster_tweet)
if model.wv.n_similarity(tokenized_valid_tweet, tokenized_not_disaster_tweet) > 0.7:
not_disaster_count += 1
if disaster_count > not_disaster_count:
valid.loc[valid_row, 'prediction'] = 1
end_time = time.time()
print(f'Runtime: {(end_time - start_time) / 60.0} mins')
# -
accuracy = accuracy_score(valid['target'], valid['prediction'])
F1 = f1_score(valid['target'], valid['prediction'])
accuracy, F1
# SpaCy, this time, comes in the middle of our three tests with an accuracy of `64.44%` and F1 score of `0.32`.
#
# Among the three datasets trained with a Word2Vec model, the "simple" tweets seem to have it again with an accuracy of `67.28%` and an F1 score of `0.43`.
#
# Lastly, let's turn to transfer learning.
# ## Transfer Learning with fastai
#collapse
from fastai.text.all import *
# Rather than create our own neural network from scratch to compete with something like Word2Vec, we can use transfer learning to quickly adapt a model that's already been trained on far more data than we have to our own language data.
#
# From [<NAME>](https://machinelearningmastery.com/transfer-learning-for-deep-learning/):
# > Transfer learning is a machine learning method where a model developed for a task is reused as the starting point for a model on a second task.
#
# In order to perform transfer learning, we'll be using [fastai](https://docs.fast.ai/). Fastai is great because it really simplifies the training procedure, thereby making it super easy to perform an array of deep learning tasks.
#
# We'll need two classes from fastai to conduct transfer learning with text: `language_model_learner` and `text_classifier_learner`. The former will allow us to shape the pretrained model with our own data to make a new language model, while the latter will allow us to create a classifier model for the tweets we have (the same task we've been doing above).
#
# Let's start, per usual, with the unprocessed tweets.
# ### Transfer Learning with Unprocessed Tweets
# Fastai uses PyTorch under the hood, which requires our data to be formatted in [a specific way](https://pytorch.org/docs/stable/data.html). In order to do this most efficiently, we can use fastai's `DataBlock` object and `.dataloaders()` method. With `DataBlock`, we can:
# 1. Directly pull our columns from the dataframe that we'd like to train *and* test on.
# 2. Split the data however we'd like.
# 3. [And more!](https://docs.fast.ai/data.block#DataBlock)
#
# Let's start by creating a `DataBlock` that we'll pass to `language_model_learner` to create a new language model tailored to our data.
dls_lm = DataBlock(
blocks=(TextBlock.from_df('text', is_lm=True)),
get_items=ColReader('text'),
splitter=RandomSplitter(0.1)
).dataloaders(data, bs=128, seq_len=80)
# Note that there is only one block in the `DataBlock` we just created: a `TextBlock`. All we need to create a language model is the text (we don't care about the categories yet), so we only need one block in the `DataBlock`. We also need to pass the parameter `is_lm=True` when creating the `TextBlock`, to indicate that this data is for our language model.
#
# Now we can use `.show_batch()` to take a look at our newly formatted data:
dls_lm.show_batch(max_n=2)
# We can now instantiate our `language_model_learner` using the `DataBlock` we just created and `AWD_LSTM`, which is a pretrained model provided by fastai. You can learn more about `AWD_LSTM` [here](https://arxiv.org/pdf/1708.02182.pdf).
learn = language_model_learner(
dls_lm, AWD_LSTM, drop_mult=0.3,
metrics=[accuracy, Perplexity()])
# All that's left to do is fit our language model!
#
# You'll note that fastai also provides super clear, customizable output for each training cycle.
learn.fit_one_cycle(1, 2e-2)
learn.fit_one_cycle(10,2e-3)
# The accuracy above represents the model's ability to predict the next word in a sequence from our disaster tweets data. `44.16%`! That's pretty dang good for something that took about the same time as our Word2Vec models.
#
# But we're not after text prediction, we're after text classification. Let's turn to that now.
#
# First, let's create a `DataBlock` that we'll pass to `text_classifier_learner`. Notice that now we're passing two blocks to the `blocks` parameter: `TextBlock` and `CategoryBlock`. We specify these with the `get_x` and `get_y` parameters. It is also important to note the new `TextBlock` parameter `vocab`. Without this, the language model fitting we did above will mean nothing!
dls_clas = DataBlock(
blocks=(TextBlock.from_df('text', vocab=dls_lm.vocab, seq_len=80), CategoryBlock),
get_x=ColReader('text'),
get_y=ColReader('target'),
splitter=RandomSplitter()
).dataloaders(data, bs=128, seq_len=80)
# Check to see that our data is how we want it.
dls_clas.show_batch(max_n=3)
# Now it's time to create our text classifier model, again using transfer learning from the `AWD_LSTM` model provided by fastai. This time we want to see the accuracy and F1 score when testing on the validation set.
learn = text_classifier_learner(
dls_clas, AWD_LSTM, drop_mult=0.5,
metrics=[accuracy, F1Score()])
# Now we can fit:
learn.fit_one_cycle(1, 2e-2)
# And that's. It.
#
# Crazy, right?! One last step that we need to take care of to inch our model's accuracy up further is [gradual unfreezing](https://stats.stackexchange.com/questions/393168/what-does-it-mean-to-freeze-or-unfreeze-a-model). Unfreezing a few layers at a time seems to make a meaningful difference in NLP, so we'll do that here (in computer vision, the model will often be unfrozen all at once).
learn.freeze_to(-2)
learn.fit_one_cycle(1, slice(1e-2/(2.6**4),1e-2))
learn.freeze_to(-3)
learn.fit_one_cycle(1, slice(5e-3/(2.6**4),5e-3))
learn.unfreeze()
learn.fit_one_cycle(2, slice(1e-3/(2.6**4),1e-3))
# After fully unfreezing and fitting our model, our accuracy is... `79.30%`! Over 11% better than our best Word2Vec! Impressive. Our F1 score of `0.738` also blows away our best Word2Vec F1 score of `0.433`. Impressive, indeed.
#
# But how will transfer learning perform with the preprocessed tweets? Let's find out!
# ### Transfer Learning with "Simple" Tweets
# In order to repeat the same process for transfer learning on the preprocessed tweets, we'll need to create a whole new language model for each set. This is done almost exactly in the same way as above. The two differences are:
# 1. The column that you're selecting from will change from `text` to `text_simple` or `text_spacy`.
# 2. The `get_x` parameter when creating the `DataBlock` for the `text_classifier_learner`, `dls_clas`, must *remain* `text`, no matter the name of the column in the DataFrame that you are using as the independent variable. [[1]](https://forums.fast.ai/t/issue-with-textblock-from-df-dataloaders-only-accepting-one-column-name/77467)
#
# Knowing this, let's fit our language model!
# +
#collapse
dls_lm = DataBlock(
blocks=(TextBlock.from_df('text_simple', is_lm=True)),
get_items=ColReader('text_simple'),
splitter=RandomSplitter(0.1)
).dataloaders(data, bs=128, seq_len=80)
learn = language_model_learner(dls_lm, AWD_LSTM, drop_mult=0.3, metrics=[accuracy, Perplexity()])
learn.fit_one_cycle(1, 2e-2)
learn.fit_one_cycle(10,2e-3)
dls_clas = DataBlock(
blocks=(TextBlock.from_df('text_simple', vocab=dls_lm.vocab, seq_len=80), CategoryBlock),
get_x=ColReader('text'),
get_y=ColReader('target'),
splitter=RandomSplitter()
).dataloaders(data, bs=128, seq_len=80)
# -
# Fit our text classifier:
learn = text_classifier_learner(dls_clas, AWD_LSTM, drop_mult=0.5, metrics=[accuracy, F1Score()])
learn.fit_one_cycle(1, 2e-2)
# Now gradually unfreeze:
learn.freeze_to(-2)
learn.fit_one_cycle(1, slice(1e-2/(2.6**4),1e-2))
learn.freeze_to(-3)
learn.fit_one_cycle(1, slice(5e-3/(2.6**4),5e-3))
learn.unfreeze()
learn.fit_one_cycle(2, slice(1e-3/(2.6**4),1e-3))
# Accuracy: `78.58%`. F1 score: `0.731`.
#
# Nearly the same as, but not quite better than the unprocessed tweets. This is the opposite of what happened with TF-IDF and Word2Vec.
#
# Let's see how the SpaCy tweets perform:
# ### Transfer Learning with SpaCy Tweets
# Let's do the same thing with our tweets preprocessed with SpaCy.
#
# First, the language model:
# +
#collapse
dls_lm = DataBlock(
blocks=(TextBlock.from_df('text_spacy', is_lm=True)),
get_items=ColReader('text_spacy'),
splitter=RandomSplitter(0.1)
).dataloaders(data, bs=128, seq_len=80)
learn = language_model_learner(dls_lm, AWD_LSTM, drop_mult=0.3, metrics=[accuracy, Perplexity()])
learn.fit_one_cycle(1, 2e-2)
learn.fit_one_cycle(10,2e-3)
dls_clas = DataBlock(
blocks=(TextBlock.from_df('text_spacy', vocab=dls_lm.vocab, seq_len=80), CategoryBlock),
get_x=ColReader('text'),
get_y=ColReader('target'),
splitter=RandomSplitter()
).dataloaders(data, bs=128, seq_len=80)
# -
# Then, fit the text classifier:
learn = text_classifier_learner(dls_clas, AWD_LSTM, drop_mult=0.5, metrics=[accuracy, F1Score()])
learn.fit_one_cycle(1, 2e-2)
# Gradually unfreeze the model:
learn.freeze_to(-2)
learn.fit_one_cycle(1, slice(1e-2/(2.6**4),1e-2))
learn.freeze_to(-3)
learn.fit_one_cycle(1, slice(5e-3/(2.6**4),5e-3))
learn.unfreeze()
learn.fit_one_cycle(2, slice(1e-3/(2.6**4),1e-3))
# Et voilà!
#
# Accuracy: `79.00%`. F1 score: `0.724`.
#
# So between the three datasets used in transfer learning, the unprocessed dataset seemed to perform the best! Unexpected, indeed.
# # Conclusion
# Now that we've gone through each model: TF-IDF, Word2Vec, and transfer learning, it's time to compare the results:
#
# Model | Dataset | Accuracy | F1 Score
# ---------- | ----------- | ------------- | -----------
# **TF-IDF** | Unprocessed | 64.08% | 0.530
# '' | "Simple" | 66.60% | 0.570
# '' | SpaCy | 64.02% | 0.489
# **Word2Vec** | Unprocessed | 63.39% | 0.278
# '' | "Simple" | 67.28% | 0.433
# '' | SpaCy | 64.44% | 0.324
# **Transfer Learning** | **Unprocessed** | **79.30%** | **0.738**
# '' | "Simple" | 75.58% | 0.731
# '' | SpaCy | 79.00% | 0.724
# And the winner is, unsurprisingly, transfer learning! What is surprising, however, is that of the three datasets that we used for transfer learning, the unprocessed dataset yielded the best results. This provides strong support for transfer learning, as it is able to extract nuances in natural language as opposed to augmented, unnatural language.
#
# If you're interested in getting more involved with transfer learning, I strongly recommend <NAME> and <NAME>' course [Deep Learning for Coders](https://youtu.be/_QUEXsHfsA0). At the time of writing, this is an excellent resource for getting a really good, modern grasp of deep learning, provided you've got some basic Python programming experience. And it's all free!
#
# With that, I'll leave the reader to experiment further with text classification and language modeling.
#
# Questions I'm now asking myself:
# * What other preprocessing methods or data augmentations techniques could we have used?
# * What's a transformer?
# * How does BERT work?
# * Where else can we apply text classification to somehow learn something meaningful?
| _notebooks/2020-10-31-Text-Classification-Comparison.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] deletable=true editable=true
# ## Import packages
# + deletable=true editable=true
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import savgol_filter
import cline_analysis as ca
import pandas as pd
import seaborn as sns
import datetime
import os
from scipy.signal import medfilt
import functools
from scipy.optimize import bisect
from scipy import stats
sns.set_style("whitegrid")
sns.set_style("ticks")
# %matplotlib qt
# %config InlineBackend.figure_format = 'svg'
plt.matplotlib.rcParams['svg.fonttype'] = 'svgfont' # fonts will be recognized by Adobe Illustrator
# + [markdown] deletable=true editable=true
# ## Load data
# + deletable=true editable=true
dirname = '/Users/zoltan/Dropbox/Channels/Fluvial/Jutai/csv_files/'
fnames,clxs,clys,rbxs,lbxs,rbys,lbys,curvatures,ages,widths,dates = ca.load_data(dirname)
# + deletable=true editable=true
fnames
# + deletable=true editable=true
dates
# + [markdown] deletable=true editable=true
# ## Get migration rate
# + deletable=true editable=true
ts1 = 0 # first timestep
ts2 = 1 # second timestep
d = dates[ts2]-dates[ts1]
years = d.days/365.0
x = np.array(clxs[ts1])
y = np.array(clys[ts1])
xn = np.array(clxs[ts2])
yn = np.array(clys[ts2])
migr_rate, migr_sign, p, q = ca.get_migr_rate(x,y,xn,yn,years,0)
# + deletable=true editable=true
migr_rate = medfilt(savgol_filter(migr_rate,41,3),kernel_size=5) # smoothing
curv,s = ca.compute_curvature(x,y)
curv = medfilt(savgol_filter(curv,71,3),kernel_size=5) # smoothing
# + deletable=true editable=true
# set intervals affected by cutoffs to NaN - specific to Jutai river
migr_rate[1086:1293] = np.NaN
# + deletable=true editable=true
plt.figure()
plt.plot(migr_rate)
# + [markdown] deletable=true editable=true
# ## Read 'valid' inflection points and corresponding points of zero migration from CSV file
# + deletable=true editable=true
df = pd.read_csv('Jutai_LT05_L1TP_003063_19890805_20170202_01_T1_inflection_and_zero_migration_indices.csv')
LZC = np.array(df['index of inflection point'])
LZM = np.array(df['index of zero migration'])
# + deletable=true editable=true
# indices of bends affected by low erodibility and cutoffs (these have been picked manually)
erodibility_inds = [69,115,117,119,163,189,191,204,218]
cutoff_inds = [7,8,9,14,15,29,30,50,51,58,59,185,194,209,210]
# + [markdown] deletable=true editable=true
# ## Plot curvature and migration rate series side-by-side
# + deletable=true editable=true
# plot curvature and migration rate along the channel
W = np.nanmean(widths[0]) # mean channel width
fig, ax1 = plt.subplots(figsize=(25,4))
plt.tight_layout()
curv_scale = 0.6
migr_scale = 3
y1 = curv_scale
y2 = -3*curv_scale
y3 = 3*migr_scale
y4 = -migr_scale
y5 = -2*curv_scale
y6 = 2*migr_scale
for i in range(0,len(LZC)-1,2):
xcoords = [s[LZC[i]],s[LZC[i+1]],s[LZC[i+1]],s[LZM[i+1]],s[LZM[i+1]],s[LZM[i]],s[LZM[i]],s[LZC[i]]]
ycoords = [y1,y1,0,y5,y2,y2,y5,0]
ax1.fill(xcoords,ycoords,facecolor=[0.85,0.85,0.85],edgecolor='k',zorder=0)
deltas = 25.0
ax1.fill_between(s, 0, curv*W)
ax2 = ax1.twinx()
ax2.fill_between(s, 0, migr_rate, facecolor='green')
ax1.plot([0,max(s)],[0,0],'k--')
ax2.plot([0,max(s)],[0,0],'k--')
ax1.set_ylim(y2,y1)
ax2.set_ylim(y4,y3)
ax1.set_xlim(0,s[-1])
for i in erodibility_inds:
xcoords = [s[LZC[i]],s[LZC[i+1]],s[LZC[i+1]],s[LZM[i+1]],s[LZM[i+1]],s[LZM[i]],s[LZM[i]],s[LZC[i]]]
ycoords = [y1,y1,0,y5,y2,y2,y5,0]
ax1.fill(xcoords,ycoords,facecolor=[1.0,0.85,0.85],edgecolor='k',zorder=0)
for i in cutoff_inds:
xcoords = [s[LZC[i]],s[LZC[i+1]],s[LZC[i+1]],s[LZM[i+1]],s[LZM[i+1]],s[LZM[i]],s[LZM[i]],s[LZC[i]]]
ycoords = [y1,y1,0,y5,y2,y2,y5,0]
ax1.fill(xcoords,ycoords,facecolor=[0.85,1.0,0.85],edgecolor='k',zorder=0)
for i in range(len(LZC)-1):
if np.sum(np.isnan(migr_rate[LZM[i]:LZM[i+1]]))>0:
xcoords = [s[LZC[i]],s[LZC[i+1]],s[LZC[i+1]],s[LZM[i+1]],s[LZM[i+1]],s[LZM[i]],s[LZM[i]],s[LZC[i]]]
ycoords = [y1,y1,0,y5,y2,y2,y5,0]
ax1.fill(xcoords,ycoords,color='w')
for i in range(len(LZC)-1):
if np.sum(np.isnan(migr_rate[LZM[i]:LZM[i+1]]))>0:
xcoords = [s[LZC[i]],s[LZC[i+1]],s[LZC[i+1]],s[LZM[i+1]],s[LZM[i+1]],s[LZM[i]],s[LZM[i]],s[LZC[i]]]
ycoords = [y3,y3,y6,0,y4,y4,0,y6]
ax2.fill(xcoords,ycoords,color='w')
for i in range(0,len(LZC)-1,2):
ax1.text(s[LZC[i]],0.5,str(i),fontsize=12)
# + [markdown] deletable=true editable=true
# ## Estimate lag between curvature and migration rate
# + deletable=true editable=true
# plot widths and boundary between the two segments
plt.figure()
plt.plot(s,widths[0])
plt.plot([s[9846],s[9846]],[0,800],'r')
# + deletable=true editable=true
# first segment: mean channel width
np.mean(widths[0][:9846])
# + deletable=true editable=true
# second segment: mean channel width
np.mean(widths[0][9846:])
# + deletable=true editable=true
# first segment: average lag (in meters) estimated from distances between
# inflection points and points of zero migration, using the 25 m spacing
# of centerline points (this is what was used in the paper)
np.mean(25.0*(LZM[:149]-LZC[:149]))
# + deletable=true editable=true
# second segment: average lag (in meters) estimated from distances between
# inflection points and points of zero migration
# (this is what was used in the paper)
np.mean(25.0*(LZM[149:]-LZC[149:]))
# + [markdown] deletable=true editable=true
# # First segment (<NAME>)
# + [markdown] deletable=true editable=true
# ## Estimate friction factor Cf
# + deletable=true editable=true
# first we need a continuous channel segment (e.g., no NaNs due to cutoffs)
q=np.array(q)
p=np.array(p)
i1 = 1293
i2 = 9846
i1n = p[np.where(q==i1)[0][0]]
i2n = p[np.where(q==i2)[0][0]]
xt = x[i1:i2]
yt = y[i1:i2]
xnt = xn[i1n:i2n]
ynt = yn[i1n:i2n]
plt.figure()
plt.plot(xt,yt)
plt.plot(xnt,ynt)
plt.axis('equal')
migr_rate_t, migr_sign_t, pt, qt = ca.get_migr_rate(xt,yt,xnt,ynt,years,0)
plt.figure()
plt.plot(migr_rate_t)
# + deletable=true editable=true
# this might take a while to run
kl = 3.0 # preliminary kl value (guesstimate)
k = 1
W = np.mean(widths[0][:9846])
D = (W/18.8)**0.7092 # depth in meters (from width)
dx,dy,ds,s = ca.compute_derivatives(xt,yt)
curv_t, s = ca.compute_curvature(xt,yt)
curv_t = medfilt(savgol_filter(curv_t,71,3),kernel_size=5) # smoothing
migr_rate_t = medfilt(savgol_filter(migr_rate_t,41,3),kernel_size=5)
get_friction_factor_1 = functools.partial(ca.get_friction_factor,curvature=curv_t,migr_rate=migr_rate_t,
kl=kl,W=W, k=k, D=D, s=s)
Cf_opt = bisect(get_friction_factor_1, 0.0002, 0.1)
print Cf_opt
# + deletable=true editable=true
Cf_opt = 0.00760703125
# + [markdown] deletable=true editable=true
# ## Estimate migration rate constant kl
# + deletable=true editable=true
# minimize the error between actual and predicted migration rates (using the 75th percentile)
errors = []
curv_t, s = ca.compute_curvature(xt,yt)
curv_t = medfilt(savgol_filter(curv_t,71,3),kernel_size=5) # smoothing
for i in np.arange(1,10):
print i
R1 = ca.get_predicted_migr_rate(curv_t,W=W,k=1,Cf=Cf_opt,D=D,kl=i,s=s)
errors.append(np.abs(np.percentile(np.abs(R1),75)-np.percentile(np.abs(migr_rate_t[1:-1]),75)))
plt.figure()
plt.plot(np.arange(1,10),errors);
# + deletable=true editable=true
kl_opt = 4.0 # the error is at minimum for kl = 4.0
# + deletable=true editable=true
310/25.0 # lag in number of centerline points (310 m / 25 m spacing); lag = 12 is used below
# + deletable=true editable=true
plt.figure()
plt.plot(W*kl_opt*curv_t)
plt.plot(migr_rate_t)
# + [markdown] deletable=true editable=true
# ## Plot actual migration rate against nominal migration rate
# + deletable=true editable=true
# kernel density and scatterplot of actual vs. nominal migration rate
w = np.nanmean(widths[0][:9846])
curv_nodim = W*curv_t*kl_opt
lag = 12
plt.figure(figsize=(8,8))
sns.kdeplot(curv_nodim[:-lag][np.isnan(migr_rate_t[lag:])==0], migr_rate_t[lag:][np.isnan(migr_rate_t[lag:])==0],
n_levels=20,shade=True,cmap='Blues',shade_lowest=False)
plt.scatter(curv_nodim[:-lag][::20],migr_rate_t[lag:][::20],c='k',s=15)
max_x = 2.5
plt.xlim(-max_x,max_x)
plt.ylim(-max_x,max_x)
plt.plot([-max_x,max_x],[-max_x,max_x],'k--')
plt.xlabel('nominal migration rate (m/year)', fontsize=14)
plt.ylabel('actual migration rate (m/year)', fontsize=14)
# + deletable=true editable=true
# get correlation coefficient for relationship between curvature and migration rate
slope, intercept, r_value, p_value, slope_std_rror = stats.linregress(curv_nodim[:-lag][np.isnan(migr_rate_t[lag:])==0],
migr_rate_t[lag:][np.isnan(migr_rate_t[lag:])==0])
print r_value
print r_value**2
print p_value
# + deletable=true editable=true
# number of data points used in analysis
len(curv_nodim[:-lag][np.isnan(migr_rate_t[lag:])==0])
# + deletable=true editable=true
# compute predicted migration rates
D = (w/18.8)**0.7092 # depth in meters (from width)
dx,dy,ds,s = ca.compute_derivatives(xt,yt)
R1 = ca.get_predicted_migr_rate(curv_t,W=w,k=1,Cf=Cf_opt,D=D,kl=kl_opt,s=s)
# + deletable=true editable=true
# plot actual and predicted migration rates
plt.figure()
plt.plot(s,migr_rate_t)
plt.plot(s,R1,'r')
# + deletable=true editable=true
# get correlation coefficient for relationship between actual and predicted migration rate
m_nonan = migr_rate_t[(np.isnan(R1)==0)&(np.isnan(migr_rate_t)==0)]
R_nonan = R1[(np.isnan(R1)==0)&(np.isnan(migr_rate_t)==0)]
slope, intercept, r_value, p_value, slope_std_rror = stats.linregress(R_nonan,m_nonan)
print r_value
print r_value**2
print p_value
# + deletable=true editable=true
# 90th percentile of migration rate
np.percentile(np.abs(m_nonan),90)
# + deletable=true editable=true
# plot actual vs. predicted migration rate
max_m = 2.5
plt.figure(figsize=(8,8))
sns.kdeplot(R_nonan,m_nonan,n_levels=10,shade=True,cmap='Blues',shade_lowest=False)
plt.plot([-max_m,max_m],[-max_m,max_m],'k--')
plt.scatter(R_nonan[::20],m_nonan[::20],c='k',s=15)
plt.xlim(-max_m,max_m)
plt.ylim(-max_m,max_m)
plt.xlabel('predicted migration rate (m/year)', fontsize=14)
plt.ylabel('actual migration rate (m/year)', fontsize=14)
# + deletable=true editable=true
# plot actual vs. predicted migration rate
max_m = 4.0
plt.figure(figsize=(8,8))
sns.kdeplot(R_nonan,m_nonan,n_levels=10,shade=True,cmap='Blues',shade_lowest=False)
plt.plot([-max_m,max_m],[-max_m,max_m],'k--')
plt.scatter(R_nonan[::20],m_nonan[::20],c='k',s=15)
plt.xlim(-max_m,max_m)
plt.ylim(-max_m,max_m)
plt.xlabel('predicted migration rate (m/year)', fontsize=14)
plt.ylabel('actual migration rate (m/year)', fontsize=14)
# add points affected by cutoffs and low erodibility
for i in erodibility_inds:
plt.scatter(R1[-i1+LZC[i]:-i1+LZC[i+1]][::5],migr_rate_t[-i1+LZC[i]:-i1+LZC[i+1]][::5],c='r',s=15)
for i in cutoff_inds:
plt.scatter(R1[-i1+LZC[i]:-i1+LZC[i+1]][::5],migr_rate_t[-i1+LZC[i]:-i1+LZC[i+1]][::5],c='g',s=15)
# + [markdown] deletable=true editable=true
# # Second segment (Jutai B)
# + [markdown] deletable=true editable=true
# ## Estimate friction factor Cf
# + deletable=true editable=true
# first we need a continuous channel segment (e.g., no NaNs due to cutoffs)
q=np.array(q)
p=np.array(p)
i1 = 9846
i2 = len(x)-1
i1n = p[np.where(q==i1)[0][0]]
i2n = p[np.where(q==i2)[0][0]]
xt = x[i1:i2]
yt = y[i1:i2]
xnt = xn[i1n:i2n]
ynt = yn[i1n:i2n]
plt.figure()
plt.plot(xt,yt)
plt.plot(xnt,ynt)
plt.axis('equal')
migr_rate_t, migr_sign_t, pt, qt = ca.get_migr_rate(xt,yt,xnt,ynt,years,0)
plt.figure()
plt.plot(migr_rate_t)
# + deletable=true editable=true
# this might take a while to run
kl = 4.0 # preliminary kl value (guesstimate)
k = 1
W = np.mean(widths[0][9846:])
D = (W/18.8)**0.7092 # depth in meters (from width)
dx,dy,ds,s = ca.compute_derivatives(xt,yt)
curv_t, s = ca.compute_curvature(xt,yt)
curv_t = medfilt(savgol_filter(curv_t,71,3),kernel_size=5) # smoothing
migr_rate_t = medfilt(savgol_filter(migr_rate_t,41,3),kernel_size=5)
get_friction_factor_1 = functools.partial(ca.get_friction_factor,curvature=curv_t,migr_rate=migr_rate_t,
kl=kl,W=W, k=k, D=D, s=s)
Cf_opt = bisect(get_friction_factor_1, 0.0002, 0.1)
print Cf_opt
# + deletable=true editable=true
Cf_opt = 0.00682734375
# + [markdown] deletable=true editable=true
# ## Estimate migration rate constant kl
# + deletable=true editable=true
# minimize the error between actual and predicted migration rates (using the 75th percentile)
errors = []
curv_t, s = ca.compute_curvature(xt,yt)
curv_t = medfilt(savgol_filter(curv_t,71,3),kernel_size=5) # smoothing
for i in np.arange(1,10):
print i
R1 = ca.get_predicted_migr_rate(curv_t,W=W,k=1,Cf=Cf_opt,D=D,kl=i,s=s)
errors.append(np.abs(np.percentile(np.abs(R1),75)-np.percentile(np.abs(migr_rate_t[1:-1]),75)))
plt.figure()
plt.plot(np.arange(1,10),errors);
# + deletable=true editable=true
kl_opt = 4.0 # the error is at minimum for kl = 4.0
# + deletable=true editable=true
552/25.0 # lag in number of centerline points (552 m / 25 m spacing); lag = 22 is used below
# + deletable=true editable=true
plt.figure()
plt.plot(W*kl_opt*curv_t)
plt.plot(migr_rate_t)
# + [markdown] deletable=true editable=true
# ## Plot actual migration rate against nominal migration rate
# + deletable=true editable=true
# kernel density and scatterplot of actual vs. nominal migration rate
w = np.nanmean(widths[0][9846:])
curv_nodim = W*curv_t*kl_opt
lag = 22
plt.figure(figsize=(8,8))
sns.kdeplot(curv_nodim[:-lag][np.isnan(migr_rate_t[lag:])==0], migr_rate_t[lag:][np.isnan(migr_rate_t[lag:])==0],
n_levels=20,shade=True,cmap='Blues',shade_lowest=False)
plt.scatter(curv_nodim[:-lag][::20],migr_rate_t[lag:][::20],c='k',s=15)
max_x = 3.0
plt.xlim(-max_x,max_x)
plt.ylim(-max_x,max_x)
plt.plot([-max_x,max_x],[-max_x,max_x],'k--')
plt.xlabel('nominal migration rate (m/year)', fontsize=14)
plt.ylabel('actual migration rate (m/year)', fontsize=14)
# + deletable=true editable=true
# get correlation coefficient for relationship between curvature and migration rate
slope, intercept, r_value, p_value, slope_std_rror = stats.linregress(curv_nodim[:-lag][np.isnan(migr_rate_t[lag:])==0],
migr_rate_t[lag:][np.isnan(migr_rate_t[lag:])==0])
print r_value
print r_value**2
print p_value
# + deletable=true editable=true
# number of data points used in analysis
len(curv_nodim[:-lag][np.isnan(migr_rate_t[lag:])==0])
# + deletable=true editable=true
# compute predicted migration rates
D = (w/18.8)**0.7092 # depth in meters (from width)
dx,dy,ds,s = ca.compute_derivatives(xt,yt)
R1 = ca.get_predicted_migr_rate(curv_t,W=w,k=1,Cf=Cf_opt,D=D,kl=kl_opt,s=s)
# + deletable=true editable=true
# plot actual and predicted migration rates
plt.figure()
plt.plot(s,migr_rate_t)
plt.plot(s,R1,'r')
# + deletable=true editable=true
# get correlation coefficient for relationship between actual and predicted migration rate
m_nonan = migr_rate_t[(np.isnan(R1)==0)&(np.isnan(migr_rate_t)==0)]
R_nonan = R1[(np.isnan(R1)==0)&(np.isnan(migr_rate_t)==0)]
slope, intercept, r_value, p_value, slope_std_rror = stats.linregress(R_nonan,m_nonan)
print r_value
print r_value**2
print p_value
# + deletable=true editable=true
# 90th percentile of migration rate
np.percentile(np.abs(m_nonan),90)
# + deletable=true editable=true
# plot actual vs. predicted migration rate
max_m = 3.0
plt.figure(figsize=(8,8))
sns.kdeplot(R_nonan,m_nonan,n_levels=10,shade=True,cmap='Blues',shade_lowest=False)
plt.plot([-max_m,max_m],[-max_m,max_m],'k--')
plt.scatter(R_nonan[::20],m_nonan[::20],c='k',s=15)
plt.xlim(-max_m,max_m)
plt.ylim(-max_m,max_m)
plt.xlabel('predicted migration rate (m/year)', fontsize=14)
plt.ylabel('actual migration rate (m/year)', fontsize=14)
# + deletable=true editable=true
# plot actual vs. predicted migration rate
max_m = 5.0
plt.figure(figsize=(8,8))
sns.kdeplot(R_nonan,m_nonan,n_levels=10,shade=True,cmap='Blues',shade_lowest=False)
plt.plot([-max_m,max_m],[-max_m,max_m],'k--')
plt.scatter(R_nonan[::20],m_nonan[::20],c='k',s=15)
plt.xlim(-max_m,max_m)
plt.ylim(-max_m,max_m)
plt.xlabel('predicted migration rate (m/year)', fontsize=14)
plt.ylabel('actual migration rate (m/year)', fontsize=14)
# add points affected by cutoffs and low erodibility
for i in erodibility_inds:
plt.scatter(R1[-i1+LZC[i]:-i1+LZC[i+1]][::10],migr_rate_t[-i1+LZC[i]:-i1+LZC[i+1]][::10],c='r',s=15)
for i in cutoff_inds:
plt.scatter(R1[-i1+LZC[i]:-i1+LZC[i+1]][::10],migr_rate_t[-i1+LZC[i]:-i1+LZC[i+1]][::10],c='g',s=15)
| Jutai_migration_rates.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# +
# # %load toxicavenger.py
import numpy as np
import pandas as pd
import markovify as mk
from sklearn import ensemble
from sklearn.feature_extraction.text import TfidfVectorizer
train = pd.read_csv('./input/train.csv')
test = pd.read_csv('./input/test.csv')
sub1 = pd.read_csv('./input/output.csv')
# +
coly = [c for c in train.columns if c not in ['id','comment_text']]
y = train[coly]
df = pd.concat([train['comment_text'], test['comment_text']], axis=0)
# +
nrow = train.shape[0]
tfidf = TfidfVectorizer(stop_words='english', max_features=800000)
data = tfidf.fit_transform(df)
# +
model = ensemble.ExtraTreesClassifier(n_jobs=-1, random_state=3, verbose=1)
model.fit(data[:nrow], y)
print(1- model.score(data[:nrow], y))
# +
sub2 = model.predict_proba(data[nrow:])
sub2 = pd.DataFrame([[c[1] for c in sub2[row]] for row in range(len(sub2))]).T
sub2.columns = coly
sub2['id'] = test['id'].values
for c in coly:
    # keep predicted probabilities strictly inside (0, 1)
    sub2[c] = sub2[c].clip(0 + 1e-12, 1 - 1e-12)
# +
sub2.columns = [x+'_' if x not in ['id'] else x for x in sub2.columns]
blend = pd.merge(sub1, sub2, how='left', on='id')
# -
for c in coly:
    # weighted blend: 80% previous submission, 20% new extra-trees model
    blend[c] = blend[c] * 0.8 + blend[c+'_'] * 0.2
    # keep blended probabilities strictly inside (0, 1)
    blend[c] = blend[c].clip(0 + 1e-12, 1 - 1e-12)
blend = blend[sub1.columns]
blend.to_csv('submission.csv', index=False)
sub2.columns
| .ipynb_checkpoints/toxic-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tirgul 7- Decision Tree and Random Forest
# +
import pandas as pd
import numpy as np
import sklearn as sk
from sklearn import tree
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
melbourne_file_path = 'melb_data.csv'
melbourne_data = pd.read_csv(melbourne_file_path)
print(len(melbourne_data))
melbourne_data.head()
# -
# drop na values
melbourne_data = melbourne_data.dropna(axis=0)
# Acquire data from multiple columns
melbourne_features = ['Rooms', 'Bathroom', 'Landsize', 'Lattitude', 'Longtitude']
X = melbourne_data[melbourne_features]
y = melbourne_data['Price']
# Splitting the data into train and test sets. Specify a number for random_state to ensure the same results each run
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
def mse(a,b):
return np.sqrt(np.square(a-b).mean())
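# Note that `mse()` above actually returns the *root* mean squared error (RMSE). As a quick sanity check (a small aside), it matches taking the square root of scikit-learn's `mean_squared_error`:
# +
from sklearn.metrics import mean_squared_error
a = np.array([1.0, 2.0, 3.0])
b = np.array([1.5, 2.0, 2.0])
print(mse(a, b), np.sqrt(mean_squared_error(a, b)))  # both values are the same RMSE
# -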
# +
from sklearn.tree import DecisionTreeRegressor
# Define model. Specify a number for random_state to ensure same results each run
melbourne_model = DecisionTreeRegressor(random_state=42)
# Fit model
melbourne_model.fit(X_train, y_train)
# +
print("Making predictions for the following 5 houses:")
print(y_test.head())
print("The predictions are")
test_pred = melbourne_model.predict(X_test.head())
print(test_pred)
# +
print("Making predictions all test houses:")
print(y_test)
print("The predictions are")
test_pred = melbourne_model.predict(X_test)
print(test_pred)
print("MSE: {:.3f}".format(mse(y_test.values,test_pred)))
# +
# Random Forest
from sklearn.ensemble import RandomForestRegressor
from sklearn.datasets import make_regression
regr = RandomForestRegressor(n_estimators=100,max_depth=10, random_state=42)
print('Fitting model...')
regr.fit(X_train, y_train)
print("Making predictions for the following 5 houses:")
print("The predictions are")
test_pred = regr.predict(X_test)
print(test_pred)
print("MSE: {:.3f}".format(mse(y_test.values,test_pred)))
| tirgul_7.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Training on Cloud ML Engine
#
# **Learning Objectives**
# - Use CMLE to run a distributed training job
#
# ## Introduction
# After having tested our training pipeline both locally and in the cloud on a subset of the data, we can submit another (much larger) training job to the cloud. It is also a good idea to run a hyperparameter tuning job to make sure we have optimized the hyperparameters of our model.
#
# This notebook illustrates how to do distributed training and hyperparameter tuning on Cloud ML Engine.
#
# To start, we'll set up our environment variables as before.
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = "cloud-training-bucket" # Replace with your BUCKET
REGION = "us-central1" # Choose an available region for Cloud MLE
TFVERSION = "1.13" # TF version for CMLE to use
import os
os.environ["BUCKET"] = BUCKET
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = TFVERSION
# + language="bash"
# gcloud config set project $PROJECT
# gcloud config set compute/region $REGION
# -
# Next, we'll look for the preprocessed data for the babyweight model and copy it over if it's not there.
# + language="bash"
# if ! gsutil ls -r gs://$BUCKET | grep -q gs://$BUCKET/babyweight/preproc; then
# gsutil mb -l ${REGION} gs://${BUCKET}
# # copy canonical set of preprocessed files if you didn't do previous notebook
# gsutil -m cp -R gs://cloud-training-demos/babyweight gs://${BUCKET}
# fi
# + language="bash"
# gsutil ls gs://${BUCKET}/babyweight/preproc/*-00000*
# -
# In the previous labs we developed our TensorFlow model and got it working on a subset of the data. Now we can package the TensorFlow code up as a Python module and train it on Cloud ML Engine.
#
# ## Train on Cloud ML Engine
#
# Training on Cloud ML Engine requires two things:
# - Configuring our code as a Python package
# - Using gcloud to submit the training code to Cloud ML Engine
#
# ### Move code into a Python package
#
# A Python package is simply a collection of one or more `.py` files along with an `__init__.py` file to identify the containing directory as a package. The `__init__.py` sometimes contains initialization code but for our purposes an empty file suffices.
#
# The bash command `touch` creates an empty file in the specified location; the directory `babyweight` should already exist.
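# Concretely, by the end of this section the `babyweight` package will look like this:
#
# - `babyweight/`
#   - `trainer/`
#     - `__init__.py` (empty; marks the directory as a Python package)
#     - `task.py` (command-line argument parsing and the training entry point)
#     - `model.py` (input functions, feature columns, and the estimator definition)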
# + language="bash"
# touch babyweight/trainer/__init__.py
# -
# We then use the `%%writefile` magic to write the contents of the cell below to a file called `task.py` in the `babyweight/trainer` folder.
# +
# %%writefile babyweight/trainer/task.py
import argparse
import json
import os
from . import model
import tensorflow as tf
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--bucket",
help = "GCS path to data. We assume that data is in \
gs://BUCKET/babyweight/preproc/",
required = True
)
parser.add_argument(
"--output_dir",
help = "GCS location to write checkpoints and export models",
required = True
)
parser.add_argument(
"--batch_size",
help = "Number of examples to compute gradient over.",
type = int,
default = 512
)
parser.add_argument(
"--job-dir",
help = "this model ignores this field, but it is required by gcloud",
default = "junk"
)
parser.add_argument(
"--nnsize",
help = "Hidden layer sizes to use for DNN feature columns -- provide \
space-separated layers",
nargs = "+",
type = int,
default=[128, 32, 4]
)
parser.add_argument(
"--nembeds",
help = "Embedding size of a cross of n key real-valued parameters",
type = int,
default = 3
)
parser.add_argument(
"--train_examples",
help = "Number of examples (in thousands) to run the training job over. \
If this is more than actual # of examples available, it cycles through them. \
So specifying 1000 here when you have only 100k examples makes this 10 epochs.",
type = int,
default = 5000
)
parser.add_argument(
"--pattern",
help = "Specify a pattern that has to be in input files. For example 00001-of \
will process only one shard",
default = "of"
)
parser.add_argument(
"--eval_steps",
help = "Positive number of steps for which to evaluate model. Default to None, \
which means to evaluate until input_fn raises an end-of-input exception",
type = int,
default = None
)
# Parse arguments
args = parser.parse_args()
arguments = args.__dict__
# Pop unnecessary args needed for gcloud
arguments.pop("job-dir", None)
# Assign the arguments to the model variables
output_dir = arguments.pop("output_dir")
model.BUCKET = arguments.pop("bucket")
model.BATCH_SIZE = arguments.pop("batch_size")
model.TRAIN_STEPS = (arguments.pop("train_examples") * 1000) / model.BATCH_SIZE
model.EVAL_STEPS = arguments.pop("eval_steps")
print ("Will train for {} steps using batch_size={}".format(model.TRAIN_STEPS, model.BATCH_SIZE))
model.PATTERN = arguments.pop("pattern")
model.NEMBEDS= arguments.pop("nembeds")
model.NNSIZE = arguments.pop("nnsize")
print ("Will use DNN size of {}".format(model.NNSIZE))
# Append trial_id to path if we are doing hptuning
# This code can be removed if you are not using hyperparameter tuning
output_dir = os.path.join(
output_dir,
json.loads(
os.environ.get("TF_CONFIG", "{}")
).get("task", {}).get("trial", "")
)
# Run the training job
model.train_and_evaluate(output_dir)
# -
# In the same way we can write to the file `model.py` the model that we developed in the previous notebooks.
# +
# %%writefile babyweight/trainer/model.py
import shutil
import numpy as np
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.INFO)
BUCKET = None # set from task.py
PATTERN = "of" # gets all files
# Determine CSV, label, and key columns
CSV_COLUMNS = "weight_pounds,is_male,mother_age,plurality,gestation_weeks,key".split(',')
LABEL_COLUMN = "weight_pounds"
KEY_COLUMN = "key"
# Set default values for each CSV column
DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0], ["nokey"]]
# Define some hyperparameters
TRAIN_STEPS = 10000
EVAL_STEPS = None
BATCH_SIZE = 512
NEMBEDS = 3
NNSIZE = [64, 16, 4]
# Create an input function reading a file using the Dataset API
# Then provide the results to the Estimator API
def read_dataset(filename_pattern, mode, batch_size):
def _input_fn():
def decode_csv(value_column):
columns = tf.decode_csv(records = value_column, record_defaults = DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
return features, label
# Use filename_pattern to create file path
file_path = "gs://{}/babyweight/preproc/{}*{}*".format(BUCKET, filename_pattern, PATTERN)
# Create list of files that match pattern
file_list = tf.gfile.Glob(filename = file_path)
# Create dataset from file list
dataset = (tf.data.TextLineDataset(filenames = file_list) # Read text file
.map(map_func = decode_csv)) # Transform each elem by applying decode_csv fn
# In training mode, shuffle the dataset and repeat indefinitely
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
# This will now return batches of features, label
dataset = dataset.repeat(count = num_epochs).batch(batch_size = batch_size)
return dataset
return _input_fn
# Define feature columns
def get_wide_deep():
# Define column types
fc_is_male,fc_plurality,fc_mother_age,fc_gestation_weeks = [\
tf.feature_column.categorical_column_with_vocabulary_list(key = "is_male",
vocabulary_list = ["True", "False", "Unknown"]),
tf.feature_column.categorical_column_with_vocabulary_list(key = "plurality",
vocabulary_list = ["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)", "Multiple(2+)"]),
tf.feature_column.numeric_column(key = "mother_age"),
tf.feature_column.numeric_column(key = "gestation_weeks")
]
# Bucketized columns
fc_age_buckets = tf.feature_column.bucketized_column(source_column = fc_mother_age, boundaries = np.arange(start = 15, stop = 45, step = 1).tolist())
fc_gestation_buckets = tf.feature_column.bucketized_column(source_column = fc_gestation_weeks, boundaries = np.arange(start = 17, stop = 47, step = 1).tolist())
# Sparse columns are wide, have a linear relationship with the output
wide = [fc_is_male,
fc_plurality,
fc_age_buckets,
fc_gestation_buckets]
# Feature cross all the wide columns and embed into a lower dimension
crossed = tf.feature_column.crossed_column(keys = wide, hash_bucket_size = 20000)
fc_embed = tf.feature_column.embedding_column(categorical_column = crossed, dimension = 3)
# Continuous columns are deep, have a complex relationship with the output
deep = [fc_mother_age,
fc_gestation_weeks,
fc_embed]
return wide, deep
# Create serving input function to be able to serve predictions later using provided inputs
def serving_input_fn():
feature_placeholders = {
"is_male": tf.placeholder(dtype = tf.string, shape = [None]),
"mother_age": tf.placeholder(dtype = tf.float32, shape = [None]),
"plurality": tf.placeholder(dtype = tf.string, shape = [None]),
"gestation_weeks": tf.placeholder(dtype = tf.float32, shape = [None]),
KEY_COLUMN: tf.placeholder_with_default(input = tf.constant(value = ["nokey"], dtype = tf.string), shape = [None])
}
features = {
key: tf.expand_dims(input = tensor, axis = -1)
for key, tensor in feature_placeholders.items()
}
return tf.estimator.export.ServingInputReceiver(features = features, receiver_tensors = feature_placeholders)
# create metric for hyperparameter tuning
def my_rmse(labels, predictions):
pred_values = predictions["predictions"]
return {"rmse": tf.metrics.root_mean_squared_error(labels = labels, predictions = pred_values)}
# Create estimator to train and evaluate
def train_and_evaluate(output_dir):
wide, deep = get_wide_deep()
EVAL_INTERVAL = 300 # seconds
run_config = tf.estimator.RunConfig(
save_checkpoints_secs = EVAL_INTERVAL,
keep_checkpoint_max = 3)
estimator = tf.estimator.DNNLinearCombinedRegressor(
model_dir = output_dir,
linear_feature_columns = wide,
dnn_feature_columns = deep,
dnn_hidden_units = NNSIZE,
config = run_config)
# Illustrates how to add an extra metric
estimator = tf.contrib.estimator.add_metrics(estimator, my_rmse)
# For batch prediction, you need a key associated with each instance
estimator = tf.contrib.estimator.forward_features(estimator, KEY_COLUMN)
train_spec = tf.estimator.TrainSpec(
input_fn = read_dataset("train", tf.estimator.ModeKeys.TRAIN, BATCH_SIZE),
max_steps = TRAIN_STEPS)
exporter = tf.estimator.LatestExporter(name = "exporter", serving_input_receiver_fn = serving_input_fn, exports_to_keep = None)
eval_spec = tf.estimator.EvalSpec(
input_fn = read_dataset("eval", tf.estimator.ModeKeys.EVAL, 2**15), # no need to batch in eval
steps = EVAL_STEPS,
start_delay_secs = 60, # start evaluating after N seconds
throttle_secs = EVAL_INTERVAL, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator = estimator, train_spec = train_spec, eval_spec = eval_spec)
# -
# ## Train locally
#
# After moving the code to a package, make sure it still works as a standalone module. Note that we incorporated the `--pattern` and `--train_examples` flags so that we don't train on the entire dataset while we are developing the pipeline. Once everything works on a subset, we can change the pattern to train on all the data. Even for this subset, this takes about *3 minutes*, during which you won't see any output.
# + language="bash"
# echo "bucket=$BUCKET"
# rm -rf babyweight_trained
# export PYTHONPATH=${PYTHONPATH}:${PWD}/babyweight
# python -m trainer.task \
# --bucket=$BUCKET \
# --output_dir=babyweight_trained \
# --job-dir=./tmp \
# --pattern="00000-of-"\
# --train_examples=1 \
# --eval_steps=1
# -
# ## Making predictions
#
# The JSON below represents an input to your prediction model. Write the inputs.json file with the next cell, then run the prediction locally to check that the model produces predictions correctly.
# %%writefile inputs.json
{"key": "b1", "is_male": "True", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
{"key": "g1", "is_male": "False", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
# + language="bash"
# MODEL_LOCATION=$(ls -d $(pwd)/babyweight_trained/export/exporter/* | tail -1)
# echo $MODEL_LOCATION
# gcloud ml-engine local predict --model-dir=$MODEL_LOCATION --json-instances=inputs.json
# -
# ## Training on the Cloud with CMLE
#
# Once the code works in standalone mode, you can run it on Cloud ML Engine. Because this is on the entire dataset, it will take a while. The training run took about <b> an hour </b> for me. You can monitor the job from the GCP console in the Cloud Machine Learning Engine section.
# + language="bash"
# OUTDIR=gs://${BUCKET}/babyweight/trained_model
# JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
# echo $OUTDIR $REGION $JOBNAME
# gsutil -m rm -rf $OUTDIR
# gcloud ml-engine jobs submit training $JOBNAME \
# --region=$REGION \
# --module-name=trainer.task \
# --package-path=$(pwd)/babyweight/trainer \
# --job-dir=$OUTDIR \
# --staging-bucket=gs://$BUCKET \
# --scale-tier=STANDARD_1 \
# --runtime-version=$TFVERSION \
# -- \
# --bucket=${BUCKET} \
# --output_dir=${OUTDIR} \
# --train_examples=200000
# -
# When I ran it, I used train_examples=2000000. When training finished, I filtered the Stackdriver logs on the word "dict" and saw that the last line was:
# <pre>
# Saving dict for global step 5714290: average_loss = 1.06473, global_step = 5714290, loss = 34882.4, rmse = 1.03186
# </pre>
# The final RMSE was 1.03 pounds.
# ## Hyperparameter tuning
#
# All of these are command-line parameters to my program. To do hyperparameter tuning, create hyperparam.yaml (written by the cell below) and pass it to the training job with --config; a sketch of how these flags reach the model code follows the YAML.
# This step will take <b>1 hour</b> -- you can increase maxParallelTrials or reduce maxTrials to get it done faster. Since maxParallelTrials is the number of initial seeds to start searching from, you don't want it to be too large; otherwise, all you have is a random search.
#
# %%writefile hyperparam.yaml
trainingInput:
scaleTier: STANDARD_1
hyperparameters:
hyperparameterMetricTag: rmse
goal: MINIMIZE
maxTrials: 20
maxParallelTrials: 5
enableTrialEarlyStopping: True
params:
- parameterName: batch_size
type: INTEGER
minValue: 8
maxValue: 512
scaleType: UNIT_LOG_SCALE
- parameterName: nembeds
type: INTEGER
minValue: 3
maxValue: 30
scaleType: UNIT_LINEAR_SCALE
- parameterName: nnsize
type: INTEGER
minValue: 64
maxValue: 512
scaleType: UNIT_LOG_SCALE
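# +
# The tuned values surface in the trainer as ordinary command-line flags. For
# reference only -- a minimal sketch, not the actual trainer/task.py shipped in
# the babyweight package -- the parameter names in hyperparam.yaml above map onto
# argparse arguments roughly like this:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--batch_size", type = int, default = 512)
parser.add_argument("--nembeds", type = int, default = 3)  # embedding dimension for the crossed column
parser.add_argument("--nnsize", type = int, default = 64)  # width used to derive the DNN hidden layers
args = parser.parse_args(["--batch_size=512", "--nembeds=3", "--nnsize=64"])  # example values
print(args.batch_size, args.nembeds, args.nnsize)
# -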
# + language="bash"
# OUTDIR=gs://${BUCKET}/babyweight/hyperparam
# JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
# echo $OUTDIR $REGION $JOBNAME
# gsutil -m rm -rf $OUTDIR
# gcloud ml-engine jobs submit training $JOBNAME \
# --region=$REGION \
# --module-name=trainer.task \
# --package-path=$(pwd)/babyweight/trainer \
# --job-dir=$OUTDIR \
# --staging-bucket=gs://$BUCKET \
# --scale-tier=STANDARD_1 \
# --config=hyperparam.yaml \
# --runtime-version=$TFVERSION \
# -- \
# --bucket=${BUCKET} \
# --output_dir=${OUTDIR} \
# --eval_steps=10 \
# --train_examples=20000
# -
# ## Repeat training
#
# Now that we've determined the optimal hyperparameters, we'll retrain with these tuned values. Note the tuned parameters passed on the last line of the command below.
# + language="bash"
# OUTDIR=gs://${BUCKET}/babyweight/trained_model_tuned
# JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
# echo $OUTDIR $REGION $JOBNAME
# gsutil -m rm -rf $OUTDIR
# gcloud ml-engine jobs submit training $JOBNAME \
# --region=$REGION \
# --module-name=trainer.task \
# --package-path=$(pwd)/babyweight/trainer \
# --job-dir=$OUTDIR \
# --staging-bucket=gs://$BUCKET \
# --scale-tier=STANDARD_1 \
# --runtime-version=$TFVERSION \
# -- \
# --bucket=${BUCKET} \
# --output_dir=${OUTDIR} \
# --train_examples=20000 --batch_size=35 --nembeds=16 --nnsize=281
# -
# Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
# Source notebook: courses/machine_learning/deepdive/05_review/5_train.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="SvQXkW3vDEKZ" colab_type="code" outputId="855b5274-b7a0-45e4-e404-6be9de8b196d" executionInfo={"status": "ok", "timestamp": 1580573046374, "user_tz": -330, "elapsed": 27348, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/<KEY>", "userId": "09111573231362728695"}} colab={"base_uri": "https://localhost:8080/", "height": 121}
from google.colab import drive
drive.mount('/gdrive')
# + id="rLQb8ZoqDEIp" colab_type="code" colab={}
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
# + id="4SOlg__gDYvN" colab_type="code" outputId="65b72e97-0584-4074-9a94-4e4041755244" executionInfo={"status": "ok", "timestamp": 1580573047813, "user_tz": -330, "elapsed": 28695, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mD<KEY>ADbmEzXUErwFGih_=s64", "userId": "09111573231362728695"}} colab={"base_uri": "https://localhost:8080/", "height": 195}
path = '/gdrive/My Drive/ML:Pilot/Assignments/Data/Iris.csv'
raw_data = pd.read_csv(path)
raw_data.head()
# + id="tmMYZs7HDobB" colab_type="code" outputId="3caeaa22-05a8-4e6c-8a3a-85d6e6a78f1a" executionInfo={"status": "ok", "timestamp": 1580573047817, "user_tz": -330, "elapsed": 28635, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDqpPfLTvQwOdy7Rs8tKtf9ADbmEzXUErwFGih_=s64", "userId": "09111573231362728695"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
raw_data.shape
# + id="NUHsigLiDraa" colab_type="code" outputId="baac1bda-72f3-43f2-b73b-b526a938a315" executionInfo={"status": "ok", "timestamp": 1580573047826, "user_tz": -330, "elapsed": 28627, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDqpPfLTvQwOdy7Rs8tKtf9ADbmEzXUErwFGih_=s64", "userId": "09111573231362728695"}} colab={"base_uri": "https://localhost:8080/", "height": 402}
y = raw_data.iloc[:,5:]
y
# + id="5-0DqNCIIenO" colab_type="code" outputId="79786806-dc93-4923-ed93-c898a9774de6" executionInfo={"status": "ok", "timestamp": 1580573047830, "user_tz": -330, "elapsed": 28612, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDqpPfLTvQwOdy7Rs8tKtf9ADbmEzXUErwFGih_=s64", "userId": "09111573231362728695"}} colab={"base_uri": "https://localhost:8080/", "height": 296}
from sklearn.preprocessing import LabelEncoder
labelencoder = LabelEncoder()
y["Species"] = labelencoder.fit_transform(y["Species"])
y.head()
# + id="JRQZcd7RI0sY" colab_type="code" outputId="e810800c-16cc-4349-9103-ceaf3001f90e" executionInfo={"status": "ok", "timestamp": 1580573047833, "user_tz": -330, "elapsed": 28583, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/<KEY>", "userId": "09111573231362728695"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
y.shape
# + id="MPHOyWYvJtTT" colab_type="code" outputId="13abbcb5-c850-4e21-f34e-e10a69daa8f1" executionInfo={"status": "ok", "timestamp": 1580573047841, "user_tz": -330, "elapsed": 28524, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDqpPfLTvQwOdy7Rs8tKtf9ADbmEzXUErwFGih_=s64", "userId": "09111573231362728695"}} colab={"base_uri": "https://localhost:8080/", "height": 402}
x = raw_data.iloc[:,:5]
x
# + id="LvqtmnAHJ5ln" colab_type="code" outputId="c2dc1caf-4218-4081-da6c-0b935016d4b6" executionInfo={"status": "ok", "timestamp": 1580573047845, "user_tz": -330, "elapsed": 28476, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDqpPfLTvQwOdy7Rs8tKtf9ADbmEzXUErwFGih_=s64", "userId": "09111573231362728695"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
x.shape, y.shape
# + id="3Jq2Q7bAJ7r6" colab_type="code" colab={}
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x,y, test_size = 0.25, shuffle = False)
# + id="ptYQqnmpK-6D" colab_type="code" outputId="4db80483-0f65-42b2-949e-51eb176596ce" executionInfo={"status": "ok", "timestamp": 1580573048361, "user_tz": -330, "elapsed": 28896, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDqpPfLTvQwOdy7Rs8tKtf9ADbmEzXUErwFGih_=s64", "userId": "09111573231362728695"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
x_train.shape, x_test.shape
# + id="ijfw-H54LECP" colab_type="code" outputId="c75111fd-993f-4902-d3a8-3f1733b643ed" executionInfo={"status": "ok", "timestamp": 1580573048364, "user_tz": -330, "elapsed": 28828, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDqpPfLTvQwOdy7Rs8tKtf9ADbmEzXUErwFGih_=s64", "userId": "09111573231362728695"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
y_train.shape, y_test.shape
# + id="94ykMj8KLJO6" colab_type="code" colab={}
x_train = np.array(x_train)
y_train = np.array(y_train)
# + id="WvKVb1cgLax7" colab_type="code" outputId="1c964432-a601-4eca-a230-40c42c56a236" executionInfo={"status": "ok", "timestamp": 1580573049257, "user_tz": -330, "elapsed": 29665, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDqpPfLTvQwOdy7Rs8tKtf9ADbmEzXUErwFGih_=s64", "userId": "09111573231362728695"}} colab={"base_uri": "https://localhost:8080/", "height": 1000}
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
parameters = {'solver': ('newton-cg','liblinear','saga'), 'C':[0.001, 10]}
model = LogisticRegression(penalty = 'l2', n_jobs = -2, max_iter = 1000)
a = GridSearchCV(model, parameters, n_jobs = -2)
a.fit(x_train,y_train)
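# The fitted grid-search object exposes the winning configuration; a minimal
# check (assuming the fitted `a` from above):
print(a.best_params_)
print(a.best_score_)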
# + id="2_RToLIgMVI9" colab_type="code" colab={}
training_score = a.score(x_train, y_train)
y_pred = a.predict(x_train)
# + id="JOqH-uYKMfK_" colab_type="code" outputId="c3245226-9f28-45cd-e534-12197163fb47" executionInfo={"status": "ok", "timestamp": 1580573049263, "user_tz": -330, "elapsed": 29604, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDqpPfLTvQwOdy7Rs8tKtf9ADbmEzXUErwFGih_=s64", "userId": "09111573231362728695"}} colab={"base_uri": "https://localhost:8080/", "height": 118}
y_pred
# + id="-Kov7QF8MgXa" colab_type="code" outputId="065c9ef5-6e22-42c1-baa8-ff6c2b445e74" executionInfo={"status": "ok", "timestamp": 1580573049265, "user_tz": -330, "elapsed": 29570, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDqpPfLTvQwOdy7Rs8tKtf9ADbmEzXUErwFGih_=s64", "userId": "09111573231362728695"}} colab={"base_uri": "https://localhost:8080/", "height": 185}
from sklearn.metrics import classification_report
cr =classification_report(y_train,y_pred)
print(cr)
# + id="mfP9ur5mNyrt" colab_type="code" outputId="e7fc4c00-c156-48fc-ce3c-b88cd4efc783" executionInfo={"status": "ok", "timestamp": 1580573049267, "user_tz": -330, "elapsed": 29539, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDqpPfLTvQwOdy7Rs8tKtf9ADbmEzXUErwFGih_=s64", "userId": "09111573231362728695"}} colab={"base_uri": "https://localhost:8080/", "height": 67}
from sklearn.metrics import confusion_matrix
m = confusion_matrix(y_train,y_pred)
print(m)
# + id="jzJEnuYIOAFN" colab_type="code" outputId="977e08ce-ed8e-4abc-d976-f328c1880299" executionInfo={"status": "ok", "timestamp": 1580573050090, "user_tz": -330, "elapsed": 30332, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDqpPfLTvQwOdy7Rs8tKtf9ADbmEzXUErwFGih_=s64", "userId": "09111573231362728695"}} colab={"base_uri": "https://localhost:8080/", "height": 547}
import seaborn as sns
plt.figure(figsize = (9,9))
sns.heatmap(m, annot = True, fmt='.3f', linewidth=.5, square = True, cmap = 'Blues_r');
plt.ylabel('Actual output');
plt.xlabel('Predicted output');
sample_title = 'Training accuracy score: {0}'.format(training_score)
plt.title(sample_title, size = 20)
# + id="laetimGiOw_j" colab_type="code" outputId="273aa32b-245d-4df5-b8d0-e37ec1d29a95" executionInfo={"status": "ok", "timestamp": 1580573050099, "user_tz": -330, "elapsed": 30321, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDqpPfLTvQwOdy7Rs8tKtf9ADbmEzXUErwFGih_=s64", "userId": "09111573231362728695"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
x_test.shape, y_test.shape
# + id="7nddmI1oPXfR" colab_type="code" colab={}
pred = a.predict(x_test)
# + id="bpol2D_nPgqe" colab_type="code" outputId="82ff769f-c31a-4749-cdf0-ff392d77b219" executionInfo={"status": "ok", "timestamp": 1580573050107, "user_tz": -330, "elapsed": 30284, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDqpPfLTvQwOdy7Rs8tKtf9ADbmEzXUErwFGih_=s64", "userId": "09111573231362728695"}} colab={"base_uri": "https://localhost:8080/", "height": 50}
pred
# + id="x8DxtCnfPheu" colab_type="code" outputId="5de97526-1a90-4d81-bf0c-b30166c7756a" executionInfo={"status": "ok", "timestamp": 1580573050110, "user_tz": -330, "elapsed": 30231, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDqpPfLTvQwOdy7Rs8tKtf9ADbmEzXUErwFGih_=s64", "userId": "09111573231362728695"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
score = a.score(x_test, y_test)
print(score)
# + id="E2GnRlZkPmdW" colab_type="code" colab={}
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import metrics
# + id="Up2CO3SNPuiB" colab_type="code" outputId="03f2ff82-e3c6-4d9a-bdde-89c207342719" executionInfo={"status": "ok", "timestamp": 1580573050119, "user_tz": -330, "elapsed": 30165, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDqpPfLTvQwOdy7Rs8tKtf9ADbmEzXUErwFGih_=s64", "userId": "09111573231362728695"}} colab={"base_uri": "https://localhost:8080/", "height": 50}
cm = metrics.confusion_matrix(y_test,pred)
print(cm)
# + id="e-cDdTaHP2mQ" colab_type="code" outputId="55fd2ce1-4966-4eaf-9c4e-6dd632c05a55" executionInfo={"status": "ok", "timestamp": 1580573051007, "user_tz": -330, "elapsed": 30988, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mDqpPfLTvQwOdy7Rs8tKtf9ADbmEzXUErwFGih_=s64", "userId": "09111573231362728695"}} colab={"base_uri": "https://localhost:8080/", "height": 601}
plt.figure(figsize=(10,10))
sns.heatmap(cm, annot = True, fmt = ".3f", linewidth = .5, square = True, cmap = "Blues_r");
plt.ylabel('Actual test result');
plt.xlabel('Predicted test result');
title = ' Test accuracy score: {0}'.format(score)
plt.title(title, size = 20)
# + id="IarTjZpeQecb" colab_type="code" colab={}
# Source notebook: Iris Classification.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content-dl/blob/main/tutorials/W2D5_GenerativeModels/W2D5_Tutorial1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# -
# # Neuromatch Academy: Week 2, Day 5, Tutorial 1
#
# # VAEs and GANs : Conditional GANs and Implications of GAN Technology
#
# __Content creators:__ <NAME>
#
# __Production editors:__ <NAME>
#
# *Taken from UPenn course with slide modifications*:
# __Instructor:__ <NAME>, __Original Content creators:__ <NAME>, <NAME>
#
# **ALERT**: for prepod use only.
# ---
# ## Tutorial Objectives
# In the first tutorial of the *Generative Models* day, we are going to
#
# - Think about unsupervised learning and get a bird's eye view of why it is useful
# - See the connection between AutoEncoding and dimensionality reduction
# - Start thinking about neural networks as generative models
# - Put on our Bayesian hats and turn AEs into VAEs
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 591} outputId="135d1e8d-e890-4c9c-8e87-1a64e964f44f"
#@markdown Tutorial slides
# you should link the slides for all tutorial videos here (we will store pdfs on osf)
from IPython.display import HTML
HTML('<iframe src="https://docs.google.com/presentation/d/1_Nsq8OHIpls5iPlbA0WF53J1Ypqp02BGBGVj0zgec3c/edit?usp=sharing" frameborder="0" width="960" height="569" allowfullscreen="true" mozallowfullscreen="true" webkitallowfullscreen="true"></iframe>')
# -
# ---
# # Setup
# + colab={"base_uri": "https://localhost:8080/"} outputId="a4909cb9-5112-40f2-bfd5-92337d90cbe0"
# we need to first upgrade the Colab's TorchVision
# !pip install --upgrade torchvision
# +
# imports
import torch
import random
import numpy as np
import torch.nn as nn
import torchvision as tv
import matplotlib.pylab as plt
import torch.nn.functional as F
from tqdm.notebook import tqdm, trange
from torch.utils.data import DataLoader
DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'
# + cellView="form"
# @title Figure Settings
# %config InlineBackend.figure_format = 'retina'
# %matplotlib inline
fig_w, fig_h = (8, 6)
plt.rcParams.update({'figure.figsize': (fig_w, fig_h)})
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# + cellView="form"
#@title Helper functions
def image_moments(image_batches, n_batches=None):
"""
    Compute the mean and covariance of all pixels from batches of images
"""
m1, m2 = torch.zeros((), device=DEVICE), torch.zeros((), device=DEVICE)
n = 0
for im in tqdm(image_batches, total=n_batches, leave=False,
desc='Computing pixel mean and covariance...'):
im = im.to(DEVICE)
b = im.size()[0]
im = im.view(b, -1)
m1 = m1 + im.sum(dim=0)
m2 = m2 + (im.view(b,-1,1) * im.view(b,1,-1)).sum(dim=0)
n += b
m1, m2 = m1/n, m2/n
cov = m2 - m1.view(-1,1)*m1.view(1,-1)
return m1.cpu(), cov.cpu()
def pca_encoder_decoder(mu, cov, k):
"""
Compute encoder and decoder matrices for PCA dimensionality reduction
"""
mu = mu.view(1,-1)
u, s, v = torch.svd_lowrank(cov, q=k)
W_encode = v / torch.sqrt(s)
W_decode = u * torch.sqrt(s)
def pca_encode(x):
# Encoder: subtract mean image and project onto top K eigenvectors of
# the data covariance
return (x.view(-1,mu.numel()) - mu) @ W_encode
def pca_decode(h):
# Decoder: un-project then add back in the mean
return (h @ W_decode.T) + mu
return pca_encode, pca_decode
# Helper for plotting images
def plot_torch_image(image, ax=None):
ax = ax if ax is not None else plt.gca()
c, h, w = image.size()
cm = 'gray' if c==1 else None
# Torch images have shape (channels, height, width) but matplotlib expects
# (height, width, channels) or just (height,width) when grayscale
ax.imshow(image.detach().cpu().permute(1,2,0).squeeze(), cmap=cm)
ax.set_xticks([])
ax.set_yticks([])
# + cellView="form"
#@title Plotting functions
def plot_linear_ae(lin_losses):
plt.figure()
plt.plot(lin_losses)
plt.ylim([0, 2*torch.as_tensor(lin_losses).median()])
plt.xlabel('Training batch')
plt.ylabel('MSE Loss')
plt.show()
def plot_conv_ae(lin_losses, conv_losses):
plt.figure()
plt.plot(lin_losses)
plt.plot(conv_losses)
plt.legend(['Lin AE', 'Conv AE'])
plt.xlabel('Training batch')
plt.ylabel('MSE Loss')
plt.ylim([0,
2*max(torch.as_tensor(conv_losses).median(),
torch.as_tensor(lin_losses).median())])
plt.show()
def plot_images(images, h=5, w=5):
plt.figure(figsize=(5, 5))
for i in range(h*w):
plt.subplot(h, w, i + 1)
plot_torch_image(images[i])
plt.show()
def plot_phi(phi, num=4):
plt.figure(figsize=(12, 3))
for i in range(num):
plt.subplot(1, num, i + 1)
plt.scatter(zs[i, :, 0], zs[i, :, 1], marker='.')
th = torch.linspace(0, 6.28318, 100)
x, y = torch.cos(th), torch.sin(th)
# Draw 2-sigma contours
plt.plot(
2*x*phi[i, 2].exp().item() + phi[i, 0].item(),
2*y*phi[i, 2].exp().item() + phi[i, 1].item()
)
plt.xlim(-5, 5)
plt.ylim(-5, 5)
plt.grid()
plt.axis('equal')
plt.suptitle('If rsample() is correct, then most but not all points should lie in the circles')
plt.show()
# + cellView="form" colab={"base_uri": "https://localhost:8080/"} outputId="d1f24782-98d8-41cf-b4d4-a079715efb57"
# @title Set seed for reproducibility in Pytorch
# https://pytorch.org/docs/stable/notes/randomness.html
def set_seed(seed):
"""
Set random seed for reproducibility
Args:
seed: integer
A positive integer to ensure reproducibility
"""
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
np.random.seed(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
print(f'Seed {seed} has been set.')
def seed_worker(worker_id):
worker_seed = torch.initial_seed() % 2**32
np.random.seed(worker_seed)
random.seed(worker_seed)
set_seed(522)
# -
# ---
# # Section 1: Generative models
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 519} outputId="ac255701-56b2-43d6-fe26-bf6b0afc8b77"
#@title Video 1: Generative vs. Discriminative Models
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="p-XT6vLjPQo", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
# + cellView="form"
# @markdown Download a few standard image datasets while the above video plays
# See https://pytorch.org/docs/stable/torchvision/datasets.html
# %%capture
# MNIST contains handwritten digits 0-9, in grayscale images of size (1,28,28)
mnist = tv.datasets.MNIST('./mnist/',
train=True,
transform=tv.transforms.ToTensor(),
download=True)
mnist_val = tv.datasets.MNIST('./mnist/',
train=False,
transform=tv.transforms.ToTensor(),
download=True)
# -
# ## Select a dataset
#
# We've built today's tutorial to be flexible. It should work more-or-less out of the box with both MNIST and CIFAR (and other image datasets). MNIST is in many ways simpler, and the results will likely look better and run a bit faster if using MNIST. But we are leaving it up to you to pick which one you want to experiment with!
#
# We encourage pods to coordinate so that some members use MNIST and others use CIFAR10.
# Uncomment this to select MNIST
my_dataset = mnist
my_dataset_name = "MNIST"
my_dataset_size = (1, 28, 28)
my_dataset_dim = 28*28
my_valset = mnist_val
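# If you prefer CIFAR10 instead, uncomment this sketch of the equivalent setup
# (assuming torchvision's CIFAR10 loader; its images are RGB with shape (3, 32, 32))
# cifar10 = tv.datasets.CIFAR10('./cifar10/', train=True,
#                               transform=tv.transforms.ToTensor(), download=True)
# cifar10_val = tv.datasets.CIFAR10('./cifar10/', train=False,
#                                   transform=tv.transforms.ToTensor(), download=True)
# my_dataset = cifar10
# my_dataset_name = "CIFAR10"
# my_dataset_size = (3, 32, 32)
# my_dataset_dim = 3 * 32 * 32
# my_valset = cifar10_val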
# ---
# # Section 2: AutoEncoders
# ## Conceptual introduction to AutoEncoders
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 519} outputId="471ddb32-b253-4f3a-9668-7c26de88086d"
#@title Video 2: Latent Variable Models
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="ACH27i-B-LM", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
# -
# ## Build a linear AutoEncoder
# Now we'll create our first autoencoder. It will reduce images down to $K$ dimensions. The architecture will be quite simple: the input will be linearly mapped to a single hidden layer with $K$ units, which will then be linearly mapped back to an output that is the same size as the input:
# $$\mathbf{x} \longrightarrow \mathbf{h} \longrightarrow \mathbf{x'}$$
#
# The loss function we'll use will simply be mean squared error (MSE) quantifying how well the reconstruction ($\mathbf{x'}$) matches the original image ($\mathbf{x}$):
# $$\text{MSE Loss} = \sum_{i=1}^{N} ||\mathbf{x}_i - \mathbf{x'}_i||^2_2$$
#
# If all goes well, then the AutoEncoder will learn, **end to end**, a good "encoding" or "compression" of inputs ($\mathbf{x \longrightarrow h}$) as well as a good "decoding" ($\mathbf{h \longrightarrow x'}$).
# The first choice to make is the dimensionality of $\mathbf{h}$. We'll see more on this below, but for MNIST, 5 to 20 is plenty. For CIFAR, we need more like 50 to 100 dimensions.
#
# Coordinate with your pod to try a variety of values for $K$ in each dataset so you can compare results.
# ### Coding Exercise 2.1
#
# Fill in the missing parts of the `LinearAutoEncoder` class and training loop
#
# 1. The `LinearAutoEncoder` has two stages: an `encoder` which linearly maps from inputs to a hidden layer of size `K` (with no nonlinearity), and a `decoder` which maps back from `K` up to the number of pixels in each image (`my_dataset_dim`).
# 2. The training loop will minimize MSE loss, as written above.
# + colab={"base_uri": "https://localhost:8080/"} outputId="c50d9389-217a-46db-e846-cb0a3ec3b707"
class LinearAutoEncoder(nn.Module):
def __init__(self, K):
####################################################################
# Fill in all missing code below (...),
# then remove or comment the line below to test your class
raise NotImplementedError("Please complete the LinearAutoEncoder class!")
####################################################################
super(LinearAutoEncoder, self).__init__()
# encoder
self.enc_lin = ...
# decoder
self.dec_lin = ...
def encode(self, x):
h = ...
return h
def decode(self, h):
x_prime = ...
return x_prime
def forward(self, x):
flat_x = x.view(x.size()[0], -1)
h = self.encode(flat_x)
return self.decode(h).view(x.size())
def train_autoencoder(autoencoder, dataset, epochs=20, batch_size=250):
autoencoder.to(DEVICE)
optim = torch.optim.Adam(autoencoder.parameters(), lr=1e-3, weight_decay=1e-5)
loss_fn = nn.MSELoss()
g = torch.Generator()
g.manual_seed(2021)
loader = DataLoader(dataset, batch_size=batch_size, shuffle=True,
pin_memory=True, num_workers=2,
worker_init_fn=seed_worker,
generator=g)
mse_loss = torch.zeros(epochs * len(dataset) // batch_size, device=DEVICE)
i = 0
for epoch in trange(epochs, desc='Epoch'):
for im_batch, _ in loader:
im_batch = im_batch.to(DEVICE)
optim.zero_grad()
reconstruction = autoencoder(im_batch)
####################################################################
# Fill in all missing code below (...),
# then remove or comment the line below to test your function
raise NotImplementedError("Please complete the train_autoencoder function!")
####################################################################
# write the loss calculation
loss = ...
loss.backward()
optim.step()
mse_loss[i] = loss.detach()
i += 1
# After training completes, make sure the model is on CPU so we can easily
# do more visualizations and demos.
autoencoder.to('cpu')
return mse_loss.cpu()
# Pick your own K
K = 20
set_seed(2021)
# Uncomment to test your code
# lin_ae = LinearAutoEncoder(K)
# lin_losses = train_autoencoder(lin_ae, my_dataset)
# plot_linear_ae(lin_losses)
# + colab={"base_uri": "https://localhost:8080/", "height": 499, "referenced_widgets": ["cd4be182aea64f8a89fed6fd7866c05e", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "8baab8aca4f34777a56b6eedd6d672ec"]} outputId="27d8c2be-c560-4008-ed9f-21396abdb7fd"
# to_remove solution
class LinearAutoEncoder(nn.Module):
def __init__(self, K):
super(LinearAutoEncoder, self).__init__()
# encoder
self.enc_lin = nn.Linear(my_dataset_dim, K)
# decoder
self.dec_lin = nn.Linear(K, my_dataset_dim)
def encode(self, x):
h = self.enc_lin(x)
return h
def decode(self, h):
x_prime = self.dec_lin(h)
return x_prime
def forward(self, x):
flat_x = x.view(x.size()[0], -1)
h = self.encode(flat_x)
return self.decode(h).view(x.size())
def train_autoencoder(autoencoder, dataset, epochs=20, batch_size=250):
autoencoder.to(DEVICE)
optim = torch.optim.Adam(autoencoder.parameters(), lr=1e-3, weight_decay=1e-5)
loss_fn = nn.MSELoss()
g = torch.Generator()
g.manual_seed(2021)
loader = DataLoader(dataset, batch_size=batch_size, shuffle=True,
pin_memory=True, num_workers=2,
worker_init_fn=seed_worker,
generator=g)
mse_loss = torch.zeros(epochs * len(dataset) // batch_size, device=DEVICE)
i = 0
for epoch in trange(epochs, desc='Epoch'):
for im_batch, _ in loader:
im_batch = im_batch.to(DEVICE)
optim.zero_grad()
reconstruction = autoencoder(im_batch)
# write the loss calculation
loss = loss_fn(reconstruction.view(batch_size, -1),
target=im_batch.view(batch_size, -1))
loss.backward()
optim.step()
mse_loss[i] = loss.detach()
i += 1
# After training completes, make sure the model is on CPU so we can easily
# do more visualizations and demos.
autoencoder.to('cpu')
return mse_loss.cpu()
# Pick your own K
K = 20
set_seed(2021)
# Uncomment to test your code
lin_ae = LinearAutoEncoder(K)
lin_losses = train_autoencoder(lin_ae, my_dataset)
with plt.xkcd():
plot_linear_ae(lin_losses)
# -
# One way to think about AutoEncoders is that they automatically discover good dimensionality-reduction of the data. Another easy and common technique for dimensionality reduction is to project data onto the top $K$ **principal components** (Principal Component Analysis or PCA). For comparison, let's also do PCA.
# + colab={"base_uri": "https://localhost:8080/", "height": 17, "referenced_widgets": ["9a4fe0e519a74a98be76759667bd7d97", "6a59cfb8fc0a483da54fc817bbc47396", "5025466db8374bbe8b440ef3d738da23", "5e7e417d49b4420ebcd91c60ee049d72", "78c8a895099b4bec9f62d179ec782a16", "4bb31d15661f4fe9938ac2f516ddea31", "3d50e176fe3143b790ee159ad5d71405", "7a7b742536d948d8a075dfbba41043e9"]} outputId="e65fa279-22b7-4842-e177-d2951c094d38"
# PCA requires finding the top K eigenvectors of the data covariance. Start by
# finding the mean and covariance of the pixels in our dataset
g = torch.Generator()
g.manual_seed(2021)
loader = DataLoader(my_dataset,
batch_size=32,
pin_memory=True,
worker_init_fn=seed_worker,
generator=g)
mu, cov = image_moments((im for im, _ in loader),
n_batches=len(my_dataset) // 32)
pca_encode, pca_decode = pca_encoder_decoder(mu,
cov,
K)
# -
# Let's visualize some of the reconstructions ($\mathbf{x'}$) side-by-side with the input images ($\mathbf{x}$).
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 315} outputId="ad604006-3a45-49e4-94d0-03b56e96b4c7"
#@markdown Visualize the reconstructions `x'`
n_plot = 7
plt.figure(figsize=(10, 4.5))
for i in range(n_plot):
idx = torch.randint(len(my_dataset), size=())
image, _ = my_dataset[idx]
# Get reconstructed image from autoencoder
with torch.no_grad():
reconstruction = lin_ae(image.unsqueeze(0)).reshape(image.size())
# Get reconstruction from PCA dimensionality reduction
h_pca = pca_encode(image)
recon_pca = pca_decode(h_pca).reshape(image.size())
plt.subplot(3, n_plot, i + 1)
plot_torch_image(image)
if i == 0:
plt.ylabel('Original\nImage')
plt.subplot(3, n_plot, i + 1 + n_plot)
plot_torch_image(reconstruction)
if i == 0:
plt.ylabel(f'Lin AE\n(K={K})')
plt.subplot(3, n_plot, i + 1 + 2*n_plot)
plot_torch_image(recon_pca)
if i == 0:
plt.ylabel(f'PCA\n(K={K})')
plt.show()
# -
# ### Think!
#
# Compare the PCA-based reconstructions to those from the linear autoencoder. Is one better than the other? Are they equally good? Equally bad?
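#
# To go beyond eyeballing, a quantitative check can complement the visual comparison. The sketch below computes the reconstruction MSE of the linear autoencoder and of PCA on a batch of validation images (assuming `lin_ae`, `pca_encode`, `pca_decode`, and `my_valset` from the cells above).
# +
val_loader = DataLoader(my_valset, batch_size=256, shuffle=False)
val_images, _ = next(iter(val_loader))
with torch.no_grad():
    # Reconstruct the same batch with both methods and compare mean squared error
    ae_recon = lin_ae(val_images)
    pca_recon = pca_decode(pca_encode(val_images)).reshape(val_images.size())
print(f'Linear AE reconstruction MSE: {F.mse_loss(ae_recon, val_images).item():.4f}')
print(f'PCA reconstruction MSE:       {F.mse_loss(pca_recon, val_images).item():.4f}')
# -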
# ## Building a nonlinear convolutional autoencoder
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 519} outputId="6e4e7f12-85da-4a39-b685-8514866f07be"
#@title Video 3: Autoencoders
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="ZckJ-Wnx5vw", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
# -
# The `nn.Linear` layer by default has a "bias" term, which is a learnable offset parameter separate for each output unit. Just like the PCA encoder "centered" the data by subtracting off the average image (`mu`) before encoding and added it back in during decoding, a bias term in the decoder can effectively account for the first moment of the data (AKA the average of all images in the training set). Convolution layers do have bias parameters, but the bias is applied per filter rather than per pixel location. If we're generating RGB images, then `Conv2d` will learn only 3 biases: one for each of R, G, and B.
#
# For some conceptual continuity with both PCA and the `nn.Linear` layers above, the next block defines a custom layer for adding a learnable per-pixel offset. This custom layer will be used twice: as the first stage of the encoder and as the final stage of the decoder. Ideally, this means that the rest of the neural net can focus on fitting more interesting fine-grained structure.
class BiasLayer(nn.Module):
def __init__(self, shape):
super(BiasLayer, self).__init__()
init_bias = torch.zeros(shape)
self.bias = nn.Parameter(init_bias, requires_grad=True)
def forward(self, x):
return x + self.bias
# With that out of the way, we will next define a **nonlinear** and **convolutional** autoencoder. Here's a quick tour of the architecture:
#
# 1. The **encoder** once again maps from images to $\mathbf{h}\in\mathbb{R}^K$. This will use a `BiasLayer` followed by two convolutional layers (`nn.Conv2D`), followed by flattening and linearly projecting down to $K$ dimensions. The convolutional layers will have `ReLU` nonlinearities on their outputs.
# 1. The **decoder** inverts this process, taking in vectors of length $K$ and outputting images. Roughly speaking, its architecture is a "mirror image" of the encoder: the first decoder layer is linear, followed by two **deconvolution** layers (`nn.ConvTranspose2d`). The `ConvTranspose2d` layers will have `ReLU` nonlinearities on their _inputs_. This "mirror image" between the encoder and decoder is a useful and near-ubiquitous convention. The idea is that the decoder can then learn to approximately invert the encoder, but it is not a strict requirement (and it does not guarantee the decoder will be an exact inverse of the encoder!).
#
# Below is a schematic of the architecture for MNIST. Notice that the width and height dimensions of the image planes reduce after each `nn.Conv2d` and increase after each `nn.ConvTranspose2d`. With CIFAR10, the architecture is the same but the exact sizes will differ a bit.
#
# <img src="https://raw.githubusercontent.com/CIS-522/course-content/main/tutorials/W08_AutoEncoders_GANs/static/conv_sizes.png" />
#
# We will not go into detail about `ConvTranspose2d` here. For now, just know that it acts a bit like, but not exactly, an inverse to `Conv2d`. The following code demonstrates this change in sizes:
# + colab={"base_uri": "https://localhost:8080/"} outputId="43f0d50d-f505-4e67-b99b-8f72c96284d2"
dummy_image = torch.zeros(my_dataset_size).unsqueeze(0)
channels = my_dataset_size[0]
dummy_conv = nn.Conv2d(in_channels=channels,
out_channels=channels,
kernel_size=5)
dummy_conv_transpose = nn.ConvTranspose2d(in_channels=channels,
out_channels=channels,
kernel_size=5)
print(f'Size of image is {dummy_image.size()}')
print(f'Size of Conv2D(image) {dummy_conv(dummy_image).size()}')
print(f'Size of ConvTranspose2D(image) {dummy_conv_transpose(dummy_image).size()}')
print(f'Size of ConvTranspose2D(Conv2D(image)) {dummy_conv_transpose(dummy_conv(dummy_image)).size()}')
# -
# ### Coding Exercise 2.2: Fill in code for the `ConvAutoEncoder` module
# + colab={"base_uri": "https://localhost:8080/"} outputId="707f4e83-558b-4be9-a46a-b57bb<PASSWORD>"
class ConvAutoEncoder(nn.Module):
def __init__(self, K, num_filters=32, filter_size=5):
super(ConvAutoEncoder, self).__init__()
# With padding=0, the number of pixels cut off from each image dimension
# is filter_size // 2. Double it to get the amount of pixels lost in
# width and height per Conv2D layer, or added back in per
# ConvTranspose2D layer.
filter_reduction = 2 * (filter_size // 2)
# After passing input through two Conv2d layers, the shape will be
# 'shape_after_conv'. This is also the shape that will go into the first
# deconvolution layer in the decoder
self.shape_after_conv = (num_filters,
my_dataset_size[1]-2*filter_reduction,
my_dataset_size[2]-2*filter_reduction)
flat_size_after_conv = self.shape_after_conv[0] \
* self.shape_after_conv[1] \
* self.shape_after_conv[2]
####################################################################
# Fill in all missing code below (...),
# then remove or comment the line below to test your class
raise NotImplementedError("Please complete the ConvAutoEncoder class!")
####################################################################
# Create encoder layers (BiasLayer, Conv2d, Conv2d, Flatten, Linear)
self.enc_bias = ...
self.enc_conv_1 = ...
self.enc_conv_2 = ...
self.enc_flatten = ...
self.enc_lin = ...
# Create decoder layers (Linear, Unflatten(-1, self.shape_after_conv), ConvTranspose2d, ConvTranspose2d, BiasLayer)
self.dec_lin = ...
self.dec_unflatten = ...
self.dec_deconv_1 = ...
self.dec_deconv_2 = ...
self.dec_bias = ...
def encode(self, x):
# Your code here: encode batch of images (don't forget ReLUs!)
s = ...
s = ...
s = ...
s = ...
h = ...
return h
def decode(self, h):
# Your code here: decode batch of h vectors (don't forget ReLUs!)
s = ...
s = ...
s = ...
s = ...
x_prime = ...
return x_prime
def forward(self, x):
return self.decode(self.encode(x))
K = 20
set_seed(2021)
# Uncomment to test your solution
# conv_ae = ConvAutoEncoder(K=K)
# assert conv_ae.encode(my_dataset[0][0].unsqueeze(0)).numel() == K, "Encoder output size should be K!"
# conv_losses = train_autoencoder(conv_ae, my_dataset)
# plot_conv_ae(lin_losses, conv_losses)
# + colab={"base_uri": "https://localhost:8080/", "height": 499, "referenced_widgets": ["55cd04c62d174f0ca26449c7da68ba4b", "89ba291999f24da490739e1204c7240f", "8a6370524a3a464b8578f953b22dce72", "593d1fde83084026bedfb5cff1a2abf9", "0a0872324fd24a66a44b2a1b460cea4d", "08f6ee8cec2d46118ce45bc233c2586c", "ec3fd140065947859d2a74430c270b38", "6b0a8476167d4fa1b0886a81f70b709d"]} outputId="11b03d9e-8ac6-42c5-b385-330a98c781ba"
# to_remove solution
class ConvAutoEncoder(nn.Module):
def __init__(self, K, num_filters=32, filter_size=5):
super(ConvAutoEncoder, self).__init__()
# With padding=0, the number of pixels cut off from each image dimension
# is filter_size // 2. Double it to get the amount of pixels lost in
# width and height per Conv2D layer, or added back in per
# ConvTranspose2D layer.
filter_reduction = 2 * (filter_size // 2)
# After passing input through two Conv2d layers, the shape will be
# 'shape_after_conv'. This is also the shape that will go into the first
# deconvolution layer in the decoder
self.shape_after_conv = (num_filters,
my_dataset_size[1]-2*filter_reduction,
my_dataset_size[2]-2*filter_reduction)
flat_size_after_conv = self.shape_after_conv[0] \
* self.shape_after_conv[1] \
* self.shape_after_conv[2]
# Create encoder layers (BiasLayer, Conv2d, Conv2d, Flatten, Linear)
self.enc_bias = BiasLayer(my_dataset_size)
self.enc_conv_1 = nn.Conv2d(my_dataset_size[0], num_filters, filter_size)
self.enc_conv_2 = nn.Conv2d(num_filters, num_filters, filter_size)
self.enc_flatten = nn.Flatten()
self.enc_lin = nn.Linear(flat_size_after_conv, K)
# Create decoder layers (Linear, Unflatten(-1, self.shape_after_conv), ConvTranspose2d, ConvTranspose2d, BiasLayer)
self.dec_lin = nn.Linear(K, flat_size_after_conv)
self.dec_unflatten = nn.Unflatten(dim=-1, unflattened_size=self.shape_after_conv)
self.dec_deconv_1 = nn.ConvTranspose2d(num_filters, num_filters, filter_size)
self.dec_deconv_2 = nn.ConvTranspose2d(num_filters, my_dataset_size[0], filter_size)
self.dec_bias = BiasLayer(my_dataset_size)
def encode(self, x):
# Your code here: encode batch of images (don't forget ReLUs!)
s = self.enc_bias(x)
s = F.relu(self.enc_conv_1(s))
s = F.relu(self.enc_conv_2(s))
s = self.enc_flatten(s)
h = self.enc_lin(s)
return h
def decode(self, h):
# Your code here: decode batch of h vectors (don't forget ReLUs!)
s = F.relu(self.dec_lin(h))
s = self.dec_unflatten(s)
s = F.relu(self.dec_deconv_1(s))
s = self.dec_deconv_2(s)
x_prime = self.dec_bias(s)
return x_prime
def forward(self, x):
return self.decode(self.encode(x))
K = 20
set_seed(2021)
# Uncomment to test your solution
conv_ae = ConvAutoEncoder(K=K)
assert conv_ae.encode(my_dataset[0][0].unsqueeze(0)).numel() == K, "Encoder output size should be K!"
conv_losses = train_autoencoder(conv_ae, my_dataset)
with plt.xkcd():
plot_conv_ae(lin_losses, conv_losses)
# -
# You should see that the `ConvAutoEncoder` achieved lower MSE loss than the linear one. If not, you may need to retrain it (or run another few training epochs from where it left off). We make fewer guarantees on this working with CIFAR10, but it should definitely work with MNIST.
#
# Now let's visually compare the reconstructed images from the linear and nonlinear autoencoders. Keep in mind that both have the same dimensionality for $\mathbf{h}$!
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 318} outputId="c9530662-0dcd-493b-81f6-1be9279b77b9"
#@markdown Visualize the linear and nonlinear AE outputs
n_plot = 7
plt.figure(figsize=(10, 4.5))
for i in range(n_plot):
idx = torch.randint(len(my_dataset), size=())
image, _ = my_dataset[idx]
with torch.no_grad():
# Get reconstructed image from linear autoencoder
lin_recon = lin_ae(image.unsqueeze(0))[0]
# Get reconstruction from deep (nonlinear) autoencoder
nonlin_recon = conv_ae(image.unsqueeze(0))[0]
plt.subplot(3, n_plot, i+1)
plot_torch_image(image)
if i == 0:
plt.ylabel('Original\nImage')
plt.subplot(3, n_plot, i + 1 + n_plot)
plot_torch_image(lin_recon)
if i == 0:
plt.ylabel(f'Lin AE\n(K={K})')
plt.subplot(3, n_plot, i + 1 + 2*n_plot)
plot_torch_image(nonlin_recon)
if i == 0:
plt.ylabel(f'NonLin AE\n(K={K})')
plt.show()
# -
# ## Inspecting the hidden representations
# Let's start by plotting points in the hidden space ($\mathbf{h}$), colored by class of the image (which, of course, the autoencoder didn't know about during training!)
# + colab={"base_uri": "https://localhost:8080/", "height": 430} outputId="0eea925f-6665-4b7e-b75d-36c9015481a8"
h_vectors = torch.zeros(len(my_valset), K, device=DEVICE)
labels = torch.zeros(len(my_valset), dtype=torch.int32)
g = torch.Generator()
g.manual_seed(2021)
loader = DataLoader(my_valset, batch_size=200,
pin_memory=True,
worker_init_fn=seed_worker,
generator=g)
conv_ae.to(DEVICE)
i = 0
for im, la in loader:
b = im.size()[0]
h_vectors[i:i+b, :] = conv_ae.encode(im.to(DEVICE))
labels[i:i+b] = la
i += b
conv_ae.to('cpu')
h_vectors = h_vectors.detach().cpu()
_, _, h_pcs = torch.pca_lowrank(h_vectors, q=2)
h_xy = h_vectors @ h_pcs
plt.figure(figsize=(7, 6))
plt.scatter(h_xy[:, 0], h_xy[:, 1], c=labels, cmap='hsv')
plt.title('2D projection of h, colored by class')
plt.colorbar()
plt.show()
# -
# To explore the hidden representations, $\mathbf{h}$, we're going to pick two random images from the dataset and interpolate them 3 different ways. Let's introduce some notation for this: we'll use a variable $t \in [0,1]$ to gradually transition from image $\mathbf{x}_1$ at $t=0$ to image $\mathbf{x}_2$ at $t=1$. Using $\mathbf{x}(t)$ to denote the interpolated output, the three methods will be
#
# 1. interpolate the raw pixels, so $$\mathbf{x}(t) = (1-t) \cdot \mathbf{x}_1 + t \cdot \mathbf{x}_2$$
# 2. interpolate their encodings from the **linear** AE, so $$\mathbf{x}(t) = \text{linear_decoder}((1-t) \cdot \text{linear_encoder}(\mathbf{x}_1) + t \cdot \text{linear_encoder}(\mathbf{x}_2))$$
# 3. interpolate their encodings from the **nonlinear** AE, so $$\mathbf{x}(t) = \text{conv_decoder}((1-t) \cdot \text{conv_encoder}(\mathbf{x}_1) + t \cdot \text{conv_encoder}(\mathbf{x}_2))$$
#
# Note: this demo will likely look better using MNIST than using CIFAR. Check with other members of your pod. If you're using CIFAR for this notebook, consider having someone using MNIST share their screen.
#
# What do you notice about the "interpolated" images, especially around $t \approx 1/2$? How many distinct classes do you see in the bottom row?
# Re-run the cell below a few times to look at multiple examples.
#
# **Discuss with your pod and describe what is happening here.**
# + colab={"base_uri": "https://localhost:8080/", "height": 316} outputId="36ae3eb9-80e3-4a22-9b09-d20a43591d64"
idx1 = torch.randint(len(my_dataset), size=())
idx2 = torch.randint(len(my_dataset), size=())
x1, _ = my_dataset[idx1]
x2, _ = my_dataset[idx2]
n_interp = 11
with torch.no_grad():
h1_lin = lin_ae.encode(x1.reshape(1, -1))
h2_lin = lin_ae.encode(x2.reshape(1, -1))
h1_conv = conv_ae.encode(x1.unsqueeze(0))
h2_conv = conv_ae.encode(x2.unsqueeze(0))
plt.figure(figsize=(14, 4.5))
for i in range(n_interp):
t = i / (n_interp - 1)
pixel_interp = (1 - t)*x1 + t*x2
plt.subplot(3, n_interp, i + 1)
plot_torch_image(pixel_interp)
if i == 0:
plt.ylabel('Raw\nPixels')
plt.title(f't={i}/{n_interp-1}')
with torch.no_grad():
lin_ae_interp = lin_ae.decode((1-t)*h1_lin + t*h2_lin)
plt.subplot(3, n_interp, i + 1 + n_interp)
plot_torch_image(lin_ae_interp.reshape(my_dataset_size))
if i == 0:
plt.ylabel('Lin AE')
with torch.no_grad():
conv_ae_interp = conv_ae.decode((1-t)*h1_conv + t*h2_conv)[0]
plt.subplot(3, n_interp, i + 1 + 2*n_interp)
plot_torch_image(conv_ae_interp)
if i == 0:
plt.ylabel('NonLin AE')
plt.show()
# -
# ---
# # Section 3: Generative models and density networks
# ## Generating novel images from the decoder
#
# If we isolate the decoder part of the AutoEncoder, what we have is a neural network that takes as input a vector of size $K$ and produces as output an image that looks something like our training data. Recall that in our earlier notation, we had an input $\mathbf{x}$ that was mapped to a low-dimensional hidden representation $\mathbf{h}$ which was then decoded into a reconstruction of the input, $\mathbf{x'}$:
# $$\mathbf{x} \overset{\text{encode}}{\longrightarrow} \mathbf{h} \overset{\text{decode}}{\longrightarrow} \mathbf{x'}\, .$$
# Partly as a matter of convention, and partly to distinguish where we are going next from the previous section, we're going to introduce a new variable, $\mathbf{z} \in \mathbb{R}^K$, which will take the place of $\mathbf{h}$. The key difference is that while $\mathbf{h}$ is produced by the encoder for a particular $\mathbf{x}$, $\mathbf{z}$ will be drawn out of thin air from a prior of our choosing:
# $$\mathbf{z} \sim p(\mathbf{z})\\ \mathbf{z} \overset{\text{decode}}{\longrightarrow} \mathbf{x}\, .$$
# (Note that it is also conventional to drop the "prime" on $\mathbf{x}$ when it is no longer being thought of as a "reconstruction").
# ### Coding Exercise 3.1: sample $\mathbf{z}$ from a standard normal and visualize the images produced
# + colab={"base_uri": "https://localhost:8080/"} outputId="78eae2b5-0638-46c2-dabb-8ef8829ce81b"
def generate_images(autoencoder, K, n_images=1):
"""Generate n_images 'new' images from the decoder part of the given
autoencoder.
returns (n_images, channels, height, width) tensor of images
"""
# Concatenate tuples to get (n_images, channels, height, width)
output_shape = (n_images,) + my_dataset_size
with torch.no_grad():
####################################################################
# Fill in all missing code below (...),
# then remove or comment the line below to test your function
raise NotImplementedError("Please complete the generate_images function!")
####################################################################
# sample z, pass through autoencoder.decode(), and reshape output.
z = ...
x = ...
return x
K = 20
set_seed(2021)
# Uncomment to run it
# images = generate_images(conv_ae, K, n_images=25)
# plot_images(images)
# + colab={"base_uri": "https://localhost:8080/", "height": 378} outputId="8a07c8f1-7747-4985-ac81-375a49c11213"
# to_remove solution
def generate_images(autoencoder, K, n_images=1):
"""Generate n_images 'new' images from the decoder part of the given
autoencoder.
returns (n_images, channels, height, width) tensor of images
"""
# Concatenate tuples to get (n_images, channels, height, width)
output_shape = (n_images,) + my_dataset_size
with torch.no_grad():
# sample z, pass through autoencoder.decode(), and reshape output.
z = torch.randn(n_images, K)
x = autoencoder.decode(z).reshape(output_shape)
return x
K = 20
set_seed(2021)
# Uncomment to run it
images = generate_images(conv_ae, K, n_images=25)
with plt.xkcd():
plot_images(images)
# -
# ## Formalizing the problem: density estimation with maximum likelihood
#
# Note: we've moved the technical details of "formalizing the problem" to Appendix A.1 at the end of this notebook. Those who want more of the theoretical/mathematical backstory are encouraged to read it. Those who just want to build a VAE, carry on!
# ---
# # Section 4: Variational Auto-Encoders (VAEs)
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 519} outputId="3844b897-c69f-4c22-b753-14f71476b9ed"
#@title Video 4: Variational Auto-Encoders (VAEs)
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="MKfeTzn_HaA", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
# -
# ## Components of a VAE
# ## Recognition models and density networks
#
# Variational AutoEncoders (VAEs) are a lot like the classic AutoEncoders (AEs) you just saw, but where we explicitly think about probability distributions. In the language of VAEs, the __encoder__ is replaced with a __recognition model__, and the __decoder__ is replaced with a __density network__.
#
# Where in a classic autoencoder the encoder maps from images to a single hidden vector,
# $$\mathbf{x} \overset{\text{AE}}{\longrightarrow} \mathbf{h} \, , $$ in a VAE we would say that a recognition model maps from inputs to entire __distributions__ over hidden vectors,
# $$\mathbf{x} \overset{\text{VAE}}{\longrightarrow} q(\mathbf{z}) \, ,$$
# which we will then sample from.
# We'll say more in a moment about what kind of distribution $q(\mathbf{z})$ is.
# Part of what makes VAEs work is that the loss function will require good reconstructions of the input not just for a single $\mathbf{z}$, but _on average_ from samples of $\mathbf{z} \sim q(\mathbf{z})$.
#
# In the classic autoencoder, we had a decoder which maps from hidden vectors to reconstructions of the input:
# $$\mathbf{h} \overset{\text{AE}}{\longrightarrow} \mathbf{x'} \, .$$
# In a density network, reconstructions are expressed in terms of a distribution:
# $$\mathbf{z} \overset{\text{VAE}}{\longrightarrow} p(\mathbf{x}|\mathbf{z};\mathbf{w}) $$
# where, as above, $p(\mathbf{x}|\mathbf{z};\mathbf{w})$ is defined by mapping $\mathbf{z}$ through a density network then treating the resulting $f(\mathbf{z};\mathbf{w})$ as the mean of a (Gaussian) distribution over $\mathbf{x}$.
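#
# To make the last point concrete (under the common simplifying assumption, used here only for illustration, that the density network outputs the mean of an isotropic Gaussian with fixed variance $\sigma_x^2$):
# $$\log p(\mathbf{x}|\mathbf{z};\mathbf{w}) = -\frac{1}{2\sigma_x^2}\left\|\mathbf{x} - f(\mathbf{z};\mathbf{w})\right\|_2^2 + \text{const} \, ,$$
# so maximizing this log-likelihood, averaged over samples $\mathbf{z} \sim q(\mathbf{z})$, recovers (up to scaling) the familiar MSE reconstruction term.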
# ### Coding Exercise 4.1: sampling from $q(\mathbf{z})$
#
# How can a neural network (the __recognition model__) output an entire probability distribution $$\mathbf{x} \longrightarrow q(\mathbf{z}) \, ?$$
# One idea would be to make the weights of the neural network stochastic, so that every time the network is run, a different $\mathbf{z}$ is produced. (In fact, this is quite common in [Bayesian Neural Networks](https://medium.com/neuralspace/bayesian-neural-network-series-post-1-need-for-bayesian-networks-e209e66b70b2), but this isn't what people use in VAEs.)
#
# Instead, we will start by committing to a particular _family_ of distributions. We'll then have the recognition model output the _parameters_ of $q$, which we'll call $\phi$. A common choice, which we will use throughout, is the family of isotropic multivariate Gaussians$^\dagger$:
# $$q(\mathbf{z};\phi) = \mathcal{N}(\mathbf{z};\boldsymbol{\mu},\sigma^2\mathbf{I}_K) = \prod_{k=1}^K \mathcal{N}(z_k; \mu_k, \sigma^2)$$
# where the $K+1$ parameters are$^*$
# $$\phi = \lbrace{\mu_1, \mu_2, \ldots, \mu_K, \log(\sigma)}\rbrace \, .$$
# By defining the last entry of $\phi$ as the _logarithm_ of $\sigma$, the last entry can be any real number while enforcing the requirement that $\sigma > 0$.
#
# A recognition model is a neural network that takes $\mathbf{x}$ as input and produces $\phi$ as output. The purpose of the following exercise is not to write a recognition model (that will come later), but to clarify the relationship between $\phi$ and $q(\mathbf{z})$. You will write a function, `rsample`, which takes as input a batch $\phi$s and will output a set of samples of $\mathbf{z}$ drawn from $q(\mathbf{z};\phi)$.
# + colab={"base_uri": "https://localhost:8080/"} outputId="a634af97-9c85-40df-a60f-5d14b77adb67"
def rsample(phi, n_samples):
"""Sample z ~ q(z;phi)
    Output z is size [b, n_samples, K] given phi with shape [b,K+1]. The first K
entries of each row of phi are the mean of q, and phi[:,-1] is the log
standard deviation
"""
b, kplus1 = phi.size()
k = kplus1 - 1
mu, sig = phi[:, :-1], phi[:, -1].exp()
####################################################################
# Fill in all missing code below (...),
# then remove or comment the line below to test your function
raise NotImplementedError("Please complete the rsample function!")
####################################################################
eps = ...
return eps*sig.view(b, 1, 1) + mu.view(b, 1, k)
phi = torch.randn(4, 3, device=DEVICE)
set_seed(2021)
# Uncomment below to test your code
# zs = rsample(phi, 100)
# assert zs.size() == (4, 100, 2), "rsample size is incorrect!"
# assert zs.device == phi.device, "rsample device doesn't match phi device!"
# zs = zs.cpu()
# plot_phi(phi)
# + colab={"base_uri": "https://localhost:8080/", "height": 244} outputId="3e7c8f03-be00-4bf4-f737-e7c17fc170fe"
# to_remove solution
def rsample(phi, n_samples):
"""Sample z ~ q(z;phi)
    Output z is size [b,n_samples,K] given phi with shape [b,K+1]. The first K
entries of each row of phi are the mean of q, and phi[:,-1] is the log
standard deviation
"""
b, kplus1 = phi.size()
k = kplus1 - 1
mu, sig = phi[:, :-1], phi[:, -1].exp()
eps = torch.randn(b, n_samples, k, device=phi.device)
return eps*sig.view(b, 1, 1) + mu.view(b, 1, k)
phi = torch.randn(4, 3, device=DEVICE)
set_seed(2021)
# Uncomment below to test your code
zs = rsample(phi, 100)
assert zs.size() == (4, 100, 2), "rsample size is incorrect!"
assert zs.device == phi.device, "rsample device doesn't match phi device!"
zs = zs.cpu()
with plt.xkcd():
plot_phi(phi)
# -
# $^\dagger$ PyTorch has a `MultivariateNormal` class which handles multivariate Gaussian distributions with arbitrary covariance matrices. It is not very beginner-friendly, though, so we will write our own functions to work with $\phi$. Doing so teaches you some implementation details, and it is not very hard, especially since we only use an isotropic ($\sigma$) or diagonal ($\lbrace{\sigma_1, \ldots, \sigma_K}\rbrace$) covariance.
#
# $^*$ Another common parameterization is to use a separate $\sigma$ for each dimension of $\mathbf{z}$, in which case $\phi$ would instead contain $2K$ parameters:
# $$\phi = \lbrace{\mu_1, \mu_2, \ldots, \mu_K, \log(\sigma_1), \ldots, \log(\sigma_K)}\rbrace \, .$$
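# As a quick illustration of this alternative (a minimal sketch, not part of the exercise; the name `rsample_diag` is made up here): with a diagonal covariance, `phi` has shape `[b, 2K]`, the first $K$ entries being the means and the last $K$ the per-dimension log standard deviations.
# +
def rsample_diag(phi, n_samples):
    """Sample z ~ q(z;phi) for a diagonal Gaussian.
    phi has shape [b, 2K]: the first K entries of each row are the means,
    the last K are log standard deviations. Output has shape [b, n_samples, K].
    """
    b, two_k = phi.size()
    k = two_k // 2
    mu, log_sig = phi[:, :k], phi[:, k:]
    eps = torch.randn(b, n_samples, k, device=phi.device)
    return eps * log_sig.exp().view(b, 1, k) + mu.view(b, 1, k)
# -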
# ---
# # Section 5: State of the art VAEs and Wrap-up
# + cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 519} outputId="c404c2e0-7044-4bc9-8257-af3c10ed48ca"
#@title Video: State-of-the-art VAEs
video = YouTubeVideo(id="f2jSzq7lndo", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
| tutorials/W2D5_GenerativeModels/W2D5_Tutorial1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # About the Movielens Collaborative Recommendation Assignment
#
# ### File description
# > Part 1 is implemented with a user-based collaborative filtering algorithm
#
# ### 1. User-based collaborative filtering
#
# #### 1) Load the data
import pandas as pd
import numpy as np
import math
# +
u_cols = ['user_id', 'age', 'sex', 'occupation', 'zip_code']
users = pd.read_csv('./movielens/ml-100k/u.user', sep='|', names=u_cols,encoding='latin-1')
r_cols = ['user_id', 'movie_id', 'rating', 'unix_timestamp']
ratings = pd.read_csv('./movielens/ml-100k/u.data', sep='\t', names=r_cols,encoding='latin-1')
m_cols = ['movie_id', 'title', 'release_date', 'video_release_date', 'imdb_url']
movies = pd.read_csv('./movielens/ml-100k/u.item', sep='|', names=m_cols, usecols=range(5),encoding='latin-1')
movie_ratings = pd.merge(movies, ratings)
lens = pd.merge(movie_ratings, users)
# print(lens)
# -
# #### 2) Reshape the data into a user-item matrix
# +
# 943 users, 1682 movies
user_list = np.zeros([943, 1682])
for i in range(len(ratings.values)):
user_list[ratings.values[i][0]-1][ratings.values[i][1]-1] = ratings.values[i][2]
# -
# #### 3) Core recommendation algorithm
# +
def user_based_recommend(data, user_num, userK, topK):
""" 基於用戶user的topK 舉薦
Args:
data: 數據表
user: 用戶編號
userK: 用戶組 topK
topK: 商品組 topK
Returns:
舉薦列表
"""
user_num -= 1
user = data[user_num]
sim_list = []
    # del data[user_num]  # remove the user themselves from the ratings table
for i in range(len(data)):
sim_list.append([cos_sim(user, data[i]), i])
sim_list.sort()
sim_list = sim_list[-userK - 1:]
result = {}
for i in range(len(sim_list)):
for a in range(len(data[0])):
if data[sim_list[i][1]][a] != 0 and user[a] == 0:
if a in result:
result[a] += data[sim_list[i][1]][a]
else:
result[a] = data[sim_list[i][1]][a]
result = sorted(result.items(), key=lambda x: x[1], reverse=True)
# print(result[:topK])
return result[:topK]
def cos_sim(x_, y_):
""" 余憲相似性
Args:
- x: mat, 以行向量形式存儲
- y: mat, 以行向量形式存儲
Return: x 和 y 之間的余憲相似的度
"""
x, y = [], []
for i in range(len(x_)):
if x_[i] != 0 and y_[i] != 0:
x.append(x_[i])
y.append(y_[i])
x, y = np.array(x), np.array(y)
numerator = np.sum(x.T * y)
denominator = np.sqrt(np.sum(x.T * x)) * np.sqrt(np.sum(y.T * y))
return numerator / denominator
# -
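# For reference, the similarity computed above is the standard cosine similarity, restricted to the items both users have rated:
# $$\cos(\mathbf{x},\mathbf{y}) = \frac{\mathbf{x}\cdot\mathbf{y}}{\lVert\mathbf{x}\rVert\,\lVert\mathbf{y}\rVert}$$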
user_based_recommend(user_list, 1, 3, 3)
# #### Note: from the data we can verify that these are three movies user 1 has not watched; the algorithm recommends them through the TopK ranking. Below we print the details of those movies.
li = user_based_recommend(user_list, 1, 3, 3)
# print(movies.values)
for i in li:
print(movies.values[i[0]])
# #### 4) Experiment with the validation and test splits
# +
r_cols = ['user_id', 'movie_id', 'rating', 'unix_timestamp']
train = pd.read_csv('./movielens/ml-100k/u1.base', sep='\t', names=r_cols, encoding='latin-1')
test = pd.read_csv('./movielens/ml-100k/u1.test', sep='\t', names=r_cols, encoding='latin-1')
train_list = np.zeros([943, 1682])
test_list = np.zeros([943, 1682])
for i in range(len(train.values)):
train_list[train.values[i][0] - 1][train.values[i][1] - 1] = train.values[i][2]
for i in range(len(test.values)):
test_list[test.values[i][0] - 1][test.values[i][1] - 1] = test.values[i][2]
user_based_recommend(train_list, 1, 10, 5)
# +
tj = user_based_recommend(train_list, 2, 10, 10)
tr = 0
num = 0
for i in tj:
    if test_list[2-1][i[0]] > 0:  # check user 2's test ratings (the same user we generated recommendations for)
        tr += 1
    num += 1
print('Successful recommendations: {} movies'.format(tr))
print('Recommendation precision: {}'.format(float(tr) / num))
# -
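# A rough sanity check beyond a single user is to average the hit rate of the top-K recommendations over several users. This is a minimal sketch (the helper `precision_at_k` is not part of the original assignment); it reuses `user_based_recommend`, `train_list` and `test_list` defined above.
# +
def precision_at_k(train, test, user_ids, userK=10, topK=10):
    """Average fraction of recommended movies that appear in each user's test ratings."""
    precisions = []
    for u in user_ids:
        recs = user_based_recommend(train, u, userK, topK)
        if not recs:
            continue
        hits = sum(1 for item, _ in recs if test[u - 1][item] > 0)
        precisions.append(hits / float(len(recs)))
    return sum(precisions) / float(len(precisions))

# Only the first 10 users here, because the pure-Python similarity loop is slow:
print('Average precision@10 over users 1-10: {}'.format(precision_at_k(train_list, test_list, range(1, 11))))
# -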
| movielens/work.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# +
import requests
import ultron8
from ultron8 import debugger
from ultron8 import client
from ultron8.api import settings
from ultron8.u8client import session
username = settings.FIRST_SUPERUSER
password = settings.FIRST_SUPERUSER_PASSWORD
# +
s = session.BasicAuth(username, password)
# +
debugger.debug_dump_exclude(s)
# +
debugger.dump_magic(s)
# +
debugger.dump_all(s)
# +
assert isinstance(s, ultron8.u8client.session.BasicAuth)
# +
url = "http://localhost:11267/v1/users"
r = requests.Request('GET', url, auth=s)
p = r.prepare()
assert isinstance(s, ultron8.u8client.session.BasicAuth)
assert s.password == "password"
assert s.username == "<EMAIL>"
assert p.headers['Authorization'] == requests.auth._basic_auth_str(username, password)
# -
p
debugger.dump_all(p)
from tests.utils.utils import get_superuser_jwt_request
r = get_superuser_jwt_request()
r.json()
| example_notebooks/session-debug-notebook.ipynb |
// -*- coding: utf-8 -*-
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .js
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: Javascript (Node.js)
// language: javascript
// name: javascript
// ---
// # Software Development 2
// Topics for today will include:
// - Announcing Teams
// - Iteration 0
// - Defining and Choosing Agile Roles
// - Agile Meeting Types
// - Git Branching and Branch Management
// - Introducing Sass/Scss
// - Introducing Bootstrap, Foundation, & Materialize
// - The Grid System
// - Connecting Javascript
// - Building a page with Bootstrap
// - Bonus: REST and Swagger
//
//
// If you'd like to take notes in this Jupyter Notebook, I **HIGHLY RECOMMEND** you make a duplicate, add the word student or something to the name so you don't get a merge conflict!
// ## Project Groups!
// If you haven't decided who you'd like to be in a group with yet, you'll be assigned to a group this week. If you have a partial group or preferred members, Slack me as a group by the end of class. Next week I want groups to present their initial ideas. I'd also like you to have at least five 2-week iterations. You'll be rotating roles as well!
//
// Also a great place to have some of the project discussions that will be coming about is in the class slack!
//
// | Group/Company Name | Confirmed Groups |
// | ------------------ |------------------|
// | ------------------ | <NAME>, <NAME>, <NAME>, <NAME>, <NAME> | 5
// | ------------------ | <NAME>, <NAME>, <NAME>, <NAME>, <NAME> | 10
// | ------------------ | <NAME> ,<NAME>, <NAME>, <NAME> | 14
// | ------------------ | <NAME>, <NAME>, <NAME>, <NAME> | 18
// | ------------------ | <NAME>, <NAME>, <NAME>, <NAME> | 22
// | ------------------ | <NAME>, <NAME>, <NAME>, <NAME> | 26
// ## Iteration 0
// This will have several components to it. The purpose of an Iteration 0 meeting is so that you can get acclimated with your team as well as get started on developing your application.
//
// [Design Doc](https://ilearn.marist.edu/access/content/group/7cd1aac3-f42d-4971-8637-c3891caae5ab/Semester%20Project/Semester%20Project%20HLD-2.pdf)
//
// We need to do at least the following:
// - Introduce the team
// - Define team meeting cadence, means of communication, and scheduled activities
// - Discuss means of documentation and development
// - Create Design Specifications
// - Create Initial Backlog
// - Assign Roles (These will change since we want every one to be in every role at some point)
//
// This all needs to be discussed and decided by your team so that you can get started. Getting this stuff out of the way early gets everyone on the same page. It also makes it so that the only thing you should have to focus on during the iterations is the work itself.
// ## Defining and Choosing Agile Roles
//
// We've discussed this already but since we need to choose the roles for our first iteration lets talk about them again. We have 3 roles in our basic scheme. First we have our developers, second we have our Iteration Manager/Scrum Master, then finally we have our Product Owner.
//
// These three roles are very important because they tell folks what they should be primarily responsible for and focused on.
//
// Since we've already gone over these we'll hit the highlights:
// - **Developers**
// - Includes all members of the team. Not just developers by traditional name.
// - Responsible for taking thing off of the backlog and completing them.
// - Should be attending stand ups and most significant team meetings.
//  - Should be communicating with one another as well as the PO and IM when things go awry or when they think things should pivot.
// - Should be able to self organize. This isn't a top down structure. You need to be able to manage yourself however optimal to benefit the team and get work done.
// </br>
// </br>
// - **Product Owner**
// - Is responsible for managing the scrum/standup backlog
// - Is paired with the IM/SM
// - Is responsible for Release Management, we won't utilize this to the fullest extent but one of the core tenets of agile is constant and consistent delivery. Meaning delivering modularized and working pieces that work on their own. Typically it's up to the PO to say when things are good to go. For this class we'll mostly be communicating on status.
// - Is responsible for Stakeholder/Customer Management, you're responsible for the communication between your team and the customer. You don't wanna leave your team out to dry by fully siding with the customer. You also don't want to make the customer feel unwanted by doing only what the team wants. _**BE SURE TO CONFER WITH YOUR TEAM BEFORE YOU PROMISE THE CUSTOMER ANYTHING!**_
// </br>
// </br>
// - **Iteration Manager/Scrum Master**
// - Should be keeping the team all on the same page!
// - Is paired with the PO
// - Should be making sure the team has full transparency. You want to know what everyone on the team is doing so that you can be firing on all cylinders. Everyone should also know what everyone is doing!
// - Should be focused on Empiricism. Basically, we tried it out, it didn't work, we're gonna do something different. You're focused on being adaptive and doing things that way.
// - Is focused on promoting self organization. You want to enable the team to be able to get things done on their own. Remember this isn't top down, you have to communicate with one another to resolve issues and disputes.
//  - Should be promoting 5 core values to the team. Now as cheesy as this is gonna sound, it makes being a team a lot easier. Those values are courage, focus, commitment, respect, and openness. I'm not going to expect you to remember this but to more so embody it. Treat your teammates well and it becomes a lot easier to be on a team.
// ## Agile Meeting Types
//
// So depending on the team you're on or the style of agile that you're in, you may have more or fewer of these meetings, but for the purpose of this semester we only have 4 (3 if you run them the way I recommend). That being said, let's get into them and get our heads wrapped around what we'll be doing.
//
// ### Stand Ups
// Your stand ups are probably the easiest and most complex meeting at the same time. The reason that I personally believe this is because the standup is supposed to be super short. This means it's up to the team to be concise yet maximize communicativeness. Some teams implement an ELMO rule (Enough, Let's Move On), some choose to hash it out there and elongate the meeting (Typically not favored), Some forget to have the meeting at all/choose to have it weekly instead of just daily (More rare, and you have to do them lol).
//
// For the purpose of this class I'd say you should probably have at least 3 a week to start. Before you burn me at the stake I'm going to show you why this is something that becomes really easy if you're managed properly and if your IM is on point.
//
// In Slack you can set a reminder to yourself or your channel like this:
// `/remind me to “Post daily standup status! (ABC Format)” at 9AM every weekday`
//
// This will set a slack reminder to ping you every day to post your status. Now I also have `ABC Format` in there this is the other part that makes this easy. ABC Format stands for Accomplishments, Blockers, and Commitments. For example:
//
// ```
// A: Lesson 4 Written
// B: ijavascript has been giving the students problems.
// C: Fixing ijavascript issue, writing Lesson 5, Happy Hour
// ```
//
// This is a quick and simple solution to not having to have an actual meeting. People know what's going on with you and it takes 5 minutes out of your day.
//
// Missing stand ups is ok... as a whole. Missing stand ups as an individual can really peeve your team 👀
//
// ### Retrospectives
//
// Retrospectives are super important because they make sure you're not doing something that's not working for you or the team for too long. I believe the typical rule of thumb is that your retrospective should be 15 minutes for every week in the iteration. We're not going to abide by this. I want you guys to have the retrospective but model it to being college students with other things to do.
//
// So what I think that we may do is have these in class for the first 2 and you can use a dumbed down version for the rest of the semester. I may make it a part of your lab so that way it's useful to your grade.
//
// Here's what my template looks like for work.
// 
//
// With this I poll my team and make sure that nothing is bothering them. If something is I try and find ways to improve it or fix it. I usually put it into a second board so I don't forget but that's not required.
// 
//
// ### Iteration Planning
//
// This is probably one of the easiest to understand; implementing it well, on the other hand, takes some practice. We need to know what our goals are for the iteration at hand, and this meeting is where we figure some of that out. What makes this tricky is that you're trying to make sure you don't take on too much work, but also that you're doing enough work to hit your deadline comfortably.
//
// This can be paired with retrospectives but typically the iteration planning meeting is done in person so that you can hash things out then and there. Similar to the retrospectives my thought is to maybe do the first two in class and then the last 3 be part of the lab grades.
//
// There's no given time to hit for this one. It's mainly get it done kinda work.
//
// ### Showcases
// Finally there are Showcases! Showcases are usually where you show off what you've done so far. Think of these as demos so that the invested stakeholders can see the direction that the project is going in!
//
// There's not much to it BUT it's SUPER important to impress the stakeholders. Make sure your demos are on point!
// ## Git Branching and Branch Management
// [Github's article on GitHub Flow](https://guides.github.com/introduction/flow/)
// ## Introducing Sass/Scss
// Here's another thing we've already seen/talked about that we're gonna go a little more in depth on. Sass, or Syntactically Awesome Style Sheets, is a tool that makes dealing with CSS a whole lot easier by adding some elements of programming to our CSS, like variables, nesting, partials, modules, mixins, inheritance, and math!
//
// [Sass Site](https://sass-lang.com)
//
// Feel free to use sass in your project, this is something that you'll have to agree on as a team!
// ## Introducing Bootstrap, Foundation, & Materialize
//
// Another that we've touched on but will talk more about now are some front-end frameworks. Today we'll look at Bootstrap, Foundation, and Materialize. Now these all have some things in common but they also have a lot of differences. Luckily for us the styles and approaches are what's drastically different, and not the means of implementation.
//
// We're going to take a look at Bootstrap, Foundation, and Materialize and look into getting them into your projects. Looking at using CDN's vs downloading the packages yourself.
//
// [Bootstrap Site](https://getbootstrap.com)
//
// [Foundation Site](https://get.foundation)
//
// [Materialize Site](https://materializecss.com)
//
// Feel free to use these frameworks in your project, this is something that you'll have to agree on as a team!
// ## The Grid System
//
// Now one of the biggest things that the three frameworks I'm showing you have in common is the Grid System. This isn't the most complex thing to understand, but in modern web development it's very important. The Grid System makes it so that we can write one webpage that is responsible for both our mobile and desktop views.
//
// Typically these grids are 12 equally sized columns in size. There are some systems that use more but the default is typically 12.
//
//
// ## Connecting Javascript (in the front end) and Talking about the DOM
| JupyterNotebooks/Lessons/Lesson 4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# **Chapter 5 - Support Vector Machines**
#
# _This notebook contains all the sample code and solutions to the exercises in Chapter 5 - part 1._
# <table align="left">
# <td>
# <a href="https://colab.research.google.com/github/mlbvn/handson-ml2-vn/blob/main/05_support_vector_machines.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# </td>
# <td>
# <a target="_blank" href="https://kaggle.com/kernels/welcome?src=https://github.com/mlbvn/handson-ml2-vn/blob/main/05_support_vector_machines.ipynb"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" /></a>
# </td>
# </table>
# # Setup
# First, let's import a few common modules, make sure Matplotlib plots figures inline, and prepare a function to save the figures. We also check that Python 3.5 or later is installed (even though Python 2.x may still work, it is deprecated so we strongly recommend using Python 3), as well as Scikit-Learn ≥ 0.20.
# +
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
# %matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "svm"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
# -
# # Large margin classification
# The code below generates the first plots of chapter 5. The code examples for this part are discussed further below:
#
# +
from sklearn.svm import SVC
from sklearn import datasets
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = iris["target"]
setosa_or_versicolor = (y == 0) | (y == 1)
X = X[setosa_or_versicolor]
y = y[setosa_or_versicolor]
# SVM Classifier model
svm_clf = SVC(kernel="linear", C=float("inf"))
svm_clf.fit(X, y)
# +
# Bad models
x0 = np.linspace(0, 5.5, 200)
pred_1 = 5*x0 - 20
pred_2 = x0 - 1.8
pred_3 = 0.1 * x0 + 0.5
def plot_svc_decision_boundary(svm_clf, xmin, xmax):
w = svm_clf.coef_[0]
b = svm_clf.intercept_[0]
# At the decision boundary, w0*x0 + w1*x1 + b = 0
# => x1 = -w0/w1 * x0 - b/w1
x0 = np.linspace(xmin, xmax, 200)
decision_boundary = -w[0]/w[1] * x0 - b/w[1]
margin = 1/w[1]
gutter_up = decision_boundary + margin
gutter_down = decision_boundary - margin
svs = svm_clf.support_vectors_
plt.scatter(svs[:, 0], svs[:, 1], s=180, facecolors='#FFAAAA')
plt.plot(x0, decision_boundary, "k-", linewidth=2)
plt.plot(x0, gutter_up, "k--", linewidth=2)
plt.plot(x0, gutter_down, "k--", linewidth=2)
fig, axes = plt.subplots(ncols=2, figsize=(10,2.7), sharey=True)
plt.sca(axes[0])
plt.plot(x0, pred_1, "g--", linewidth=2)
plt.plot(x0, pred_2, "m-", linewidth=2)
plt.plot(x0, pred_3, "r-", linewidth=2)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs", label="Iris versicolor")
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo", label="Iris setosa")
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="upper left", fontsize=14)
plt.axis([0, 5.5, 0, 2])
plt.sca(axes[1])
plot_svc_decision_boundary(svm_clf, 0, 5.5)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs")
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo")
plt.xlabel("Petal length", fontsize=14)
plt.axis([0, 5.5, 0, 2])
save_fig("large_margin_classification_plot")
plt.show()
# -
# # Sensitivity to feature scales
# +
Xs = np.array([[1, 50], [5, 20], [3, 80], [5, 60]]).astype(np.float64)
ys = np.array([0, 0, 1, 1])
svm_clf = SVC(kernel="linear", C=100)
svm_clf.fit(Xs, ys)
plt.figure(figsize=(9,2.7))
plt.subplot(121)
plt.plot(Xs[:, 0][ys==1], Xs[:, 1][ys==1], "bo")
plt.plot(Xs[:, 0][ys==0], Xs[:, 1][ys==0], "ms")
plot_svc_decision_boundary(svm_clf, 0, 6)
plt.xlabel("$x_0$", fontsize=20)
plt.ylabel("$x_1$ ", fontsize=20, rotation=0)
plt.title("Unscaled", fontsize=16)
plt.axis([0, 6, 0, 90])
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_scaled = scaler.fit_transform(Xs)
svm_clf.fit(X_scaled, ys)
plt.subplot(122)
plt.plot(X_scaled[:, 0][ys==1], X_scaled[:, 1][ys==1], "bo")
plt.plot(X_scaled[:, 0][ys==0], X_scaled[:, 1][ys==0], "ms")
plot_svc_decision_boundary(svm_clf, -2, 2)
plt.xlabel("$x'_0$", fontsize=20)
plt.ylabel("$x'_1$ ", fontsize=20, rotation=0)
plt.title("Scaled", fontsize=16)
plt.axis([-2, 2, -2, 2])
save_fig("sensitivity_to_feature_scales_plot")
# -
# # Sensitivity to outliers
# +
X_outliers = np.array([[3.4, 1.3], [3.2, 0.8]])
y_outliers = np.array([0, 0])
Xo1 = np.concatenate([X, X_outliers[:1]], axis=0)
yo1 = np.concatenate([y, y_outliers[:1]], axis=0)
Xo2 = np.concatenate([X, X_outliers[1:]], axis=0)
yo2 = np.concatenate([y, y_outliers[1:]], axis=0)
svm_clf2 = SVC(kernel="linear", C=10**9)
svm_clf2.fit(Xo2, yo2)
fig, axes = plt.subplots(ncols=2, figsize=(10,2.7), sharey=True)
plt.sca(axes[0])
plt.plot(Xo1[:, 0][yo1==1], Xo1[:, 1][yo1==1], "bs")
plt.plot(Xo1[:, 0][yo1==0], Xo1[:, 1][yo1==0], "yo")
plt.text(0.3, 1.0, "Impossible!", fontsize=24, color="red")
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.annotate("Outlier",
xy=(X_outliers[0][0], X_outliers[0][1]),
xytext=(2.5, 1.7),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.1),
fontsize=16,
)
plt.axis([0, 5.5, 0, 2])
plt.sca(axes[1])
plt.plot(Xo2[:, 0][yo2==1], Xo2[:, 1][yo2==1], "bs")
plt.plot(Xo2[:, 0][yo2==0], Xo2[:, 1][yo2==0], "yo")
plot_svc_decision_boundary(svm_clf2, 0, 5.5)
plt.xlabel("Petal length", fontsize=14)
plt.annotate("Outlier",
xy=(X_outliers[1][0], X_outliers[1][1]),
xytext=(3.2, 0.08),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.1),
fontsize=16,
)
plt.axis([0, 5.5, 0, 2])
save_fig("sensitivity_to_outliers_plot")
plt.show()
# -
# # Large margin *vs* margin violations
# This is the first code example of chapter 5:
# +
import numpy as np
from sklearn import datasets
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = (iris["target"] == 2).astype(np.float64) # Iris virginica
svm_clf = Pipeline([
("scaler", StandardScaler()),
("linear_svc", LinearSVC(C=1, loss="hinge", random_state=42)),
])
svm_clf.fit(X, y)
# -
svm_clf.predict([[5.5, 1.7]])
# Now let's plot a comparison of different regularization settings:
# +
scaler = StandardScaler()
svm_clf1 = LinearSVC(C=1, loss="hinge", random_state=42)
svm_clf2 = LinearSVC(C=100, loss="hinge", random_state=42)
scaled_svm_clf1 = Pipeline([
("scaler", scaler),
("linear_svc", svm_clf1),
])
scaled_svm_clf2 = Pipeline([
("scaler", scaler),
("linear_svc", svm_clf2),
])
scaled_svm_clf1.fit(X, y)
scaled_svm_clf2.fit(X, y)
# +
# Convert to unscaled parameters
b1 = svm_clf1.decision_function([-scaler.mean_ / scaler.scale_])
b2 = svm_clf2.decision_function([-scaler.mean_ / scaler.scale_])
w1 = svm_clf1.coef_[0] / scaler.scale_
w2 = svm_clf2.coef_[0] / scaler.scale_
svm_clf1.intercept_ = np.array([b1])
svm_clf2.intercept_ = np.array([b2])
svm_clf1.coef_ = np.array([w1])
svm_clf2.coef_ = np.array([w2])
# Find support vectors (LinearSVC does not do this automatically)
t = y * 2 - 1
support_vectors_idx1 = (t * (X.dot(w1) + b1) < 1).ravel()
support_vectors_idx2 = (t * (X.dot(w2) + b2) < 1).ravel()
svm_clf1.support_vectors_ = X[support_vectors_idx1]
svm_clf2.support_vectors_ = X[support_vectors_idx2]
# +
fig, axes = plt.subplots(ncols=2, figsize=(10,2.7), sharey=True)
plt.sca(axes[0])
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^", label="Iris virginica")
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs", label="Iris versicolor")
plot_svc_decision_boundary(svm_clf1, 4, 5.9)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="upper left", fontsize=14)
plt.title("$C = {}$".format(svm_clf1.C), fontsize=16)
plt.axis([4, 5.9, 0.8, 2.8])
plt.sca(axes[1])
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^")
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs")
plot_svc_decision_boundary(svm_clf2, 4, 5.99)
plt.xlabel("Petal length", fontsize=14)
plt.title("$C = {}$".format(svm_clf2.C), fontsize=16)
plt.axis([4, 5.9, 0.8, 2.8])
save_fig("regularization_plot")
# -
# # Non-linear classification
# +
X1D = np.linspace(-4, 4, 9).reshape(-1, 1)
X2D = np.c_[X1D, X1D**2]
y = np.array([0, 0, 1, 1, 1, 1, 1, 0, 0])
plt.figure(figsize=(10, 3))
plt.subplot(121)
plt.grid(True, which='both')
plt.axhline(y=0, color='k')
plt.plot(X1D[:, 0][y==0], np.zeros(4), "bs")
plt.plot(X1D[:, 0][y==1], np.zeros(5), "g^")
plt.gca().get_yaxis().set_ticks([])
plt.xlabel(r"$x_1$", fontsize=20)
plt.axis([-4.5, 4.5, -0.2, 0.2])
plt.subplot(122)
plt.grid(True, which='both')
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.plot(X2D[:, 0][y==0], X2D[:, 1][y==0], "bs")
plt.plot(X2D[:, 0][y==1], X2D[:, 1][y==1], "g^")
plt.xlabel(r"$x_1$", fontsize=20)
plt.ylabel(r"$x_2$ ", fontsize=20, rotation=0)
plt.gca().get_yaxis().set_ticks([0, 4, 8, 12, 16])
plt.plot([-4.5, 4.5], [6.5, 6.5], "r--", linewidth=3)
plt.axis([-4.5, 4.5, -1, 17])
plt.subplots_adjust(right=1)
save_fig("higher_dimensions_plot", tight_layout=False)
plt.show()
# +
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, noise=0.15, random_state=42)
def plot_dataset(X, y, axes):
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs")
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^")
plt.axis(axes)
plt.grid(True, which='both')
plt.xlabel(r"$x_1$", fontsize=20)
plt.ylabel(r"$x_2$", fontsize=20, rotation=0)
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
plt.show()
# +
from sklearn.datasets import make_moons
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
polynomial_svm_clf = Pipeline([
("poly_features", PolynomialFeatures(degree=3)),
("scaler", StandardScaler()),
("svm_clf", LinearSVC(C=10, loss="hinge", random_state=42))
])
polynomial_svm_clf.fit(X, y)
# +
def plot_predictions(clf, axes):
x0s = np.linspace(axes[0], axes[1], 100)
x1s = np.linspace(axes[2], axes[3], 100)
x0, x1 = np.meshgrid(x0s, x1s)
X = np.c_[x0.ravel(), x1.ravel()]
y_pred = clf.predict(X).reshape(x0.shape)
y_decision = clf.decision_function(X).reshape(x0.shape)
plt.contourf(x0, x1, y_pred, cmap=plt.cm.brg, alpha=0.2)
plt.contourf(x0, x1, y_decision, cmap=plt.cm.brg, alpha=0.1)
plot_predictions(polynomial_svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
save_fig("moons_polynomial_svc_plot")
plt.show()
# +
from sklearn.svm import SVC
poly_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="poly", degree=3, coef0=1, C=5))
])
poly_kernel_svm_clf.fit(X, y)
# -
poly100_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="poly", degree=10, coef0=100, C=5))
])
poly100_kernel_svm_clf.fit(X, y)
# +
fig, axes = plt.subplots(ncols=2, figsize=(10.5, 4), sharey=True)
plt.sca(axes[0])
plot_predictions(poly_kernel_svm_clf, [-1.5, 2.45, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.4, -1, 1.5])
plt.title(r"$d=3, r=1, C=5$", fontsize=18)
plt.sca(axes[1])
plot_predictions(poly100_kernel_svm_clf, [-1.5, 2.45, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.4, -1, 1.5])
plt.title(r"$d=10, r=100, C=5$", fontsize=18)
plt.ylabel("")
save_fig("moons_kernelized_polynomial_svc_plot")
plt.show()
# +
def gaussian_rbf(x, landmark, gamma):
return np.exp(-gamma * np.linalg.norm(x - landmark, axis=1)**2)
gamma = 0.3
x1s = np.linspace(-4.5, 4.5, 200).reshape(-1, 1)
x2s = gaussian_rbf(x1s, -2, gamma)
x3s = gaussian_rbf(x1s, 1, gamma)
XK = np.c_[gaussian_rbf(X1D, -2, gamma), gaussian_rbf(X1D, 1, gamma)]
yk = np.array([0, 0, 1, 1, 1, 1, 1, 0, 0])
plt.figure(figsize=(10.5, 4))
plt.subplot(121)
plt.grid(True, which='both')
plt.axhline(y=0, color='k')
plt.scatter(x=[-2, 1], y=[0, 0], s=150, alpha=0.5, c="red")
plt.plot(X1D[:, 0][yk==0], np.zeros(4), "bs")
plt.plot(X1D[:, 0][yk==1], np.zeros(5), "g^")
plt.plot(x1s, x2s, "g--")
plt.plot(x1s, x3s, "b:")
plt.gca().get_yaxis().set_ticks([0, 0.25, 0.5, 0.75, 1])
plt.xlabel(r"$x_1$", fontsize=20)
plt.ylabel(r"Similarity", fontsize=14)
plt.annotate(r'$\mathbf{x}$',
xy=(X1D[3, 0], 0),
xytext=(-0.5, 0.20),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.1),
fontsize=18,
)
plt.text(-2, 0.9, "$x_2$", ha="center", fontsize=20)
plt.text(1, 0.9, "$x_3$", ha="center", fontsize=20)
plt.axis([-4.5, 4.5, -0.1, 1.1])
plt.subplot(122)
plt.grid(True, which='both')
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.plot(XK[:, 0][yk==0], XK[:, 1][yk==0], "bs")
plt.plot(XK[:, 0][yk==1], XK[:, 1][yk==1], "g^")
plt.xlabel(r"$x_2$", fontsize=20)
plt.ylabel(r"$x_3$ ", fontsize=20, rotation=0)
plt.annotate(r'$\phi\left(\mathbf{x}\right)$',
xy=(XK[3, 0], XK[3, 1]),
xytext=(0.65, 0.50),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.1),
fontsize=18,
)
plt.plot([-0.1, 1.1], [0.57, -0.1], "r--", linewidth=3)
plt.axis([-0.1, 1.1, -0.1, 1.1])
plt.subplots_adjust(right=1)
save_fig("kernel_method_plot")
plt.show()
# -
x1_example = X1D[3, 0]
for landmark in (-2, 1):
k = gaussian_rbf(np.array([[x1_example]]), np.array([[landmark]]), gamma)
print("Phi({}, {}) = {}".format(x1_example, landmark, k))
rbf_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="rbf", gamma=5, C=0.001))
])
rbf_kernel_svm_clf.fit(X, y)
# +
from sklearn.svm import SVC
gamma1, gamma2 = 0.1, 5
C1, C2 = 0.001, 1000
hyperparams = (gamma1, C1), (gamma1, C2), (gamma2, C1), (gamma2, C2)
svm_clfs = []
for gamma, C in hyperparams:
rbf_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="rbf", gamma=gamma, C=C))
])
rbf_kernel_svm_clf.fit(X, y)
svm_clfs.append(rbf_kernel_svm_clf)
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(10.5, 7), sharex=True, sharey=True)
for i, svm_clf in enumerate(svm_clfs):
plt.sca(axes[i // 2, i % 2])
plot_predictions(svm_clf, [-1.5, 2.45, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.45, -1, 1.5])
gamma, C = hyperparams[i]
plt.title(r"$\gamma = {}, C = {}$".format(gamma, C), fontsize=16)
if i in (0, 1):
plt.xlabel("")
if i in (1, 3):
plt.ylabel("")
save_fig("moons_rbf_svc_plot")
plt.show()
# -
# # Regression
#
np.random.seed(42)
m = 50
X = 2 * np.random.rand(m, 1)
y = (4 + 3 * X + np.random.randn(m, 1)).ravel()
# +
from sklearn.svm import LinearSVR
svm_reg = LinearSVR(epsilon=1.5, random_state=42)
svm_reg.fit(X, y)
# +
svm_reg1 = LinearSVR(epsilon=1.5, random_state=42)
svm_reg2 = LinearSVR(epsilon=0.5, random_state=42)
svm_reg1.fit(X, y)
svm_reg2.fit(X, y)
def find_support_vectors(svm_reg, X, y):
y_pred = svm_reg.predict(X)
off_margin = (np.abs(y - y_pred) >= svm_reg.epsilon)
return np.argwhere(off_margin)
svm_reg1.support_ = find_support_vectors(svm_reg1, X, y)
svm_reg2.support_ = find_support_vectors(svm_reg2, X, y)
eps_x1 = 1
eps_y_pred = svm_reg1.predict([[eps_x1]])
# +
def plot_svm_regression(svm_reg, X, y, axes):
x1s = np.linspace(axes[0], axes[1], 100).reshape(100, 1)
y_pred = svm_reg.predict(x1s)
plt.plot(x1s, y_pred, "k-", linewidth=2, label=r"$\hat{y}$")
plt.plot(x1s, y_pred + svm_reg.epsilon, "k--")
plt.plot(x1s, y_pred - svm_reg.epsilon, "k--")
plt.scatter(X[svm_reg.support_], y[svm_reg.support_], s=180, facecolors='#FFAAAA')
plt.plot(X, y, "bo")
plt.xlabel(r"$x_1$", fontsize=18)
plt.legend(loc="upper left", fontsize=18)
plt.axis(axes)
fig, axes = plt.subplots(ncols=2, figsize=(9, 4), sharey=True)
plt.sca(axes[0])
plot_svm_regression(svm_reg1, X, y, [0, 2, 3, 11])
plt.title(r"$\epsilon = {}$".format(svm_reg1.epsilon), fontsize=18)
plt.ylabel(r"$y$", fontsize=18, rotation=0)
#plt.plot([eps_x1, eps_x1], [eps_y_pred, eps_y_pred - svm_reg1.epsilon], "k-", linewidth=2)
plt.annotate(
'', xy=(eps_x1, eps_y_pred), xycoords='data',
xytext=(eps_x1, eps_y_pred - svm_reg1.epsilon),
textcoords='data', arrowprops={'arrowstyle': '<->', 'linewidth': 1.5}
)
plt.text(0.91, 5.6, r"$\epsilon$", fontsize=20)
plt.sca(axes[1])
plot_svm_regression(svm_reg2, X, y, [0, 2, 3, 11])
plt.title(r"$\epsilon = {}$".format(svm_reg2.epsilon), fontsize=18)
save_fig("svm_regression_plot")
plt.show()
# -
np.random.seed(42)
m = 100
X = 2 * np.random.rand(m, 1) - 1
y = (0.2 + 0.1 * X + 0.5 * X**2 + np.random.randn(m, 1)/10).ravel()
# **Note**: to be up to date with the current version, we set gamma="scale" since that is the default value in Scikit-Learn 0.22.
# +
from sklearn.svm import SVR
svm_poly_reg = SVR(kernel="poly", degree=2, C=100, epsilon=0.1, gamma="scale")
svm_poly_reg.fit(X, y)
# +
from sklearn.svm import SVR
svm_poly_reg1 = SVR(kernel="poly", degree=2, C=100, epsilon=0.1, gamma="scale")
svm_poly_reg2 = SVR(kernel="poly", degree=2, C=0.01, epsilon=0.1, gamma="scale")
svm_poly_reg1.fit(X, y)
svm_poly_reg2.fit(X, y)
# -
fig, axes = plt.subplots(ncols=2, figsize=(9, 4), sharey=True)
plt.sca(axes[0])
plot_svm_regression(svm_poly_reg1, X, y, [-1, 1, 0, 1])
plt.title(r"$degree={}, C={}, \epsilon = {}$".format(svm_poly_reg1.degree, svm_poly_reg1.C, svm_poly_reg1.epsilon), fontsize=18)
plt.ylabel(r"$y$", fontsize=18, rotation=0)
plt.sca(axes[1])
plot_svm_regression(svm_poly_reg2, X, y, [-1, 1, 0, 1])
plt.title(r"$degree={}, C={}, \epsilon = {}$".format(svm_poly_reg2.degree, svm_poly_reg2.C, svm_poly_reg2.epsilon), fontsize=18)
save_fig("svm_with_polynomial_kernel_plot")
plt.show()
# # Explaining the model
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = (iris["target"] == 2).astype(np.float64) # Iris virginica
# +
from mpl_toolkits.mplot3d import Axes3D
def plot_3D_decision_function(ax, w, b, x1_lim=[4, 6], x2_lim=[0.8, 2.8]):
x1_in_bounds = (X[:, 0] > x1_lim[0]) & (X[:, 0] < x1_lim[1])
X_crop = X[x1_in_bounds]
y_crop = y[x1_in_bounds]
x1s = np.linspace(x1_lim[0], x1_lim[1], 20)
x2s = np.linspace(x2_lim[0], x2_lim[1], 20)
x1, x2 = np.meshgrid(x1s, x2s)
xs = np.c_[x1.ravel(), x2.ravel()]
df = (xs.dot(w) + b).reshape(x1.shape)
m = 1 / np.linalg.norm(w)
boundary_x2s = -x1s*(w[0]/w[1])-b/w[1]
margin_x2s_1 = -x1s*(w[0]/w[1])-(b-1)/w[1]
margin_x2s_2 = -x1s*(w[0]/w[1])-(b+1)/w[1]
ax.plot_surface(x1s, x2, np.zeros_like(x1),
color="b", alpha=0.2, cstride=100, rstride=100)
ax.plot(x1s, boundary_x2s, 0, "k-", linewidth=2, label=r"$h=0$")
ax.plot(x1s, margin_x2s_1, 0, "k--", linewidth=2, label=r"$h=\pm 1$")
ax.plot(x1s, margin_x2s_2, 0, "k--", linewidth=2)
ax.plot(X_crop[:, 0][y_crop==1], X_crop[:, 1][y_crop==1], 0, "g^")
ax.plot_wireframe(x1, x2, df, alpha=0.3, color="k")
ax.plot(X_crop[:, 0][y_crop==0], X_crop[:, 1][y_crop==0], 0, "bs")
ax.axis(x1_lim + x2_lim)
ax.text(4.5, 2.5, 3.8, "Decision function $h$", fontsize=16)
ax.set_xlabel(r"Petal length", fontsize=16, labelpad=10)
ax.set_ylabel(r"Petal width", fontsize=16, labelpad=10)
ax.set_zlabel(r"$h = \mathbf{w}^T \mathbf{x} + b$", fontsize=18, labelpad=5)
ax.legend(loc="upper left", fontsize=16)
fig = plt.figure(figsize=(11, 6))
ax1 = fig.add_subplot(111, projection='3d')
plot_3D_decision_function(ax1, w=svm_clf2.coef_[0], b=svm_clf2.intercept_[0])
save_fig("iris_3D_plot")
plt.show()
# -
# # A small weight vector results in a large margin
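# As a reminder of why this holds: the decision boundary is $\mathbf{w}^T\mathbf{x}+b=0$ and the two margin lines are $\mathbf{w}^T\mathbf{x}+b=\pm 1$, so the width of the margin is
# $$\text{margin} = \frac{2}{\lVert\mathbf{w}\rVert}\,,$$
# which means that halving the weight vector doubles the margin, as the figure below illustrates.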
# +
def plot_2D_decision_function(w, b, ylabel=True, x1_lim=[-3, 3]):
x1 = np.linspace(x1_lim[0], x1_lim[1], 200)
y = w * x1 + b
m = 1 / w
plt.plot(x1, y)
plt.plot(x1_lim, [1, 1], "k:")
plt.plot(x1_lim, [-1, -1], "k:")
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.plot([m, m], [0, 1], "k--")
plt.plot([-m, -m], [0, -1], "k--")
plt.plot([-m, m], [0, 0], "k-o", linewidth=3)
plt.axis(x1_lim + [-2, 2])
plt.xlabel(r"$x_1$", fontsize=16)
if ylabel:
plt.ylabel(r"$w_1 x_1$ ", rotation=0, fontsize=16)
plt.title(r"$w_1 = {}$".format(w), fontsize=16)
fig, axes = plt.subplots(ncols=2, figsize=(9, 3.2), sharey=True)
plt.sca(axes[0])
plot_2D_decision_function(1, 0)
plt.sca(axes[1])
plot_2D_decision_function(0.5, 0, ylabel=False)
save_fig("small_w_large_margin_plot")
plt.show()
# +
from sklearn.svm import SVC
from sklearn import datasets
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = (iris["target"] == 2).astype(np.float64) # Iris virginica
svm_clf = SVC(kernel="linear", C=1)
svm_clf.fit(X, y)
svm_clf.predict([[5.3, 1.3]])
# -
# # Hinge loss
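# The function plotted below is the hinge loss $\max(0, 1-t)$, where for a linear SVM $t$ is the instance's signed score times its target class, $t = y\,(\mathbf{w}^T\mathbf{x}+b)$ with $y \in \{-1,+1\}$: the loss is zero when the instance is on the correct side of the margin ($t \geq 1$) and grows linearly with the margin violation otherwise.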
# +
t = np.linspace(-2, 4, 200)
h = np.where(1 - t < 0, 0, 1 - t) # max(0, 1-t)
plt.figure(figsize=(5,2.8))
plt.plot(t, h, "b-", linewidth=2, label="$max(0, 1 - t)$")
plt.grid(True, which='both')
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.yticks(np.arange(-1, 2.5, 1))
plt.xlabel("$t$", fontsize=16)
plt.axis([-2, 4, -1, 2.5])
plt.legend(loc="upper right", fontsize=16)
save_fig("hinge_plot")
plt.show()
# -
# # Extra material
# ## Training time
X, y = make_moons(n_samples=1000, noise=0.4, random_state=42)
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs")
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^")
# +
import time
tol = 0.1
tols = []
times = []
for i in range(10):
svm_clf = SVC(kernel="poly", gamma=3, C=10, tol=tol, verbose=1)
t1 = time.time()
svm_clf.fit(X, y)
t2 = time.time()
times.append(t2-t1)
tols.append(tol)
print(i, tol, t2-t1)
tol /= 10
plt.semilogx(tols, times, "bo-")
plt.xlabel("Tolerance", fontsize=16)
plt.ylabel("Time (seconds)", fontsize=16)
plt.grid(True)
plt.show()
# -
# ## Linear SVM classifier implementation using Batch Gradient Descent
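# For reference, the cost minimized by the `MyLinearSVC` implementation below is the usual soft-margin hinge objective
# $$J(\mathbf{w}, b) = \tfrac{1}{2}\,\mathbf{w}^T\mathbf{w} \;+\; C \sum_{i\,:\,t^{(i)} s^{(i)} < 1} \bigl(1 - t^{(i)} s^{(i)}\bigr), \qquad s^{(i)} = \mathbf{w}^T\mathbf{x}^{(i)} + b,\quad t^{(i)} \in \{-1, +1\},$$
# and each epoch takes one gradient step of this cost, with the sum running over the current margin violators (the "support vectors" in the code).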
# Training set
X = iris["data"][:, (2, 3)] # petal length, petal width
y = (iris["target"] == 2).astype(np.float64).reshape(-1, 1) # Iris virginica
# +
from sklearn.base import BaseEstimator
class MyLinearSVC(BaseEstimator):
def __init__(self, C=1, eta0=1, eta_d=10000, n_epochs=1000, random_state=None):
self.C = C
self.eta0 = eta0
self.n_epochs = n_epochs
self.random_state = random_state
self.eta_d = eta_d
def eta(self, epoch):
return self.eta0 / (epoch + self.eta_d)
def fit(self, X, y):
# Random initialization
if self.random_state:
np.random.seed(self.random_state)
w = np.random.randn(X.shape[1], 1) # n feature weights
b = 0
m = len(X)
t = y * 2 - 1 # -1 if t==0, +1 if t==1
X_t = X * t
self.Js=[]
# Training
for epoch in range(self.n_epochs):
support_vectors_idx = (X_t.dot(w) + t * b < 1).ravel()
X_t_sv = X_t[support_vectors_idx]
t_sv = t[support_vectors_idx]
J = 1/2 * np.sum(w * w) + self.C * (np.sum(1 - X_t_sv.dot(w)) - b * np.sum(t_sv))
self.Js.append(J)
w_gradient_vector = w - self.C * np.sum(X_t_sv, axis=0).reshape(-1, 1)
b_derivative = -self.C * np.sum(t_sv)
w = w - self.eta(epoch) * w_gradient_vector
b = b - self.eta(epoch) * b_derivative
self.intercept_ = np.array([b])
self.coef_ = np.array([w])
support_vectors_idx = (X_t.dot(w) + t * b < 1).ravel()
self.support_vectors_ = X[support_vectors_idx]
return self
def decision_function(self, X):
return X.dot(self.coef_[0]) + self.intercept_[0]
def predict(self, X):
return (self.decision_function(X) >= 0).astype(np.float64)
C=2
svm_clf = MyLinearSVC(C=C, eta0 = 10, eta_d = 1000, n_epochs=60000, random_state=2)
svm_clf.fit(X, y)
svm_clf.predict(np.array([[5, 2], [4, 1]]))
# -
plt.plot(range(svm_clf.n_epochs), svm_clf.Js)
plt.axis([0, svm_clf.n_epochs, 0, 100])
print(svm_clf.intercept_, svm_clf.coef_)
svm_clf2 = SVC(kernel="linear", C=C)
svm_clf2.fit(X, y.ravel())
print(svm_clf2.intercept_, svm_clf2.coef_)
# +
yr = y.ravel()
fig, axes = plt.subplots(ncols=2, figsize=(11, 3.2), sharey=True)
plt.sca(axes[0])
plt.plot(X[:, 0][yr==1], X[:, 1][yr==1], "g^", label="Iris virginica")
plt.plot(X[:, 0][yr==0], X[:, 1][yr==0], "bs", label="Not Iris virginica")
plot_svc_decision_boundary(svm_clf, 4, 6)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.title("MyLinearSVC", fontsize=14)
plt.axis([4, 6, 0.8, 2.8])
plt.legend(loc="upper left")
plt.sca(axes[1])
plt.plot(X[:, 0][yr==1], X[:, 1][yr==1], "g^")
plt.plot(X[:, 0][yr==0], X[:, 1][yr==0], "bs")
plot_svc_decision_boundary(svm_clf2, 4, 6)
plt.xlabel("Petal length", fontsize=14)
plt.title("SVC", fontsize=14)
plt.axis([4, 6, 0.8, 2.8])
# +
from sklearn.linear_model import SGDClassifier
sgd_clf = SGDClassifier(loss="hinge", alpha=0.017, max_iter=1000, tol=1e-3, random_state=42)
sgd_clf.fit(X, y.ravel())
m = len(X)
t = y * 2 - 1 # -1 if t==0, +1 if t==1
X_b = np.c_[np.ones((m, 1)), X] # Add bias input x0=1
X_b_t = X_b * t
sgd_theta = np.r_[sgd_clf.intercept_[0], sgd_clf.coef_[0]]
print(sgd_theta)
support_vectors_idx = (X_b_t.dot(sgd_theta) < 1).ravel()
sgd_clf.support_vectors_ = X[support_vectors_idx]
sgd_clf.C = C
plt.figure(figsize=(5.5,3.2))
plt.plot(X[:, 0][yr==1], X[:, 1][yr==1], "g^")
plt.plot(X[:, 0][yr==0], X[:, 1][yr==0], "bs")
plot_svc_decision_boundary(sgd_clf, 4, 6)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.title("SGDClassifier", fontsize=14)
plt.axis([4, 6, 0.8, 2.8])
# -
# # Exercise solutions
# ## 1. to 7.
# See appendix A.
# ## 8.
# _Exercise: train a `LinearSVC` on a linearly separable dataset. Then train an `SVC` and a `SGDClassifier` on the same dataset.
# See if you can get them to produce roughly the same model._
# Let's use the Iris dataset: the Iris Setosa and Iris Versicolor classes are linearly separable.
# +
from sklearn import datasets
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = iris["target"]
setosa_or_versicolor = (y == 0) | (y == 1)
X = X[setosa_or_versicolor]
y = y[setosa_or_versicolor]
# +
from sklearn.svm import SVC, LinearSVC
from sklearn.linear_model import SGDClassifier
from sklearn.preprocessing import StandardScaler
C = 5
alpha = 1 / (C * len(X))
lin_clf = LinearSVC(loss="hinge", C=C, random_state=42)
svm_clf = SVC(kernel="linear", C=C)
sgd_clf = SGDClassifier(loss="hinge", learning_rate="constant", eta0=0.001, alpha=alpha,
max_iter=1000, tol=1e-3, random_state=42)
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
lin_clf.fit(X_scaled, y)
svm_clf.fit(X_scaled, y)
sgd_clf.fit(X_scaled, y)
print("LinearSVC: ", lin_clf.intercept_, lin_clf.coef_)
print("SVC: ", svm_clf.intercept_, svm_clf.coef_)
print("SGDClassifier(alpha={:.5f}):".format(sgd_clf.alpha), sgd_clf.intercept_, sgd_clf.coef_)
# -
# Let's plot the decision boundaries of these three models:
# +
# Compute the slope and bias of each decision boundary
w1 = -lin_clf.coef_[0, 0]/lin_clf.coef_[0, 1]
b1 = -lin_clf.intercept_[0]/lin_clf.coef_[0, 1]
w2 = -svm_clf.coef_[0, 0]/svm_clf.coef_[0, 1]
b2 = -svm_clf.intercept_[0]/svm_clf.coef_[0, 1]
w3 = -sgd_clf.coef_[0, 0]/sgd_clf.coef_[0, 1]
b3 = -sgd_clf.intercept_[0]/sgd_clf.coef_[0, 1]
# Transform the decision boundary lines back to the original scale
line1 = scaler.inverse_transform([[-10, -10 * w1 + b1], [10, 10 * w1 + b1]])
line2 = scaler.inverse_transform([[-10, -10 * w2 + b2], [10, 10 * w2 + b2]])
line3 = scaler.inverse_transform([[-10, -10 * w3 + b3], [10, 10 * w3 + b3]])
# Plot all three decision boundaries
plt.figure(figsize=(11, 4))
plt.plot(line1[:, 0], line1[:, 1], "k:", label="LinearSVC")
plt.plot(line2[:, 0], line2[:, 1], "b--", linewidth=2, label="SVC")
plt.plot(line3[:, 0], line3[:, 1], "r-", label="SGDClassifier")
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs") # label="Iris versicolor"
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo") # label="Iris setosa"
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="upper center", fontsize=14)
plt.axis([0, 5.5, 0, 2])
plt.show()
# -
# They look pretty close!
# ## 9.
# _Exercise: train an SVM classifier on the MNIST dataset.
# Since SVM classifiers are binary, you will need to use one-versus-rest to classify all 10 digits. You may want to tune the hyperparameters using small validation sets to speed up the process.
# What accuracy can you reach?_
#
#
# First, let's load the dataset and split it into a training set and a test set.
# We could use `train_test_split()`, but people usually just take the first 60,000 instances for the training set and the last 10,000 for the test set (this makes it possible to compare your model's performance with others).
# **Note:** since Scikit-Learn 0.24, `fetch_openml()` returns a Pandas `DataFrame` by default. To avoid this, we set `as_frame=False`.
# +
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1, cache=True, as_frame=False)
X = mnist["data"]
y = mnist["target"].astype(np.uint8)
X_train = X[:60000]
y_train = y[:60000]
X_test = X[60000:]
y_test = y[60000:]
# -
# Many training algorithms are sensitive to the order of the training instances, so it's generally good practice to shuffle them before training.
# However, since this dataset is already shuffled, we do not need to do it.
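# (If the dataset were not already shuffled, one simple way to shuffle it is shown in the cell below; the `shuffle_idx` and `X_train_shuffled` names are just for illustration, and we keep working with the original arrays so the results further down are unchanged.)
# +
shuffle_idx = np.random.permutation(len(X_train))
X_train_shuffled, y_train_shuffled = X_train[shuffle_idx], y_train[shuffle_idx]
# -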
# Let's start simple, with a linear SVM classifier.
# It will automatically use the One-vs-All (also called One-vs-Rest) strategy, so there's nothing special we need to do.
# Easy!
#
# **Note:** this may take a few minutes depending on your hardware.
lin_clf = LinearSVC(random_state=42)
lin_clf.fit(X_train, y_train)
# Let's make predictions on the training set and measure the accuracy (we don't want to measure it on the test set yet, since we have not selected and trained the final model yet):
# +
from sklearn.metrics import accuracy_score
y_pred = lin_clf.predict(X_train)
accuracy_score(y_train, y_pred)
# -
# Okay, 89.5% accuracy on MNIST is a pretty bad result.
# This linear model is certainly too simple for MNIST, but perhaps we just need to scale the data first:
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train.astype(np.float32))
X_test_scaled = scaler.transform(X_test.astype(np.float32))
# **Note:** this may take a few minutes depending on your hardware.
lin_clf = LinearSVC(random_state=42)
lin_clf.fit(X_train_scaled, y_train)
y_pred = lin_clf.predict(X_train_scaled)
accuracy_score(y_train, y_pred)
# That's much better (we cut the error rate by about 25%), but still not great for MNIST.
# If we want to use an SVM, we will have to use a kernel. Let's try an `SVC` with an RBF kernel (the default).
# **Note**: to be up to date with the current version, we set `gamma="scale"` since that is the default value in Scikit-Learn 0.22.
svm_clf = SVC(gamma="scale")
svm_clf.fit(X_train_scaled[:10000], y_train[:10000])
y_pred = svm_clf.predict(X_train_scaled)
accuracy_score(y_train, y_pred)
# That looks promising: we get better performance even though we trained the model on 6 times less data.
# Let's tune the hyperparameters using a randomized search with cross-validation.
# We will do this on a small dataset just to speed up the process:
# +
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import reciprocal, uniform
param_distributions = {"gamma": reciprocal(0.001, 0.1), "C": uniform(1, 10)}
rnd_search_cv = RandomizedSearchCV(svm_clf, param_distributions, n_iter=10, verbose=2, cv=3)
rnd_search_cv.fit(X_train_scaled[:1000], y_train[:1000])
# -
rnd_search_cv.best_estimator_
rnd_search_cv.best_score_
# This looks pretty low, but remember that we only trained the model on 1,000 instances.
# Let's retrain the best estimator on the whole training set.
# **Note:** this may take a few minutes depending on your hardware.
rnd_search_cv.best_estimator_.fit(X_train_scaled, y_train)
y_pred = rnd_search_cv.best_estimator_.predict(X_train_scaled)
accuracy_score(y_train, y_pred)
# Ah, this looks good! Let's select this model. Now we can test it on the test set:
y_pred = rnd_search_cv.best_estimator_.predict(X_test_scaled)
accuracy_score(y_test, y_pred)
# Not too bad, but apparently the model is overfitting slightly.
# It's tempting to tweak the hyperparameters a bit more (e.g. decreasing `C` and/or `gamma`), but we would run a high risk of overfitting the test set.
# Other people have found that the hyperparameters `C=5` and `gamma=0.005` yield even better performance (over 98% accuracy).
# By running the randomized search for longer and on a larger part of the training set, you may be able to find these values as well.
# ## 10.
# _Exercise: train an SVM regressor on the California housing dataset._
# Let's load the dataset using Scikit-Learn's `fetch_california_housing()` function:
# +
from sklearn.datasets import fetch_california_housing
housing = fetch_california_housing()
X = housing["data"]
y = housing["target"]
# -
# Split it into a training set and a test set:
# +
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# -
# Don't forget to scale the data:
# +
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
# -
# Let's train a simple `LinearSVR` first:
# +
from sklearn.svm import LinearSVR
lin_svr = LinearSVR(random_state=42)
lin_svr.fit(X_train_scaled, y_train)
# -
# Let's see how it performs on the training set:
# +
from sklearn.metrics import mean_squared_error
y_pred = lin_svr.predict(X_train_scaled)
mse = mean_squared_error(y_train, y_pred)
mse
# -
# Now let's look at the RMSE:
np.sqrt(mse)
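# For reference, the value printed above is the root mean squared error of the predictions on the training set:
# $$\mathrm{RMSE} = \sqrt{\frac{1}{m}\sum_{i=1}^{m}\bigl(\hat{y}^{(i)} - y^{(i)}\bigr)^2}$$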
# In this training set, the targets are tens of thousands of dollars. The RMSE gives a rough idea of the kind of error you should expect (with a higher weight on large errors): so with this model we can expect errors somewhere around $10,000. Not great.
# Let's see if we can do better with an RBF kernel. We will use a randomized search with cross-validation to find the appropriate hyperparameter values for `C` and `gamma`:
#
# +
from sklearn.svm import SVR
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import reciprocal, uniform
param_distributions = {"gamma": reciprocal(0.001, 0.1), "C": uniform(1, 10)}
rnd_search_cv = RandomizedSearchCV(SVR(), param_distributions, n_iter=10, verbose=2, cv=3, random_state=42)
rnd_search_cv.fit(X_train_scaled, y_train)
# -
rnd_search_cv.best_estimator_
# Now let's measure the RMSE on the training set:
y_pred = rnd_search_cv.best_estimator_.predict(X_train_scaled)
mse = mean_squared_error(y_train, y_pred)
np.sqrt(mse)
# Let's select this model and evaluate it on the test set:
y_pred = rnd_search_cv.best_estimator_.predict(X_test_scaled)
mse = mean_squared_error(y_test, y_pred)
np.sqrt(mse)
| 05_support_vector_machines.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="U4eQ_Ck5wStf" outputId="1e3649a3-5dda-46f5-b432-a6601469d7ba"
from google.colab import drive
drive.mount('/content/drive')
# + id="VhZfBQFlwEfd"
import os
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/content/drive/MyDrive/spotifyclassifier-b2d573dee4ec.json"
from google.cloud import bigquery
# Construct a BigQuery client object.
client = bigquery.Client()
# + id="_sNyGVcqv7Pc"
query_string = """
SELECT * from
spotifyclassifier.features.features_updated
"""
dataframe = (
client.query(query_string)
.result()
.to_dataframe(
)
)
# + [markdown] id="z7Yx0pFxwuzR"
# Get pitch values for each song, split them into separate columns
# + colab={"base_uri": "https://localhost:8080/", "height": 424} id="hce143b9wmmB" outputId="be4a662d-c9fa-451f-8d9a-a27722d9a42b"
import pandas as pd
list_pitches = []
for i in range(len(dataframe)):
pitches = []
for j in dataframe['avg_pitches'][i]['list']:
pitches.append(j['item'])
list_pitches.append(pitches)
split_df = pd.DataFrame(list_pitches, columns=["pitch" + str(i) for i in range(12)])
split_df
# + id="rvuffGelwqnd"
dataframe = pd.concat([dataframe, split_df], axis=1)
dataframe = dataframe.drop('avg_pitches', axis=1)
# + [markdown] id="OJ6tRTpewhcG"
# Get timbre values for each song, split them into separate columns
# + colab={"base_uri": "https://localhost:8080/", "height": 424} id="aDiFRgjuw4cK" outputId="3a32265e-aea5-4d47-bcb5-1276c8560ea0"
list_timbre = []
for i in range(len(dataframe)):
timbre = []
for j in dataframe['avg_timbre'][i]['list']:
timbre.append(j['item'])
list_timbre.append(timbre)
split_df = pd.DataFrame(list_timbre, columns=["timbre" + str(i) for i in range(12)])
split_df
# + colab={"base_uri": "https://localhost:8080/", "height": 488} id="dQz7D-0Tw_kL" outputId="5572bf71-c3fa-4231-b942-ac56a26478bb"
dataframe = pd.concat([dataframe, split_df], axis=1)
dataframe = dataframe.drop('avg_timbre', axis=1)
dataframe
# + [markdown] id="mkR9ly27wmoF"
# Delete rows with more than 5 empty column values
# + id="pHl0NEBewDfl"
dataframe = dataframe.loc[dataframe.eq(0).sum(1).le(5),]
dataframe
# + [markdown] id="cupNAarkwsd5"
# Drop songs with multiple supergenre labels
# + id="8B50up5owLsr"
grouped_df = dataframe.groupby("id")
grouped_df = grouped_df.agg({"super_genre": "nunique"})
grouped_df = grouped_df.reset_index()
grouped_df = grouped_df[grouped_df['super_genre'] == 1]
keep = grouped_df['id'].tolist()
dataframe = dataframe[dataframe['id'].isin(keep)]
# + [markdown] id="FCj3fg-yw3RI"
# Drop songs with multiple subgenre labels
# + id="Qtl2FY2uwNEE"
grouped_df = dataframe.groupby("id")
grouped_df = grouped_df.agg({"subgenre": "nunique"})
grouped_df = grouped_df.reset_index()
grouped_df = grouped_df[grouped_df['subgenre'] == 1]
keep = grouped_df['id'].tolist()
dataframe = dataframe[dataframe['id'].isin(keep)]
# + [markdown] id="TSekoXQjw6a5"
# Load dataframe into BigQuery table
# + id="MCSaGSpIxB5b"
from google.cloud import bigquery
# Construct a BigQuery client object.
client = bigquery.Client()
table_id = "spotifyclassifier.features.features_updated_pit_tim_cols"
# + id="lZbJduvxv-WB"
job = client.load_table_from_dataframe(
dataframe, table_id
) # Make an API request.
job.result() # Wait for the job to complete.
# + id="tcTaglgxv_sq"
table = client.get_table(table_id) # Make an API request.
print(
"Loaded {} rows and {} columns to {}".format(
table.num_rows, len(table.schema), table_id
)
)
| DataProcessing/ProcessNewData.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ___
#
# <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
# ___
# # K Means Clustering Project - Solutions
#
# For this project we will attempt to use KMeans Clustering to cluster Universities into two groups, Private and Public.
#
# ___
# It is **very important to note, we actually have the labels for this data set, but we will NOT use them for the KMeans clustering algorithm, since that is an unsupervised learning algorithm.**
#
# When using the Kmeans algorithm under normal circumstances, it is because you don't have labels. In this case we will use the labels to try to get an idea of how well the algorithm performed, but you won't usually do this for Kmeans, so the classification report and confusion matrix at the end of this project don't truly make sense in a real-world setting!
# ___
#
# ## The Data
#
# We will use a data frame with 777 observations on the following 18 variables.
# * Private A factor with levels No and Yes indicating private or public university
# * Apps Number of applications received
# * Accept Number of applications accepted
# * Enroll Number of new students enrolled
# * Top10perc Pct. new students from top 10% of H.S. class
# * Top25perc Pct. new students from top 25% of H.S. class
# * F.Undergrad Number of fulltime undergraduates
# * P.Undergrad Number of parttime undergraduates
# * Outstate Out-of-state tuition
# * Room.Board Room and board costs
# * Books Estimated book costs
# * Personal Estimated personal spending
# * PhD Pct. of faculty with Ph.D.’s
# * Terminal Pct. of faculty with terminal degree
# * S.F.Ratio Student/faculty ratio
# * perc.alumni Pct. alumni who donate
# * Expend Instructional expenditure per student
# * Grad.Rate Graduation rate
# ## Import Libraries
#
# ** Import the libraries you usually use for data analysis.**
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
# ## Get the Data
# ** Read in the College_Data file using read_csv. Figure out how to set the first column as the index.**
df = pd.read_csv('College_Data',index_col=0)
# **Check the head of the data**
df.head()
# ** Check the info() and describe() methods on the data.**
df.info()
df.describe()
# ## EDA
#
# It's time to create some data visualizations!
#
# ** Create a scatterplot of Grad.Rate versus Room.Board where the points are colored by the Private column. **
sns.set_style('whitegrid')
sns.lmplot(x='Room.Board', y='Grad.Rate', data=df, hue='Private',
           palette='coolwarm', height=6, aspect=1, fit_reg=False)
# **Create a scatterplot of F.Undergrad versus Outstate where the points are colored by the Private column.**
sns.set_style('whitegrid')
sns.lmplot(x='Outstate', y='F.Undergrad', data=df, hue='Private',
           palette='coolwarm', height=6, aspect=1, fit_reg=False)
# ** Create a stacked histogram showing Out of State Tuition based on the Private column. Try doing this using [sns.FacetGrid](https://stanford.edu/~mwaskom/software/seaborn/generated/seaborn.FacetGrid.html). If that is too tricky, see if you can do it just by using two instances of pandas.plot(kind='hist'). **
sns.set_style('darkgrid')
g = sns.FacetGrid(df, hue="Private", palette='coolwarm', height=6, aspect=2)
g = g.map(plt.hist,'Outstate',bins=20,alpha=0.7)
# **Create a similar histogram for the Grad.Rate column.**
sns.set_style('darkgrid')
g = sns.FacetGrid(df, hue="Private", palette='coolwarm', height=6, aspect=2)
g = g.map(plt.hist,'Grad.Rate',bins=20,alpha=0.7)
# ** Notice how there seems to be a private school with a graduation rate of higher than 100%.What is the name of that school?**
df[df['Grad.Rate'] > 100]
# ** Set that school's graduation rate to 100 so it makes sense. You may get a warning (not an error) when doing this operation, so use dataframe operations (e.g. .loc) or just re-do the histogram visualization to make sure it actually went through.**
df.loc['Cazenovia College', 'Grad.Rate'] = 100
df[df['Grad.Rate'] > 100]
sns.set_style('darkgrid')
g = sns.FacetGrid(df, hue="Private", palette='coolwarm', height=6, aspect=2)
g = g.map(plt.hist,'Grad.Rate',bins=20,alpha=0.7)
# ## K Means Cluster Creation
#
# Now it is time to create the Cluster labels!
#
# ** Import KMeans from SciKit Learn.**
from sklearn.cluster import KMeans
# ** Create an instance of a K Means model with 2 clusters.**
kmeans = KMeans(n_clusters=2)
# **Fit the model to all the data except for the Private label.**
kmeans.fit(df.drop('Private',axis=1))
# ** What are the cluster center vectors?**
kmeans.cluster_centers_
# ## Evaluation
#
# There is no perfect way to evaluate clustering if you don't have the labels. However, since this is just an exercise, we do have the labels, so we will take advantage of this to evaluate our clusters. Keep in mind that you usually won't have this luxury in the real world.
#
# ** Create a new column for df called 'Cluster', which is a 1 for a Private school, and a 0 for a public school.**
def converter(cluster):
if cluster=='Yes':
return 1
else:
return 0
df['Cluster'] = df['Private'].apply(converter)
df.head()
# ** Create a confusion matrix and classification report to see how well the Kmeans clustering worked without being given any labels.**
from sklearn.metrics import confusion_matrix,classification_report
print(confusion_matrix(df['Cluster'],kmeans.labels_))
print(classification_report(df['Cluster'],kmeans.labels_))
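# KMeans assigns arbitrary cluster ids, so the 0/1 labels above may simply be flipped relative to the Private column. As a quick sanity check (my addition, not part of the original exercise), we can score both possible mappings and keep the better one:
acc = np.mean(kmeans.labels_ == df['Cluster'])
print('Accuracy under the better of the two label mappings:', max(acc, 1 - acc))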
# Not so bad considering the algorithm is purely using the features to cluster the universities into 2 distinct groups! Hopefully you can begin to see how K Means is useful for clustering un-labeled data!
#
# ## Great Job!
| Udemy/Refactored_Py_DS_ML_Bootcamp-master/17-K-Means-Clustering/03-K Means Clustering Project - Solutions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="intelligent-american" outputId="48187e6e-d2bb-4edc-8b74-0d584a8adcd6" papermill={"duration": 1.468375, "end_time": "2021-05-10T13:37:29.075098", "exception": false, "start_time": "2021-05-10T13:37:27.606723", "status": "completed"} tags=[]
# !nvidia-smi
# +
import torch
torch.cuda.is_available(), torch.cuda.current_device(), torch.cuda.device(0), torch.cuda.device_count(), torch.cuda.get_device_name(0)
# + id="arctic-federal" papermill={"duration": 4.173856, "end_time": "2021-05-10T13:39:48.379648", "exception": false, "start_time": "2021-05-10T13:39:44.205792", "status": "completed"} tags=[]
import os
import gc
import sys
import json
import time
import random
import logging
import warnings
from pathlib import Path
from functools import partial
from collections import OrderedDict
import cv2
import yaml
import psutil
import torch
import numpy as np
import pandas as pd
from tqdm import tqdm
import rasterio as rio
from PIL import Image
#import nvidia_smi
import albumentations as albu
import albumentations.pytorch as albu_pt
warnings.filterwarnings("ignore", category=rio.errors.NotGeoreferencedWarning)
import segmentation_models_pytorch as smp
import ttach as tta
# + id="familiar-mother" papermill={"duration": 0.036293, "end_time": "2021-05-10T13:39:48.435801", "exception": false, "start_time": "2021-05-10T13:39:48.399508", "status": "completed"} tags=[]
class TFReader:
"""Reads tiff files.
If subdatasets are available, then use them, otherwise just handle as usual.
"""
def __init__(self, path_to_tiff_file: str):
self.ds = rio.open(path_to_tiff_file)
self.subdatasets = self.ds.subdatasets
# print('subs:', self.subdatasets)
        self.is_subsets_avail = len(self.subdatasets) > 1  # original author's open question: should this be > 0?
if self.is_subsets_avail:
path_to_subdatasets = self.ds.subdatasets
self.list_ds = [rio.open(path_to_subdataset)
for path_to_subdataset in path_to_subdatasets]
def read(self, window=None, boundless=True):
if window is not None: ds_kwargs = {'window':window, 'boundless':boundless}
else: ds_kwargs = {}
if self.is_subsets_avail:
t = [ds.read(**ds_kwargs) for ds in self.list_ds]
output = np.vstack(t)
else:
output = self.ds.read(**ds_kwargs)
return output
@property
def shape(self):
return self.ds.shape
def __del__(self):
del self.ds
if self.is_subsets_avail:
del self.list_ds
def close(self):
self.ds.close()
if self.is_subsets_avail:
for i in range(len(self.list_ds)):
self.list_ds[i].close()
def get_images(train=False):
p = 'train' if train else 'test'
return list(Path(fr'C:\Users\yiju\Desktop\Copy\Data\hubmap-kidney-segmentation/{p}').glob('*.tiff'))
def get_random_crops(n=8, ss=512):
imgs = get_images(False)
img = random.choice(imgs)
tfr = TFReader(img)
W,H = tfr.shape
for i in range(n):
x,y,w,h = random.randint(5000, W-5000),random.randint(5000, H-5000), ss,ss
res = tfr.read(window=((y,y+h),(x,x+w)))
yield res
tfr.close()
def test_tf_reader():
window=((5000,5100),(5000,5100))
imgs = get_images(False)
for img in imgs:
tfr = TFReader(img)
res = tfr.read(window=window)
print(img.name, tfr.shape, tfr.is_subsets_avail, res.shape, res.dtype, res.max())
tfr.close()
# + id="focal-footage" papermill={"duration": 0.034738, "end_time": "2021-05-10T13:39:48.490289", "exception": false, "start_time": "2021-05-10T13:39:48.455551", "status": "completed"} tags=[]
def mb_to_gb(size_in_bytes): return round(size_in_bytes / (1024 * 1024 * 1024), 3)  # despite the name, this converts bytes to gigabytes
def get_ram_mems():
total, avail, used = mb_to_gb(psutil.virtual_memory().total), \
mb_to_gb(psutil.virtual_memory().available), \
mb_to_gb(psutil.virtual_memory().used)
return f'Total RAM : {total} GB, Available RAM: {avail} GB, Used RAM: {used} GB'
def load_model(model, model_folder_path):
model = _load_model_state(model, model_folder_path)
model.eval()
return model
def _load_model_state(model, path):
path = Path(path)
state_dict = torch.load(path, map_location=torch.device('cpu'))['model_state']
new_state_dict = OrderedDict()
for k, v in state_dict.items():
        if k.startswith('module.'):
            # strip the 'module.' prefix added by DataParallel wrappers
            k = k[len('module.'):]
new_state_dict[k] = v
del state_dict
model.load_state_dict(new_state_dict)
del new_state_dict
return model
# + [markdown] id="introductory-offering" papermill={"duration": 0.019563, "end_time": "2021-05-10T13:39:48.529648", "exception": false, "start_time": "2021-05-10T13:39:48.510085", "status": "completed"} tags=[]
# ## postp
# + id="unable-bridal" papermill={"duration": 0.04636, "end_time": "2021-05-10T13:39:48.595856", "exception": false, "start_time": "2021-05-10T13:39:48.549496", "status": "completed"} tags=[]
class ToTensor(albu_pt.ToTensorV2):
def apply_to_mask(self, mask, **params): return torch.from_numpy(mask).permute((2,0,1))
def apply(self, image, **params): return torch.from_numpy(image).permute((2,0,1))
def rescale(batch_img, scale):
return torch.nn.functional.interpolate(batch_img, scale_factor=(scale, scale), mode='bilinear', align_corners=False)
class MegaModel():
def __init__(self, model_folders, use_tta=True, use_cuda=True, threshold=.5, half_mode=False):
self.use_cuda = use_cuda
self.threshold = threshold
self.use_tta = use_tta
self.half_mode = half_mode
self.averaging = 'mean'
self._model_folders = model_folders#list(Path(root).glob('*')) # root / model1 ; model2; model3; ...
# TODO : SORT THEM
# self._model_types = [
# smp.Unet(encoder_name='timm-regnety_016', encoder_weights=None),
# smp.Unet(encoder_name='timm-regnety_016', encoder_weights=None, decoder_attention_type='scse')
# ]
self._model_types = [
smp.UnetPlusPlus(encoder_name='timm-regnety_016', encoder_weights=None, decoder_attention_type='scse'),
smp.Unet(encoder_name='timm-regnetx_032', encoder_weights=None),
smp.Unet(encoder_name='timm-regnety_016', encoder_weights=None),
smp.Unet(encoder_name='timm-regnety_016', encoder_weights=None, decoder_attention_type='scse')
]
self._model_scales = [3,3,3,3]
assert len(self._model_types) == len(self._model_folders)
assert len(self._model_scales) == len(self._model_folders)
self.models = self.models_init()
mean, std = [0.6226 , 0.4284 , 0.6705], [ 0.1246 , 0.1719 , 0.0956]
self.itransform = albu.Compose([albu.Normalize(mean=mean, std=std), ToTensor()])
print(self.models)
def models_init(self):
models = {}
tta_transforms = tta.Compose(
[
tta.HorizontalFlip(),
tta.Rotate90(angles=[0, 90])
]
)
for mt, mf, ms in zip(self._model_types, self._model_folders, self._model_scales):
mf = Path(mf)
model_names = list(mf.rglob('*.pth'))
mm = []
for mn in model_names:
m = load_model(mt, mn)
# print(mn)
if self.use_tta : m = tta.SegmentationTTAWrapper(m, tta_transforms, merge_mode='mean')
if self.use_cuda: m = m.cuda()
if self.half_mode: m = m.half()
mm.append(m)
models[mf] = {'scale': ms, 'models':mm}
return models
def _prepr(self, img, scale):
ch, H, W, dtype = *img.shape, img.dtype
assert ch==3 and dtype==np.uint8
img = img.transpose(1,2,0)
img = cv2.resize(img, (W // scale, H // scale), interpolation=cv2.INTER_AREA)
return self.itransform(image=img)['image']
def each_forward(self, model, scale, batch):
#print(batch.shape, batch.dtype, batch.max(), batch.min())
with torch.no_grad():
res = model(batch)
res = torch.sigmoid(res)
res = rescale(res, scale)
return res
def __call__(self, imgs, cuda=True):
batch = None
scale = None
preds = []
for mod_name, params in self.models.items(): # [{'a/b/c/1231.pth':{'type':Unet,'scale':3 }}, .. ]
_scale = params['scale']
if batch is None or scale != _scale:
scale = _scale
batch = [self._prepr(i, scale) for i in imgs]
batch = torch.stack(batch, axis=0)
if self.half_mode: batch = batch.half()
if self.use_cuda: batch = batch.cuda()
#print(batch.shape, batch.dtype, batch.max())
models = params['models']
_predicts = torch.stack([self.each_forward(m, scale, batch) for m in models])
preds.append(_predicts)
res = torch.vstack(preds).mean(0) # TODO : mean(0)? do right thing
res = res > self.threshold
return res
class Dummymodel(torch.nn.Module):
def forward(self, x):
x[x<.5]=0.9
return x
def infer(imgs, model):
assert isinstance(imgs, list)
# print("imgs length: ",len(imgs))
#c, h, w = imgs[0].shape
#batch = torch.rand(len(imgs), 1, h, w).float()
with torch.no_grad():
predicts = model(imgs)
return predicts
def get_infer_func(model_folders, use_tta, use_cuda, threshold):
m = MegaModel(model_folders=model_folders, use_tta=use_tta, use_cuda=use_cuda, threshold=threshold)
return partial(infer, model=m)
# + id="afraid-grounds" papermill={"duration": 0.027901, "end_time": "2021-05-10T13:39:48.643762", "exception": false, "start_time": "2021-05-10T13:39:48.615861", "status": "completed"} tags=[]
# model_folders = ['../input/pambmod5/models5/Unet_timm-regnety_016',
# '../input/pambmod5/models5/Unet_timm-regnety_016_scse'
# ]
# model_folders = ['../input/pambmod7sc/models7_SC/UnetPlusPlus_timm-regnety_016_scse',
# '../input/pambmod7sc/models7_SC/Unet_timm-regnety_016',
# '../input/pambmod7sc/models7_SC/Unet_timm-regnety_016_scse'
# ]
model_folders = ['./models8/UnetPlusPlus_timm-regnety_016_scse',
'./models8/Unet_timm-regnetx_032',
'./models8/Unet_timm-regnety_016',
'./models8/Unet_timm-regnety_016_scse'
]
def calc_infer_appr_time(trials, block_size, batch_size, model_folders, tta=True, use_cuda=True, scale = 3):
block_size = block_size * 3
pad = block_size // 4
f = get_infer_func(model_folders, use_tta=tta, use_cuda=use_cuda, threshold=.5)
for trial in tqdm(range(trials), position=0, leave=True):
imgs = list(get_random_crops(batch_size, block_size + 2*pad))
res = f(imgs).cpu()
# + id="loaded-tribute" papermill={"duration": 0.025744, "end_time": "2021-05-10T13:39:48.690115", "exception": false, "start_time": "2021-05-10T13:39:48.664371", "status": "completed"} tags=[]
# 5 images ~ 2700 blocks
# 512 8 True d2 - 6.2 hours
# + id="abroad-surrey" papermill={"duration": 0.025895, "end_time": "2021-05-10T13:39:48.736739", "exception": false, "start_time": "2021-05-10T13:39:48.710844", "status": "completed"} tags=[]
# calc_infer_appr_time(10, 512, 6, model_folders, True, True)
# + id="functional-alert" papermill={"duration": 0.036564, "end_time": "2021-05-10T13:39:48.793472", "exception": false, "start_time": "2021-05-10T13:39:48.756908", "status": "completed"} tags=[]
def get_gpu_mems():
nvidia_smi.nvmlInit()
handle = nvidia_smi.nvmlDeviceGetHandleByIndex(0)
info = nvidia_smi.nvmlDeviceGetMemoryInfo(handle)
total, avail, used = mb_to_gb(info.total), \
mb_to_gb(info.free), \
mb_to_gb(info.used)
nvidia_smi.nvmlShutdown()
return f'Total GPU mem: {total} GB, Available GPU mem: {avail} GB, Used GPU mem: {used} GB'
# + id="racial-aging" papermill={"duration": 0.046307, "end_time": "2021-05-10T13:39:48.860115", "exception": false, "start_time": "2021-05-10T13:39:48.813808", "status": "completed"} tags=[]
def _count_blocks(dims, block_size):
nXBlocks = (int)((dims[0] + block_size[0] - 1) / block_size[0])
nYBlocks = (int)((dims[1] + block_size[1] - 1) / block_size[1])
return nXBlocks, nYBlocks
def generate_block_coords(H, W, block_size):
h,w = block_size
nYBlocks = (int)((H + h - 1) / h)
nXBlocks = (int)((W + w - 1) / w)
for X in range(nXBlocks):
cx = X * h
for Y in range(nYBlocks):
cy = Y * w
yield cy, cx, h, w
def pad_block(y,x,h,w, pad): return np.array([y-pad, x-pad, h+2*pad, w+2*pad])
def crop(src, y,x,h,w): return src[..., y:y+h, x:x+w]
def crop_rio(ds, y,x,h,w):
block = ds.read(window=((y,y+h),(x,x+w)), boundless=True)
#block = np.zeros((3, h, w), dtype = np.uint8)
return block
def paste(src, block, y,x,h,w):src[..., y:y+h, x:x+w] = block
def paste_crop(src, part, block_cd, pad):
_,H,W = src.shape
y,x,h,w = block_cd
h, w = min(h, H-y), min(w, W-x)
part = crop(part, pad, pad, h, w)
paste(src, part, *block_cd)
def mp_func_wrapper(func, args): return func(*args)
def chunkify(l, n): return [l[i:i + n] for i in range(0, len(l), n)]
def mask2rle(img):
pixels = img.T.flatten()
pixels[0] = 0
pixels[-1] = 0
runs = np.where(pixels[1:] != pixels[:-1])[0] + 2
runs[1::2] -= runs[::2]
return ' '.join(str(x) for x in runs)
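# added note: mask2rle flattens the mask in column-major order (img.T.flatten()) and emits
# 1-indexed "start length" pairs, e.g. np.array([[0,1,1],[0,1,0]]) encodes to '3 3';
# it also zeroes the first and last flattened pixel, so masks touching those corners lose a pixel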
def infer_blocks(blocks, do_inference):
blocks = do_inference(blocks).cpu().numpy()
if isinstance(blocks, tuple): blocks = blocks[0]
#print(blocks.shape, blocks.dtype, blocks.max())
#c, h, w = blocks[0].shape
#blocks = np.ones((len(blocks), 1, h, w), dtype=np.int32)
#blocks = blocks > threshold
return blocks#.astype(np.uint8)
def start(img_name, do_inference):
logger.info('Start')
scale = 3
s = 512
block_size = s * scale
batch_size = 6
pad = s//4 * scale
ds = TFReader(str(img_name))
H, W = ds.shape
cds = list(generate_block_coords(H, W, block_size=(block_size, block_size)))
#cds = cds[:36]
total_blocks = len(cds)
    mask = np.zeros((1, H, W), dtype=bool)  # np.bool is deprecated/removed in recent NumPy
count = 0
batch = []
for block_cd in tqdm(cds):
if len(batch) == batch_size:
blocks = [b[0] for b in batch]
block_cds = [b[1] for b in batch]
# logger.info(get_gpu_mems())
block_masks = infer_blocks(blocks, do_inference)
[paste_crop(mask, block_mask, block_cd, pad) for block_mask, block_cd, _ in zip(block_masks, block_cds, blocks)]
batch = []
gc.collect()
padded_block_cd = pad_block(*block_cd, pad)
block = crop_rio(ds, *(padded_block_cd))
batch.append((block, block_cd))
#print(block_cd, block.shape)
count+=1
if batch:
blocks = [b[0] for b in batch]
block_cds = [b[1] for b in batch]
# logger.info(get_gpu_mems())
block_masks = infer_blocks(blocks, do_inference)
[paste_crop(mask, block_mask, block_cd, pad) for block_mask, block_cd, _ in zip(block_masks, block_cds, blocks)]
#print('Drop last batch', len(batch))
batch = []
print(mask.shape, mask.dtype, mask.max(), mask.min(), mask.sum()/np.prod(mask.shape))
#mask = (mask > threshold).astype(np.uint8)
ds.close()
print('to rle: ')
rle = mask2rle(mask)
return rle
# -
from py3nvml.py3nvml import *
nvmlInit()
handle = nvmlDeviceGetHandleByIndex(0)
info = nvmlDeviceGetMemoryInfo(handle)
# + id="satisfied-wonder" outputId="ebdfb2a5-208a-4f1f-89c4-2a785feac331" papermill={"duration": 8379.70989, "end_time": "2021-05-10T15:59:28.590566", "exception": false, "start_time": "2021-05-10T13:39:48.880676", "status": "completed"} tags=[]
logger = logging.getLogger('dev')
logger.setLevel(logging.INFO)
fileHandler = logging.FileHandler('log.log')
fileHandler.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fileHandler.setFormatter(formatter)
logger.addHandler(fileHandler)
# model_folders = ['../input/pambmod5/models5/Unet_timm-regnety_016',
# '../input/pambmod5/models5/Unet_timm-regnety_016_scse'
# ]
model_folders = ['./models8/UnetPlusPlus_timm-regnety_016_scse',
'./models8/Unet_timm-regnetx_032',
'./models8/Unet_timm-regnety_016',
'./models8/Unet_timm-regnety_016_scse'
]
threshold = 0.5
use_tta=True
use_cuda=True
do_inference = get_infer_func(model_folders, use_tta=use_tta, use_cuda=use_cuda, threshold=threshold)
imgs = get_images()
subm = {}
imgs
# -
# %%time
for img_name in imgs[:]:
print(img_name)
#if img_name.stem != '3589adb90': continue
#if img_name.stem != '2ec3f1bb9': continue
rle = start(img_name, do_inference=do_inference)
subm[img_name.stem] = {'id':img_name.stem, 'predicted': rle}
# + id="standard-yield" papermill={"duration": 0.717856, "end_time": "2021-05-10T15:59:30.056734", "exception": false, "start_time": "2021-05-10T15:59:29.338878", "status": "completed"} tags=[]
# a = subm['3589adb90']['predicted']
# b = subm['3589adb90']['predicted']
# df_sub = pd.DataFrame(subm).T
# df_sub.to_csv('3589adb90_b.csv', index=False)
# a = np.array(pd.read_csv('./3589adb90_a.csv')['predicted'])[0]
# b = np.array(pd.read_csv('./3589adb90_b.csv')['predicted'])[0]
# + id="mexican-demographic" papermill={"duration": 1.1191, "end_time": "2021-05-10T15:59:31.876043", "exception": false, "start_time": "2021-05-10T15:59:30.756943", "status": "completed"} tags=[]
# + id="global-cover" papermill={"duration": 0.718347, "end_time": "2021-05-10T15:59:33.416532", "exception": false, "start_time": "2021-05-10T15:59:32.698185", "status": "completed"} tags=[]
# shape = (22165, 29433)
# mask_a = rle2mask(a, shape)
# mask_b = rle2mask(b, shape)
# print(mask_a.shape)
# print(mask_b.shape)
# mask_a = mask_a.T/255
# mask_b = mask_b.T/255
# with torch.no_grad():
# mask_a = torch.from_numpy(mask_a).contiguous().view(-1)
# mask_b = torch.from_numpy(mask_b).contiguous().view(-1)
# #print(dice_loss(mask_a[int(1e6):3*int(1e6)], mask_b[int(1e6):3*int(1e6)]))
# for i in range(0, int(1e8), int(1e3)):
# print(dice_loss(mask_a[i:i+1000], mask_b[i:i+1000]))
# + id="lucky-communications" papermill={"duration": 0.714059, "end_time": "2021-05-10T15:59:34.825485", "exception": false, "start_time": "2021-05-10T15:59:34.111426", "status": "completed"} tags=[]
# def dice_loss(a, b, eps=1e-6):
# with torch.no_grad():
# intersection = (a * b).sum()
# dice = ((2. * intersection + eps) / (a.sum() + b.sum() + eps))
# return dice.item()
# def rle2mask(mask_rle, shape):
# s = mask_rle.split()
# starts, lengths = [np.asarray(x, dtype=np.int32) for x in (s[0:][::2], s[1:][::2])]
# starts -= 1
# ends = starts + lengths
# img = np.zeros(shape[0]*shape[1], dtype=np.uint8)
# for lo, hi in zip(starts, ends):
# img[lo:hi] = 255
# return img.reshape(shape)
# + id="satisfactory-marsh" outputId="ea1cbfe5-503e-4070-a64a-0b57d3f090d4" papermill={"duration": 0.868368, "end_time": "2021-05-10T15:59:36.411788", "exception": false, "start_time": "2021-05-10T15:59:35.543420", "status": "completed"} tags=[]
df_sub = pd.DataFrame(subm).T
df_sub
# + id="surrounded-anatomy" papermill={"duration": 1.240746, "end_time": "2021-05-10T15:59:38.496749", "exception": false, "start_time": "2021-05-10T15:59:37.256003", "status": "completed"} tags=[]
df_sub.to_csv('submission_kidney.csv', index=False)
# + id="smaller-winter" papermill={"duration": 0.701855, "end_time": "2021-05-10T15:59:39.914411", "exception": false, "start_time": "2021-05-10T15:59:39.212556", "status": "completed"} tags=[]
# -
| models/2-Gleb/SecondWinnerGleb_1_kidney.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Object-Oriented Programming
# We've talked about how everything in Python is an object. In addition, we've come to use many objects. However, we have not created any objects of our own. In this lecture, we will discuss object-oriented programming and what we can achieve with it. Here is what we will cover:
#
# 1. Classes
# > Attributes
# 2. Methods
# 3. Inheritance
#
# Let's take a look at the building block of creating an object - a class!
# ## Class
# The fundamental building block of an object is the class. A class defines all of the specifications of an object: its attributes, methods, and more. The declaration of a class begins with the "class" keyword.
#
# class Name(superclass):
# // etc
#
# We'll talk later about what superclass means. However, this is the opening statement for a class. Here's another example:
# Creating a class called Bike
class Bike:
pass
# If you do not already know, the word "instantiation" means to create an instance (a concrete version) of a class. Here is how we would *instantiate* a bike.
# An 'instance' of a bike
my_bike = Bike()
type(my_bike)
# Now, my_bike is an object reference to "Bike". This means that the variable doesn't actually hold the object in memory, but simply *points* to it.
# ## Attribute
# To give objects attributes (e.g. a bike's wheel size, speed, weight), you need to create the
#
# __init__(attributes...)
#
# function. This function is called whenever an object is instantiated. It's what assigns the values you want on an object. Let's see its use below.
class Bike:
def __init__(self, speed, wheel, weight):
self.speed = speed
self.wheel = wheel
self.weight = weight
# What just happened? We created the `__init__` method in Bike and provided it four parameters: self, speed, wheel, and weight. In the body of the method, we assigned each incoming parameter to the corresponding attribute on self. First, let's discuss self. The word *self* is not actually a keyword, but every Python method must take a reference to the object itself as its first parameter. You can use any name you like for that reference, but everyone uses the word "self" because it is simply convention.
#
# The **attributes** in this class are speed, wheel, and weight. In the method body, we set each attribute on the referenced object to whatever value was passed in. Let's try an instantiation below.
# Instantiating a Bike Object
woo = Bike(2, 4, 5)
# The instantiation checks out. Here's what happened:
#
# self.speed = 2
# self.wheel = 4
# self.weight = 5
#
# This allows us to use *dot notation* to access the properties. How do we get the wheel size of the bike? We use the following notation:
#
# object.attr
#
# It's that simple.
woo.speed
woo.wheel
woo.weight
# ## Methods
# We've discussed functions numerous times, and methods are essentially the same thing but defined inside of a class. Inside a method, we can also make use of the object's attribute values.
class Bike:
# __init__() function
def __init__(self, speed, wheel, weight):
self.speed = speed
self.wheel = wheel
self.weight = weight
# A method calculates the max weight of a person on the bike
def max_weight(self, rider_weight):
max_weight = rider_weight * self.weight
return max_weight
# Another method
def some_method(self):
pass
woo = Bike(2, 4, 5)
woo.max_weight(30)
# ## Special Methods
# There are a few methods in Python, referred to as "special methods", that you can override in your class. These methods allow you to control certain behaviors that happen behind the scenes in your program: things like how your object is printed, or how it reacts to operators, can be controlled via these methods. Let's look at one of them, `__str__`.
class Bike():
def __init__(self, speed, wheel, weight):
self.speed = speed
self.wheel = wheel
self.weight = weight
def __str__(self):
return "Bike Speed: {} Wheel Size: {} Weight: {}".format(self.speed, self.wheel, self.weight)
woo = Bike(3, 4, 5)
print(woo)
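# The outline at the top of this lecture also lists inheritance (and the `superclass` slot in the class statement). As a minimal sketch of the idea (my addition; the `ElectricBike` subclass below is hypothetical and used only for illustration), a subclass can reuse and extend `Bike`:
class ElectricBike(Bike):
    def __init__(self, speed, wheel, weight, battery):
        # reuse the parent class initializer for the shared attributes
        super().__init__(speed, wheel, weight)
        self.battery = battery
    def __str__(self):
        # extend the parent's __str__ output with the new attribute
        return super().__str__() + " Battery: {}".format(self.battery)
e_bike = ElectricBike(3, 4, 5, 500)
print(e_bike)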
| Object Oriented Programming/Object-Oriented Programming.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction to the Dataset
# Many American cities have communal bike sharing stations where you can rent bicycles by the hour or day. Washington, D.C. is one of these cities. The District collects detailed data on the number of bicycles people rent by the hour and day.
#
# <NAME> at the University of Porto compiled this data into a CSV file, which I will be working with in this project. The file contains 17380 rows, with each row representing the number of bike rentals for a single hour of a single day.
#
# Here's what the first five rows look like:
import pandas as pd
bike_rentals = pd.read_csv("C:/Users/Jennifer/Documents/Python/Data/bike_rental_hour.csv")
bike_rentals.head()
# Here are the descriptions for the relevant columns:
#
# instant - A unique sequential ID number for each row
# dteday - The date of the rentals
# season - The season in which the rentals occurred
# yr - The year the rentals occurred
# mnth - The month the rentals occurred
# hr - The hour the rentals occurred
# holiday - Whether or not the day was a holiday
#     weekday - The day of the week
# workingday - Whether or not the day was a working day
# weathersit - The weather (as a categorical variable)
# temp - The temperature, on a 0-1 scale
# atemp - The adjusted temperature
# hum - The humidity, on a 0-1 scale
# windspeed - The wind speed, on a 0-1 scale
# casual - The number of casual riders (people who hadn't previously signed up with the bike sharing program)
# registered - The number of registered riders (people who had already signed up)
# cnt - The total number of bike rentals (casual + registered)
#
# In this project, I will try to predict the total number of bikes people rented in a given hour. I will predict the cnt column using all of the other columns, except for casual and registered. To accomplish this, I will create a few different machine learning models and evaluate their performance.
# # Calculating Features
# It can often be helpful to calculate features before applying machine learning models. Features can enhance the accuracy of models by introducing new information, or distilling existing information.
#
# For example, the hr column in bike_rentals contains the hours during which bikes are rented, from 1 to 24. A machine will treat each hour differently, without understanding that certain hours are related. We can introduce some order into the process by creating a new column with labels for morning, afternoon, evening, and night. This will bundle similar times together, enabling the model to make better decisions.
# +
# %matplotlib inline
import matplotlib.pyplot as plt
plt.hist(bike_rentals["cnt"])
# -
bike_rentals.corr()["cnt"]
# +
def assign_label(hour):
if hour >=0 and hour < 6:
return 4
elif hour >=6 and hour < 12:
return 1
elif hour >= 12 and hour < 18:
return 2
elif hour >= 18 and hour <=24:
return 3
bike_rentals["time_label"] = bike_rentals["hr"].apply(assign_label)
# -
# # Splitting Data to Train/Test
# Before we can begin applying machine learning algorithms, we need to split the data into training and testing sets. This will enable us to train an algorithm using the training set, and evaluate its accuracy on the testing set. If you train an algorithm on the training data, then evaluate its performance on the same data, you can get an unrealistically low error value, due to overfitting.
# This line will generate a Boolean series that's False when a row in bike_rentals isn't found in train:
# bike_rentals.index.isin(train.index)
# This line will select any rows in bike_rentals that aren't found in train to be in the testing set:
# bike_rentals.loc[~bike_rentals.index.isin(train.index)]
train = bike_rentals.sample(frac=.8)
test = bike_rentals.loc[~bike_rentals.index.isin(train.index)]
print(train.shape,test.shape)
# # Applying Linear Regression
# Linear regression will probably work fairly well on this data, given that many of the columns are highly correlated with cnt.
#
# As you learned in earlier missions, linear regression works best when predictors are linearly correlated to the target and also independent -- in other words, they don't change meaning when we combine them with each other. The good thing about linear regression is that it's fairly resistant to overfitting because it's straightforward. It also can be prone to underfitting the data, however, and not building a powerful enough model. This means that linear regression usually isn't the most accurate option.
#
# I am going to ignore the casual and registered columns because cnt is derived from them. If you're trying to predict the number of people who rent bikes in a given hour (cnt), it doesn't make sense that you'd already know casual or registered, because those numbers are added together to get cnt.
# +
from sklearn.linear_model import LinearRegression
predictors = list(train.columns)
predictors.remove("cnt")
predictors.remove("casual")
predictors.remove("registered")
predictors.remove("dteday")
reg = LinearRegression()
reg.fit(train[predictors], train["cnt"])
# +
import numpy
predictions = reg.predict(test[predictors])
numpy.mean((predictions - test["cnt"]) ** 2)
# -
# # Error
# The error is very high, which may be due to the fact that the data has a few extremely high rental counts, but otherwise mostly low counts. Larger errors are penalized more with MSE, which leads to a higher total error.
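# As a quick illustration of that point (my addition, not part of the original analysis), the mean absolute error treats every miss equally, so comparing it with the MSE above on the same linear-regression predictions shows how much the few large errors dominate the squared metric:
numpy.mean(numpy.abs(predictions - test["cnt"]))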
# # Applying Decision Trees
# Now we're ready to apply the decision tree algorithm. We will be able to compare its error with the error from linear regression, which will enable us to pick the right algorithm for this data set.
#
# Decision trees tend to predict outcomes much more reliably than linear regression models. Because a decision tree is a fairly complex model, it also tends to overfit, particularly when we don't tweak parameters like maximum depth and minimum number of samples per leaf. Decision trees are also prone to instability -- small changes in the input data can result in a very different output model.
# +
from sklearn.tree import DecisionTreeRegressor
reg = DecisionTreeRegressor(min_samples_leaf=5)
reg.fit(train[predictors], train["cnt"])
# +
predictions = reg.predict(test[predictors])
numpy.mean((predictions - test["cnt"]) ** 2)
# +
reg = DecisionTreeRegressor(min_samples_leaf=2)
reg.fit(train[predictors], train["cnt"])
predictions = reg.predict(test[predictors])
numpy.mean((predictions - test["cnt"]) ** 2)
# -
# # Applying Random Forest
# We can now apply the random forest algorithm, which improves on the decision tree algorithm. Random forests tend to be much more accurate than simple models like linear regression. Due to the way random forests are constructed, they tend to overfit much less than decision trees. Random forests can still be prone to overfitting, though, so it's important to tune parameters like maximum depth and minimum samples per leaf.
# +
from sklearn.ensemble import RandomForestRegressor
reg = RandomForestRegressor(min_samples_leaf=5)
reg.fit(train[predictors], train["cnt"])
# +
predictions = reg.predict(test[predictors])
numpy.mean((predictions - test["cnt"]) ** 2)
# -
# # Conclusion
# By taking the nonlinear relationships into account, the decision tree regressor appears to have much higher accuracy than linear regression. By removing some of the sources of overfitting, the random forest improves on the decision tree's accuracy.
| PredictingBikeRentals.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:fight_prediction]
# language: python
# name: conda-env-fight_prediction-py
# ---
#
#
# # 1.1 Import Libraries and Data
# +
# Disable Warnings
import warnings
warnings.filterwarnings('ignore')
# General Libraries
import numpy as np
import pandas as pd
import pickle
import category_encoders as ce
# Import plotting libraries
# %matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
# Sklearn Specific
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import cross_validate
from sklearn import preprocessing
# sklearn algos
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn import svm
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB # Naive Bayes
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier # SGD
from sklearn.neural_network import MLPClassifier # neural network (multilayer perceptron)
# Bayesian Hyperparameter optimization
from hyperopt import hp, tpe, fmin, space_eval, Trials
# Import Data
with open('../data/temp_data/data_dicts_v2.pickle', 'rb') as handle:
data_dicts = pickle.load(handle)
# -
# # 1.2 Functions
# +
def split_encode_standardize(df):
# Split features and labels
y = df['result'].astype('int')
X = df.iloc[:,4:]
X = X.drop(['result'], axis = 1)
X = X.fillna(0)
# Wrangle
X['f_stance'] = X.apply(lambda x: str(x['f_stance']), axis = 1)
X['o_stance'] = X.apply(lambda x: str(x['o_stance']), axis = 1)
# Encode categorical data
ce_binary = ce.BinaryEncoder(cols = ['f_stance','o_stance'])
X = ce_binary.fit_transform(X, y)
# Scale features with mean = 0 and sd = 1
X = preprocessing.scale(X)
return(X,y)
def without_keys(d, keys):
return {x: d[x] for x in d if x not in keys}
# -
# # 2. Setup Bayesian Hyperparameter Optimization
# +
def objective_function(params):
"""Objective function to minimize: (1- cross validated test_accuracy_score)"""
h_model = params['model'] # Gets the model name
del params['model'] # Gets the hyperparameters
# Initialize model with parameters
if h_model == "RandomForestClassifier":
model = RandomForestClassifier(**params)
elif h_model == "KNeighborsClassifier":
model = KNeighborsClassifier(**params)
elif h_model == "LogisticRegression":
model = LogisticRegression(**params)
elif h_model == "SVC":
model = svm.SVC(**params)
elif h_model == "DecisionTreeClassifier":
model = DecisionTreeClassifier(**params)
elif h_model == "GaussianNB":
model = GaussianNB(**params)
elif h_model == "Perceptron":
model = Perceptron(**params)
elif h_model == "SGDClassifier":
model = SGDClassifier(**params)
elif h_model == "MLPClassifier":
model = MLPClassifier(**params)
scoring_stats = {'accuracy': 'accuracy',
'recall': 'recall',
'precision': 'precision',
'roc_auc': 'roc_auc'}
avg_scores = cross_validate(model, X, y, cv=5, scoring = scoring_stats)
cv_accuracy = np.mean(avg_scores['test_accuracy'])
return(1 - cv_accuracy)
def get_space(model):
model_name = type(model).__name__
if model_name == "LogisticRegression":
space_dict = {'model': model_name,
'dual': hp.choice('dual', [True,False]),
'fit_intercept': hp.choice('fit_intercept', [True,False])}
elif model_name == "RandomForestClassifier":
space_dict = {'model' : model_name,
'max_depth': hp.choice('max_depth', range(1,50)),
'max_features': hp.choice('max_features', range(1,50)),
'n_estimators': hp.choice('n_estimators', range(1,50)),}
elif model_name == "SVC": # Fix space dict
space_dict = {'model': model_name,
'kernel': hp.choice('kernel',['linear','poly','rbf','sigmoid']),
'shrinking': hp.choice('shrinking', [True, False])}
elif model_name == "KNeighborsClassifier":
space_dict = {'model': model_name,
'n_neighbors': hp.choice('n_neighbors', range(1,100)),
'weights': hp.choice('weights', ['uniform', 'distance']),
'algorithm': hp.choice('algorithm', ['auto', 'ball_tree', 'kd_tree', 'brute']),
'leaf_size': hp.choice('leaf_size', range(1,60))}
elif model_name == "DecisionTreeClassifier":
space_dict = {'model': model_name,
'criterion': hp.choice('criterion', ['gini','entropy']),
'splitter': hp.choice('splitter', ['best','random']),
'max_features': hp.choice('max_features', ['auto','sqrt','log2']),
}
elif model_name == "GaussianNB":
space_dict = {'model': model_name,
'var_smoothing': hp.uniform('var_smoothing', 1e-15, 1e-3)}
elif model_name == "Perceptron":
space_dict = {'model': model_name,
'alpha': hp.uniform('alpha', 1e-15, 1e-3),
'fit_intercept': hp.choice('fit_intercept', [True,False])}
elif model_name == "SGDClassifier":
space_dict = {'model': model_name,
'fit_intercept': hp.choice('fit_intercept', [True,False]),
'loss': hp.choice('loss', ['hinge','squared_hinge'])
}
elif model_name == "MLPClassifier":
space_dict = {'model': model_name,
'activation': hp.choice('activation ', ['identity','logistic','tanh','relu'])}
return(space_dict)
def get_max_evals(model_name):
if model_name == "LogisticRegression":
num_evals = 20
elif model_name == "RandomForestClassifier":
num_evals = 50
elif model_name == "SVC":
num_evals = 20
elif model_name == "KNeighborsClassifier":
num_evals = 20
elif model_name == "DecisionTreeClassifier":
num_evals = 20
elif model_name == "GaussianNB":
num_evals = 10
elif model_name == "Perceptron":
num_evals = 10
elif model_name == "SGDClassifier":
num_evals = 20
elif model_name == "MLPClassifier":
num_evals = 20
return(num_evals)
# -
# # 3. Initialize Models and Generate Dictionary for Results
# +
# Initialize models
models = []
blr_clf = LogisticRegression()
rf_clf = RandomForestClassifier()
svm_clf = svm.SVC()
knn_clf = KNeighborsClassifier()
dtree_clf = DecisionTreeClassifier()
nb_clf = GaussianNB()
perc_clf = Perceptron()
sgd_clf = SGDClassifier()
mlp_clf = MLPClassifier()
# add models to a list
models.extend((blr_clf,rf_clf,svm_clf,knn_clf, dtree_clf, nb_clf, perc_clf, sgd_clf, mlp_clf))
# Initialize a dictionary for scores, dataset
score_d = {}
score_d['hyper_opt_par'] = []
score_d['dict_type'] = []
score_d['dataset'] = []
score_d['num_obs'] = []
score_d['model_name'] = []
score_d['accuracy'] = []
score_d['precision'] = []
score_d['recall'] = []
score_d['roc_auc'] = []
score_d['hp_dict'] = []
# -
# # 5. Train and print scores for each model
cumu_dfs_dict = data_dicts[0]
fe_cumu_dfs_dict = data_dicts[1]
test_exact = fe_cumu_dfs_dict['Cumulative Data: 1 Fight Lookback Window']
dict_key = 1
for data_dict in data_dicts:
df_index = 0
for key in data_dict:
df_index += 1
if (df_index <= 0):
continue
if dict_key == 1:
dict_type = "cumu_dfs_dict"
else:
dict_type = "ef_cumu_dfs_dict"
df = data_dict[key].copy()
dataset = key
num_obs = df.shape[0]
X,y = split_encode_standardize(df)
for model in models:
model_name = type(model).__name__
print(model_name)
space = get_space(model) # Get space
# number_of_evals = get_max_evals(model_name) # get_max_evals from testing
best_h = fmin(fn=objective_function, space=space, algo=tpe.suggest, max_evals=5)
opt_hp = space_eval(space, best_h)
score_d['hyper_opt_par'].append(opt_hp)
print(opt_hp)
del opt_hp['model']
# create new model with tuned hyperparameters
if model_name == "RandomForestClassifier":
ht_model = RandomForestClassifier(**opt_hp)
elif model_name == "KNeighborsClassifier":
ht_model = KNeighborsClassifier(**opt_hp)
elif model_name == "LogisticRegression":
ht_model = LogisticRegression(**opt_hp)
elif model_name == "SVC":
ht_model = svm.SVC(**opt_hp)
elif model_name == "DecisionTreeClassifier":
ht_model = DecisionTreeClassifier(**opt_hp)
elif model_name == "GaussianNB":
ht_model = GaussianNB(**opt_hp)
elif model_name == "Perceptron":
ht_model = Perceptron(**opt_hp)
elif model_name == "SGDClassifier":
ht_model = SGDClassifier(**opt_hp)
elif model_name == "MLPClassifier":
ht_model = MLPClassifier(**opt_hp)
scoring_stats = {'accuracy': 'accuracy',
'recall': 'recall',
'precision': 'precision',
'roc_auc': 'roc_auc'}
# Calculate scores
avg_scores = cross_validate(ht_model, X, y, cv=5, scoring = scoring_stats)
accuracy = np.mean(avg_scores['test_accuracy'])
precision = np.mean(avg_scores['test_precision'])
recall = np.mean(avg_scores['test_recall'])
roc_auc = np.mean(avg_scores['test_roc_auc'])
# append to dictionary
score_d['dict_type'].append(dict_type)
score_d['dataset'].append(dataset)
score_d['num_obs'].append(num_obs)
score_d['model_name'].append(model_name)
score_d['accuracy'].append(accuracy)
score_d['precision'].append(precision)
score_d['recall'].append(recall)
score_d['roc_auc'].append(roc_auc)
score_d['hp_dict'].append(opt_hp)
dict_key += 1
scores_dict = without_keys(score_d, {'hp_dict'})  # pass a set of keys so only the exact 'hp_dict' key is dropped
scores_df = pd.DataFrame(scores_dict)
scores_df
non_fe = scores_df[scores_df['dict_type'] == "cumu_dfs_dict"]
yes_fe = scores_df[scores_df['dict_type'] == "ef_cumu_dfs_dict"]
print("The mean non-fe accuracy is: " + str(np.mean(non_fe['accuracy'])))
print("The mean fe accuracy is: " + str(np.mean(yes_fe['accuracy'])))
scores_df.to_csv("../data/scores/hopt_scores_v3.csv")
# save dictionary using pickle
with open('../data/temp_data/hyperopt.pickle', 'wb') as handle:
pickle.dump(scores_dict, handle, protocol=pickle.HIGHEST_PROTOCOL)
with open('../data/temp_data/hyperopt.pickle', 'rb') as handle:
hyperopt_dict = pickle.load(handle)
# # Preliminary Analysis
# - The hyperparameters from the sklearn classifiers were set to their defaults and will be tuned.
#
#
# - Nevertheless, the effectiveness of each model appears to vary based on the length of the lookback window. I will soon be transforming the printed text data above into nicer-looking graphs (a rough sketch follows below).
#
#
# - Please scroll around in the above cell to view the accuracy, recall, precision, and ROC-AUC obtained with cross-validation.
#
#
# - Now, most of the accuracy percentages are hovering near the 50% mark. As noted in previous literature, this data is inherently noisy, which will likely make it very difficult to reach an accuracy of over 60%. The issue is even worse when generating the data with a lookback window, because the number of observations decreases substantially.
#
#
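# A minimal sketch (my addition, not the author's final graphs) of the kind of summary plot mentioned above: mean cross-validated accuracy per classifier, averaged over all of the datasets scored so far.
(scores_df.groupby('model_name')['accuracy']
          .mean()
          .sort_values()
          .plot(kind='barh', figsize=(8, 4), title='Mean CV accuracy by model'))
plt.xlabel('mean cross-validated accuracy')
plt.show()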
# # Calculate Max Evals Required for each model
fe_cumu_1lb = data_dicts[1]['Cumulative Data: 1 Fight Lookback Window']
X,y = split_encode_standardize(fe_cumu_1lb)
max_evals_dict = {}
max_evals_dict['hyper_opt_par'] = []
max_evals_dict['model_name'] = []
max_evals_dict['accuracy'] = []
max_evals_dict['hp_dict'] = []
# +
models = []
blr_clf = LogisticRegression()
rf_clf = RandomForestClassifier()
svm_clf = svm.SVC()
knn_clf = KNeighborsClassifier()
dtree_clf = DecisionTreeClassifier()
nb_clf = GaussianNB()
perc_clf = Perceptron()
sgd_clf = SGDClassifier()
mlp_clf = MLPClassifier()
# add models to a list
models.extend((blr_clf,rf_clf,svm_clf,knn_clf, dtree_clf, nb_clf, perc_clf, sgd_clf, mlp_clf))
# -
for model in models:
model_name = type(model).__name__
print(model_name)
space = get_space(model) # Get space
# number_of_evals = get_max_evals(model_name) # get_max_evals from testing
trials = Trials()
best_h = fmin(fn=objective_function, space=space, algo=tpe.suggest, max_evals=30, trials = trials)
# Plot accuracy over evaluations
f, ax = plt.subplots(1)
xs = [t['tid'] for t in trials.trials]
ys = [1 - t['result']['loss'] for t in trials.trials]
ax.scatter(xs, ys, s=20, linewidth=0.01, alpha=0.75)
ax.set_title(model_name + ' $Accuracy$ $vs$ $Trial Number$ ', fontsize=18)
ax.set_xlabel('$trial number$', fontsize=16)
ax.set_ylabel('$Accuracy$', fontsize=16)
opt_hp = space_eval(space, best_h)
max_evals_dict['hyper_opt_par'].append(opt_hp)
print(opt_hp)
del opt_hp['model']
# create new model with tuned hyperparameters
if model_name == "RandomForestClassifier":
ht_model = RandomForestClassifier(**opt_hp)
elif model_name == "KNeighborsClassifier":
ht_model = KNeighborsClassifier(**opt_hp)
elif model_name == "LogisticRegression":
ht_model = LogisticRegression(**opt_hp)
elif model_name == "SVC":
ht_model = svm.SVC(**opt_hp)
elif model_name == "DecisionTreeClassifier":
ht_model = DecisionTreeClassifier(**opt_hp)
elif model_name == "GaussianNB":
ht_model = GaussianNB(**opt_hp)
elif model_name == "Perceptron":
ht_model = Perceptron(**opt_hp)
elif model_name == "SGDClassifier":
ht_model = SGDClassifier(**opt_hp)
elif model_name == "MLPClassifier":
ht_model = MLPClassifier(**opt_hp)
scoring_stats = {'accuracy': 'accuracy',
'recall': 'recall',
'precision': 'precision',
'roc_auc': 'roc_auc'}
# Calculate scores
avg_scores = cross_validate(ht_model, X, y, cv=5, scoring = scoring_stats)
accuracy = np.mean(avg_scores['test_accuracy'])
# append to dictionary
max_evals_dict['model_name'].append(model_name)
max_evals_dict['accuracy'].append(accuracy)
max_evals_dict['hp_dict'].append(opt_hp)
pd.DataFrame(max_evals_dict)
| notebooks/6-jmk-machine learning data as is.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
import tensorflow as tf
import matplotlib.pyplot as plt
print(mnist)
print(mnist.train.images[3])
def display_digit(num):
print(mnist.train.labels[num])
label = mnist.train.labels[num].argmax(axis=0)
image = mnist.train.images[num].reshape([28,28])
plt.title('Example: %d Label: %d' % (num, label))
plt.imshow(image, cmap=plt.get_cmap('gray_r'))
plt.show()
# +
display_digit(3)
# -
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
y_ = tf.placeholder(tf.float32, [None, 10])
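# note (added): the hand-rolled cross-entropy below, -sum(y_ * log(y)), is numerically unstable when y contains zeros;
# the tf.nn.softmax_cross_entropy_with_logits formulation used later in this notebook is the safer choice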
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
for _ in range(1000):
batch_xs, batch_ys = mnist.train.next_batch(100)
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
# + active=""
# Deep MNIST tutorial
# -
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
import tensorflow as tf
sess = tf.InteractiveSession()
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
sess.run(tf.global_variables_initializer())
y = tf.matmul(x,W) + b
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
for _ in range(1000):
batch = mnist.train.next_batch(100)
train_step.run(feed_dict={x: batch[0], y_: batch[1]})
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
# +
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
# +
def conv2d(x, W):
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
def max_pool_2x2(x):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# -
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
x_image = tf.reshape(x, [-1, 28, 28, 1])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
# +
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
# +
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
# -
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
# +
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
# +
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(1000):
batch = mnist.train.next_batch(50)
if i % 100 == 0:
train_accuracy = accuracy.eval(feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})
print('step %d, training accuracy %g' % (i, train_accuracy))
train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
print('test accuracy %g' % accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
sess.close()
# +
| Digit+recog+v2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # ML Pipeline Preparation
# ### 1. Import libraries and load data from database.
# >- Import Python libraries
# >- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
# >- Define feature and target variables X and Y
# +
#import libraries
#measuring time and making basic math
from time import time
import math
import numpy as np
import udacourse2 #my library for this project!
import statistics
#my own ETL pipeline
#import process_data as pr
#dealing with datasets and showing content
import pandas as pd
#import pprint as pp
#SQLAlchemy toolkit
from sqlalchemy import create_engine
from sqlalchemy import pool
from sqlalchemy import inspect
#natural language toolkit
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
#REGEX toolkit
import re
#Machine Learning preparing/preprocessing toolkits
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
#Machine Learning Feature Extraction tools
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
#Machine Learning Classifiers
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier #need MOClassifier!
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import SGDClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier
#Machine Learning Classifiers extra tools
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import Pipeline
#Machine Learning Metrics
from sklearn.metrics import f1_score
from sklearn.metrics import classification_report
#pickling tool
import pickle
# -
# When trying to use NLTK, I got the following error:
#
# - the point is, it's not only about installing the library
#
# - you also need to install the supporting corpora/dictionaries for the tasks you want to run
#
# - this can be solved quite easily (hopefully I will find a Brazilian-Portuguese dictionary when I need to put this into practice in my own work)
# LookupError:
# **********************************************************************
# Resource stopwords not found.
# Please use the NLTK Downloader to obtain the resource:
#
# >>> import nltk
# >>> nltk.download('stopwords')
#
# For more information see: https://www.nltk.org/data.html
#
# Attempted to load corpora/stopwords`
# +
#import nltk
#nltk.download('punkt')
# -
# LookupError:
# **********************************************************************
# Resource stopwords not found.
# Please use the NLTK Downloader to obtain the resource:
#
# >>> import nltk
# >>> nltk.download('stopwords')
# +
#nltk.download('stopwords')
# -
# LookupError:
# **********************************************************************
# Resource wordnet not found.
# Please use the NLTK Downloader to obtain the resource:
#
# >>> import nltk
# >>> nltk.download('wordnet')
# +
#nltk.download('wordnet')
# +
#load data from database
#setting NullPool prevents a pool, so it is easy to close the database connection
#in our case, the DB is so simple that it looks like the best choice
#SQLAlchemy documentation
#https://docs.sqlalchemy.org/en/14/core/reflection.html
engine = create_engine('sqlite:///Messages.db', poolclass=pool.NullPool) #, echo=True)
#retrieving tables names from my DB
#https://stackoverflow.com/questions/6473925/sqlalchemy-getting-a-list-of-tables
inspector = inspect(engine)
print('existing tables in my SQLite database:', inspector.get_table_names())
# -
# As my target is the Messages table, I read this table into a Pandas dataframe
# +
#importing MySQL to Pandas
#https://stackoverflow.com/questions/37730243/importing-data-from-a-mysql-database-into-a-pandas-data-frame-including-column-n/37730334
#connection_str = 'mysql+pymysql://mysql_user:mysql_password@mysql_host/mysql_db'
#connection = create_engine(connection_str)
connection = engine.connect()
df = pd.read_sql('SELECT * FROM Messages', con=connection)
connection.close()
df.name = 'df'
df.head(1)
# -
# Splitting into X and Y datasets:
#
# - X is the **Message** column
X = df['message']
X.head(1)
# - Y holds the **Classification** labels
#
# - I excluded all the columns that don't make sense as labels for classifying our messages
Y = df[df.columns[4:]]
Y.head(1)
# ### 2. Write a tokenization function to process your text data
msg_text = X.iloc[0]
msg_text
# +
#let's insert some noise to see if it is filtering well
msg_text = "Weather update01 - a 00cold-front from Cuba's that could pass over Haiti' today"
low_text = msg_text.lower()
#I need to take only valid words
#a basic one (very common in Regex courses classes)
gex_text = re.sub(r'[^a-zA-Z]', ' ', low_text)
#other tryed sollutions from several sources
#re.sub(r'^\b[^a-zA-Z]\b', ' ', low_text)
#re.sub(r'^/[^a-zA-Z ]/g', ' ', low_text)
#re.sub(r'^/[^a-zA-Z0-9 ]/g', ' ', low_text)
gex_text
# -
# Found this [here](https://stackoverflow.com/questions/1751301/regex-match-entire-words-only)
#
# - the '-' disappeared, so it's not so nice!
re.sub(r'^/\b($word)\b/i', ' ', low_text)
re.sub(r'^\b[a-zA-Z]{3}\b', ' ', low_text)
re.sub(r'^[a-zA-Z]{3}$', ' ', low_text)
col_words = word_tokenize(gex_text)
col_words
unnuseful = stopwords.words("english")
relevant_words = [word for word in col_words if word not in unnuseful]
relevant_words
# I noticed a lot of geographic references. I think they will not be so useful for us. Let´s try to remove them too...
#
# References for City at NLKT [here](https://stackoverflow.com/questions/37025872/unable-to-import-city-database-dataset-from-nltk-data-in-anaconda-spyder-windows?rq=1)
import nltk.sem.chat80 as ct #.sql_demo()
# LookupError:
# **********************************************************************
# Resource city_database not found.
# Please use the NLTK Downloader to obtain the resource:
#
# >>> import nltk
# >>> nltk.download('city_database')
#
# For more information see: https://www.nltk.org/data.html
#
# Attempted to load corpora/city_database/city.db
#
# Searched in:
# - 'C:\\Users\\epass/nltk_data'
# - 'C:\\ProgramData\\Anaconda3\\nltk_data'
# - 'C:\\ProgramData\\Anaconda3\\share\\nltk_data'
# - 'C:\\ProgramData\\Anaconda3\\lib\\nltk_data'
# - 'C:\\Users\\epass\\AppData\\Roaming\\nltk_data'
# - 'C:\\nltk_data'
# - 'D:\\nltk_data'
# - 'E:\\nltk_data'
# **********************************************************************
# +
#import nltk
#nltk.download('city_database')
# -
countries = {
country:city for city, country in ct.sql_query(
"corpora/city_database/city.db",
"SELECT City, Country FROM city_table"
)
}
# They look nice (and lower cased):
#
# - observe possible errors with composite names, like united_states
for c in countries:
print(c)
# I couldn't find Haiti:
#
# - countries list is not complete!
#
# - it gives `KeyError: 'haiti'`
# +
#countries['haiti']
# -
nogeo_words = [word for word in relevant_words if word not in countries]
nogeo_words
# Unfortunately, it's only a **demo**! We need something better for our project...
#df_cities = pd.read_csv('cities15000.txt', sep=';')
df_cities = pd.read_csv('cities15000.txt', sep='\t', header=None)
df_cities_15000 = df_cities[[1, 17]]
df_cities_15000.columns = ['City', 'Region']
df_cities_15000.head(5)
# Tried this [here](https://data.opendatasoft.com/explore/dataset/geonames-all-cities-with-a-population-1000%40public/information/?disjunctive.cou_name_en)
df_cities.head(5)
# found country names at Github [here](https://github.com/lukes/ISO-3166-Countries-with-Regional-Codes/blob/master/all/all.csv)
#
# - a small trick and we have our own countries list!
df_countries = pd.read_csv('all.csv')
df_countries = df_countries['name'].apply(lambda x: x.lower())
countries = df_countries.tolist()
countries
# I can eliminate a lot (perhaps not all) of the country names. In our case, they produce noise in our data.
nogeo_words = [word for word in relevant_words if word not in countries]
nogeo_words
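# A small aside: membership tests against a plain Python `list` are O(n) per token, so converting the country names to a `set` makes this kind of filtering much faster over the full dataset; a quick sketch:
# +
#set membership is O(1) on average, which speeds up the "word not in countries" check
countries_set = set(countries)
fast_nogeo = [word for word in relevant_words if word not in countries_set]
fast_nogeo
# -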
# First test:
#
# - over the first message only
message = 'Weather update - a cold front from Cuba that could pass over Haiti'
tokens = udacourse2.fn_tokenize_fast(msg_text,
verbose=True)
# +
message = 'Weather update - a cold front from Cuba that could pass over Haiti'
tokens = udacourse2.fn_tokenize(msg_text,
lemmatize=True,
rem_city=True,
agg_words=True,
rem_noise=True,
elm_short=3,
verbose=True)
tokens
# -
# It's not so cool, some noise is still appearing in the lemmatized words:
#
# - an "l" was found, as in **French words**, like *l'orange*;
#
# - my **City** filter needs a lot of improvement, as it didn't filter avenues and so many other **geographic** references;
#
# - it let through a lot of useless words of **two** or fewer letters, such as **u**, **st**;
#
# - a lot of noisy words such as **help**, **thanks**, **please** were found;
#
# - there are several word **repetitions** in some messages, like ['river', ... 'river', ...]
# Basic test call
#
# - only for the first messages (the break limit in the loop below is adjustable), verbose
# +
b_start = time()
i = 0
for message in X:
out = udacourse2.fn_tokenize_fast(message,
verbose=True)
i += 1
if i > 200: #it´s only for test, you can adjust it!
break
b_spent = time() - b_start
print('process time:{:.0f} seconds'.format(b_spent))
# -
# Another Call:
# +
b_start = time()
i = 0
for message in X:
print(message)
out = udacourse2.fn_tokenize(message,
lemmatize=True,
rem_city=True,
agg_words=True,
rem_noise=True,
elm_short=3,
great_noisy=True,
verbose=True)
print(out)
print()
i += 1
if i > 20: #it´s only for test, you can adjust it!
break
b_spent = time() - b_start
print('process time:{:.4f} seconds'.format(b_spent))
# -
# Don't try it! (complete tokenizer)
#
# - it's a slow test! (it takes around 221 seconds to tokenize the whole dataframe)
# +
#b_start = time()
#X_tokens = X.apply(lambda x: udacourse2.fn_tokenize(x,
# lemmatize=True,
# rem_city=True,
# agg_words=True,
# rem_noise=True,
# elm_short=3,
# great_noisy=True,
# verbose=False))
#b_spent = time() - b_start
#print('process time:{:.0f} seconds'.format(b_spent))
# -
# - it's a faster test (it takes about 46 seconds to run)
#
# - the secret is that it loops only once per row, as it condenses all the filters into one pass
# +
b_start = time()
X_tokens = X.apply(lambda x: udacourse2.fn_tokenize_fast(x,
verbose=False))
b_spent = time() - b_start
print('process time:{:.0f} seconds'.format(b_spent))
# -
# Now I have a **series** with all my tokenized messages:
X_tokens.head(5)
# And I can filter it for rows that have an **empty list**:
#
# - solution found [here](https://stackoverflow.com/questions/29100380/remove-empty-lists-in-pandas-series)
X_tokens[X_tokens.str.len() == 0]
ser2 = X_tokens[X_tokens.str.len() > 0]
ser2
# +
b_start = time()
dic_tokens = udacourse2.fn_subcount_lists(column=X_tokens,
verbose=False)
b_spent = time() - b_start
print('process time:{:.0f} seconds'.format(b_spent))
# -
# Sorted dictionary [here](https://stackoverflow.com/questions/613183/how-do-i-sort-a-dictionary-by-value)
# +
dic_tokens
d_tokens = dic_tokens['elements']
t_sorted = sorted(d_tokens.items(), key=lambda kv: kv[1], reverse=True)
if t_sorted:
print('data processed')
# -
# Sorted list of tuples of most counted tokens:
#
# - filtering the 300 most counted elements
t_sorted[:300]
# Modifying the **tokenize** function just to absorb less meaningful tokens for discarding:
#
# - **ver 1.2** update: tokenizer function created!
great_noisy = ['people', 'help', 'need', 'said', 'country', 'government', 'one', 'year', 'good', 'day',
'two', 'get', 'message', 'many', 'region', 'city', 'province', 'road', 'district', 'including', 'time',
'new', 'still', 'due', 'local', 'part', 'problem', 'may', 'take', 'come', 'effort', 'note', 'around',
'person', 'lot', 'already', 'situation', 'see', 'response', 'even', 'reported', 'caused', 'village', 'bit',
'made', 'way', 'across', 'west', 'never', 'southern', 'january', 'least', 'zone', 'small', 'next', 'little',
'four', 'must', 'non', 'used', 'five', 'wfp', 'however', 'com', 'set', 'every', 'think', 'item', 'yet',
'carrefour', 'asking', 'ask', 'site', 'line', 'put', 'unicef', 'got', 'east', 'june', 'got', 'ministry']
# ---
#
# #### Older attempt to clean tokens
#
# Tried to isolate some words that I think are noisy, for exclusion:
#
# - general geographic references, as **area** and **village**;
#
# - social communication words, as **thanks** and **please**;
#
# - religious ways to talk, as **pray**
#
# - meaningless words, as **thing** and **like**
#
# - visually filtered some words that I think don't add much to the **Machine Learning**
#
# - just think about it - do you prefer your **AI** trained on 'thanks' or on 'hurricane'?
#
# - honestly, I'm not 100% sure about these words, but my **tokenize** function can enable and disable this list, so we can re-train the machine and see if the performance increases or decreases
unhelpful_words = ['thank', 'thanks', 'god', 'fine', 'number', 'area', 'let', 'stop', 'know', 'going', 'thing',
'would', 'hello', 'say', 'neither', 'right', 'asap', 'near', 'want', 'also', 'like', 'since', 'grace',
'congratulate', 'situated', 'tell', 'almost', 'hyme', 'sainte', 'croix', 'ville', 'street', 'valley', 'section',
'carnaval', 'rap', 'cry', 'location', 'ples', 'bless', 'entire', 'specially', 'sorry', 'saint', 'village',
'located', 'palace', 'might', 'given']
# Testing **duplicate elimination**:
test = ['addon', 'place', 'addon']
test = list(set(test))
test
# Testing **short word elimination**:
# +
min_len = 3 #renamed to avoid shadowing the built-in min()
list2 = []
test2 = ['addon', 'l', 'us', 'place']
for word in test2:
    if len(word) < min_len:
        print('eliminate:', word)
    else:
        list2.append(word)
list2
# -
# solution [here](https://stackoverflow.com/questions/3501382/checking-whether-a-variable-is-an-integer-or-not)
if isinstance(min_len, int):
    print('OK')
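# The same two clean-ups (deduplication and short-word removal) can also be written more compactly; a small sketch:
# +
#compact version of the two tests above: deduplicate with a set, then drop words shorter than min_len
min_len = 3
test3 = ['addon', 'l', 'us', 'place', 'addon']
long_enough = [word for word in set(test3) if len(word) >= min_len]
long_enough
# -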
# Now I have two **Tokenizer** functions:
#
# - `fn_tokenize` $\rightarrow$ it allows testing each individual method, and contains all the methods described, but it is a bit slow, as it iterates over all the words again for each method
#
# - `fn_tokenize_fast` $\rightarrow$ it is a **boosted** version, with only one iteration, for running faster, but you cannot toggle each method individually for a more fine-grained test (a minimal sketch of the single-pass idea is shown below)
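# A minimal sketch of that single-pass idea (the real, more complete functions live in `udacourse2.py`; this sketch assumes the NLTK `stopwords` and `wordnet` resources are already downloaded):
# +
#minimal sketch of the single-pass idea behind fn_tokenize_fast (illustrative only)
import re
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize

def sketch_tokenize_fast(text, min_len=3):
    '''lowercase, keep only letters, then apply all filters in a single loop'''
    stop_set = set(stopwords.words('english'))
    lemmatizer = WordNetLemmatizer()
    words = word_tokenize(re.sub(r'[^a-zA-Z]', ' ', text.lower()))
    tokens = []
    for word in words:
        if word in stop_set or len(word) < min_len:
            continue #stopword and short-word filters handled in the same pass
        tokens.append(lemmatizer.lemmatize(word))
    return tokens

sketch_tokenize_fast("Weather update - a cold front from Cuba that could pass over Haiti")
# -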
# ### 3. Build a machine learning pipeline
# This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
#
#
# ---
#
# ### A small review over each item for our first machine learning pipelines
#
# #### Feature Extraction
#
# Feature Extraction from SKlearn documentation [here](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)
#
# "Convert a collection of text documents to a matrix of token counts"
#
# - we are looking for **tokens** that will be turned into **vectors** in a Machine Learning Model;
#
# - they are represented as **scalars** in a **matrix**, that indicates the scale of each one of these tokens.
#
# "This implementation produces a sparse representation of the counts using scipy.sparse.csr_matrix."
#
# - normally matrix representations of the natural reallity are a bit **sparse**
#
# - in this case, to save some memory, they indicate a use of a propper representation
#
# "If you do not provide an a-priori dictionary and you do not use an analyzer that does some kind of feature selection then the number of features will be equal to the vocabulary size found by analyzing the data."
#
# - we already did it, drastically reducing the **variability** of terms
#
# - it is represented by our **fn_tokenizer**
#
# #### Preprocessing
#
# TF-IDF from SKlearn documentation [here](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfTransformer.html)
#
# - **tf** is about **term frequency** and;
#
# - **idf** is about **inverse document frequency**.
#
# "Transform a count matrix to a normalized tf or tf-idf representation"
#
# - it means that it basically **normalizes** the count matrix
#
# *Tf means term-frequency while tf-idf means term-frequency times inverse document-frequency. This is a common term weighting scheme in information retrieval, that has also found good use in document classification.*
#
# - it takes the term-frequency and **rescales** it by the general document-frequency
#
# *The goal of using tf-idf instead of the raw frequencies of occurrence of a token in a given document is to scale down the impact of tokens that occur very frequently in a given corpus and that are hence empirically less informative than features that occur in a small fraction of the training corpus.*
#
# - the idea is to not weight too much a **noisy** and very frequent word
#
# - we tried to "manually" eliminate some of the **noisy** words, but as the number of tokens is too high, it's quite impossible to do a good job (a tiny toy example of the CountVectorizer + TF-IDF steps is shown at the end of this review)
#
# #### Training a Machine Learning
#
# As we have **labels**, a good strategy is to use **supervised learning**
#
# - we could try to kind of make **clusters** of messages, using **unsupervised learning**, or try some strategy on **semi-supervised learning**, as we have some of the messages (40) that don´t have any classification;
#
# - the most obvious way is to train a **Classifier**;
#
# - as we have multiple labels, a **Multi Target Classifier** seems to be the better choice.
#
# Multi target classification [here](https://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html)
#
# "This strategy consists of fitting one classifier per target. This is a simple strategy for extending classifiers that do not natively support multi-target classification"
#
# - OK, we will basically be training one classifier per label, as we don't have many machines that natively support multi-target.
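# To make the **CountVectorizer** + **TfidfTransformer** steps above concrete, a tiny toy example (not part of the project data, just an illustration using the already-imported classes):
# +
#toy example: CountVectorizer -> sparse count matrix, TfidfTransformer -> normalized tf-idf weights
toy_corpus = [['water', 'food', 'help'],
              ['water', 'flood', 'shelter'],
              ['earthquake', 'help', 'help']]

def toy_dummy(doc):
    return doc #the toy documents are already tokenized lists

toy_vect = CountVectorizer(tokenizer=toy_dummy, preprocessor=toy_dummy)
toy_counts = toy_vect.fit_transform(toy_corpus)
toy_tfidf = TfidfTransformer().fit_transform(toy_counts)
print('vocabulary:', toy_vect.vocabulary_)
print('count matrix:\n', toy_counts.toarray())
print('tf-idf matrix:\n', toy_tfidf.toarray())
# -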
# ## I. Prepare the data
#
# Make the last operations to prepare the dataset for training the **Machine Learning** model
# For **training** data, rows where all the labels are blank are a **data inconsistency**
#
# - so we have 6,317 rows that we need to **remove** before **training**
print('all labels are blank in {} rows'.format(df[df['if_blank'] == 1].shape[0]))
df = df[df['if_blank'] == 0]
df.shape[0]
# Verifying if removal was complete
if df[df['if_blank'] == 1].shape[0] == 0:
print('removal complete!')
else:
raise Exception('something went wrong with rows removal before training')
# **Version 1.3** update: **pre-tokenizer** (a premature tokenization strategy) created, for removing **untrainable rows**
#
# What is this **crazy thing** over here?
#
# >- I created a **provisory** column and **tokenized** it
# >- Why do I need it now? Just for removing rows that are **impossible to train**
# >- After tokenization, if I get an **empty list**, I need to remove that row before training
# +
start = time()
try:
df = df.drop('tokenized', axis=1)
except KeyError:
print('OK')
#inserting a provisory column
df.insert(1, 'tokenized', np.nan)
#tokenizing over the provisory
df['tokenized'] = df.apply(lambda x: udacourse2.fn_tokenize_fast(x['message']), axis=1)
#removing NaN over provisory (if it still exists)
df = df[df['tokenized'].notnull()]
spent = time() - start
print('process time:{:.0f} seconds'.format(spent))
df.head(1)
# -
# Filtering empty lists on `provisory`, found [here](https://stackoverflow.com/questions/42964724/pandas-filter-out-column-values-containing-empty-list)
#
# **Version 1.4** update: could absorb the **pre-tokenized** column as an input for the **Machine Learning Classifier**, saving time!
#
# And another **crazy thing**: I changed my mind about removing the `provisory` tokenized column:
#
# >- why? Just because I already **tokenized** my **X** subdataset, and I will not need to do it later!
# >- and if I do the thing **wisely**, I will accelerate the pipeline process, as I already did the hard job for the **CountVectorizer**
# >- it will also make it easier to **train** diverse Classifiers, as I save a lot of per-classifier processing by doing it **early** in my process!
#
# ---
#
# **Version 1.21** update: to prevent **pipeline pickling issues** when using Pickle, I modified `train_data.py` to make the pre_tokenization preprocessing optional. For more details, see the reference [here](https://rebeccabilbro.github.io/module-main-has-no-attribute/)
# +
empty_tokens = df[df['tokenized'].apply(lambda x: len(x)) == 0].shape[0]
print('found {} rows with no tokens'.format(empty_tokens))
df = df[df['tokenized'].apply(lambda x: len(x)) > 0]
empty_tokens = df[df['tokenized'].apply(lambda x: len(x)) == 0].shape[0]
print('*after removal, found {} rows with no tokens'.format(empty_tokens))
#I will not drop it anymore!
#try:
# df = df.drop('provisory', axis=1)
#except KeyError:
# print('OK')
#Instead, I will drop 'message' column
try:
df = df.drop('message', axis=1)
except KeyError:
print('OK')
print('now I have {} rows to train'.format(df.shape[0]))
df.head(1)
# -
# ---
#
# #### Database data inconsistency fix
#
# **Version 1.5** update: added a **hierarchical structure** on labels, for checking and correcting unfilled classes that already have at least one subclass filled
#
# A **more advanced** issue about these data
#
# A more detailed explanation can be found in the file `ETL Pipeline Preparatione.ipynb`
#
# The fact is:
#
# >- these labels are not as **chaotic** as we initially thought they were
# >- looking with care, we can see a very clear **hierarchical structure** in them
# >- if they are really hierarchical, then we can check them for **data inconsistencies**, using **database fundamentals**
#
# ---
#
# #### Another viewpoint about these labels
#
# If we look at them more carefully, we can find a curious pattern on them
#
# These labels look as if they have a kind of hierarchy behind their shape, as:
#
# First **hierarchical** class:
#
# >- **related**
# >- **request**
# >- **offer**
# >- **direct_report**
#
# And then, **related** seems to have a **Second** hierarchical class
#
# Features to consider for training a classifier in **two layers**, or for **grouping** them all into main groups, as they are clearly **collinear**:
#
# >- **aid_related** $\rightarrow$ groups aid calling (new things to add/ to do **after** the disaster)
# >>- **food**
# >>- **shelter**
# >>- **water**
# >>- **death**
# >>- **refugees**
# >>- **money**
# >>- **security**
# >>- **military**
# >>- **clothing**
# >>- **tools**
# >>- **missing_people**
# >>- **child_alone**
# >>- **search_and_rescue**
# >>- **medical_help**
# >>- **medical_products**
# >>- **aid_centers**
# >>- **other_aid**
# >- **weather_related** $\rightarrow$ groups what was the main **cause** of the disaster
# >>- **earthquake**
# >>- **storm**
# >>- **floods**
# >>- **fire**
# >>- **cold**
# >>- **other_weather**
# >- **infrastructure_related** $\rightarrow$ groups **heavy infra** that was probably damaged during the disaster
# >>- **buildings**
# >>- **transport**
# >>- **hospitals**
# >>- **electricity**
# >>- **shops**
# >>- **other_infrastructure**
# Applying a correction for **database data consistency**:
#
# >- using the function that I already created (see: `ETL Pipeline Preparatione.ipynb`)
# >- the idea is that when at least one element of a **subcategory** is filled for a **category**, the **category** is expected to be filled too (a minimal sketch of this rule is shown right after the corrections below)
# >- this is valid for the main category **related** too!
#
# *This is just one more **advanced step** of **data preparation**, as it involves only a mechanical and automated correction*
#correction for aid_related
df = udacourse2.fn_group_check(dataset=df,
subset='aid',
correct=True,
shrink=False,
shorten=False,
verbose=True)
#correction for weather_related
df = udacourse2.fn_group_check(dataset=df,
subset='wtr',
correct=True,
shrink=False,
shorten=False,
verbose=True)
#correction for infrastrucutre_related
df = udacourse2.fn_group_check(dataset=df,
subset='ifr',
correct=True,
shrink=False,
shorten=False,
verbose=True)
#correction for related(considering that the earlier were already corrected)
df = udacourse2.fn_group_check(dataset=df,
subset='main',
correct=True,
shrink=False,
shorten=False,
verbose=True)
print(df.shape)
df.head(1)
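# For reference, the consistency rule applied above is simple: if any subcategory in a group is 1, the parent category must be 1 as well. A minimal, hypothetical sketch of that rule (the real implementation is `fn_group_check` in `udacourse2.py`):
# +
#minimal sketch of the hierarchical consistency rule (illustrative only - the real code is fn_group_check)
def sketch_enforce_parent(dataset, parent, children):
    '''set the parent label to 1 wherever at least one child label is 1'''
    mask = dataset[children].any(axis=1)
    dataset.loc[mask, parent] = 1
    return dataset

#hypothetical usage over part of the weather group (not executed, as the correction was already applied)
#df = sketch_enforce_parent(df, 'weather_related', ['earthquake', 'storm', 'floods', 'fire', 'cold'])
# -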
# ## II. Break the data
#
# Break the dataset into the **training columns** and **labels** (if it have **multilabels**)
# X is the **Training Text Column**:
#
# - if I observe the potential training data really well, I could use the `genre` column as training data too!
#
# - or I could also use the `related`, `request`, `offer` columns for training the `aid_related` data
#
# *A discussion of how much these **Label** columns are **hierarchically defined** is made later in this notebook*
#
# ---
#
# For this moment, I am using only `message` as training data
X = df['tokenized']
X.head(1)
# Y is constituted by the **Classification Labels**
#
# **Version 1.6** update: removed the `related` column from the Labels dataset. Why? Because when I check the statistics after training the **Machine Learning Classifier**, it always turns out to be `1`. So sometimes this column (as with Adaboost) causes problems when training our Classifier, while adding nothing to the model
#
# >- was: `y = df[df.columns[4:]]`
# >- now: `y = df[df.columns[5:]]`
#
# **Version 1.7** update: removed from training the label columns that contain **only zeroes**. Why? Just because they are **impossible to train** on our Classifier, so they add nothing to the model
#
# ---
#
# **Version 1.19** update: **no longer** removing any column from the Labels dataset. To meet the project approval criteria, it is required to train **exactly** 36 labels. I know that some of them cannot be trained, or train very poorly with the data that was provided. But this is only about obeying the **requisites** for this project.
# +
y = df[df.columns[4:]]
#y = df[df.columns[5:]] #uncheck this if you want to eliminate the related column
#uncheck this if you want to eliminate untrainable columns (everything is zero)
#remove_lst = []
#for column in y.columns:
# col = y[column]
# if (col == 0).all():
# print('*{} -> only zeroes training column!'.format(column))
# remove_lst.append(column)
# else:
# #print('*{} -> column OK'.format(column))
# pass
#print(remove_lst)
#y = y.drop(remove_lst, axis=1)
verbose=True
if y.shape[1] == 36:
if verbose:
print('y dataset has 36 labels')
else:
raise Exception('something went wrong, dataset has {} labels instead of 36'.format(y.shape[1]))
y.head(1)
# -
# ## III. Split the data
#
# Into **Train** and **Test** subdatasets
#
# >- let's start with **25%** of test data (see `test_size` below)
# >- I was initially not using **random_state** settings (and why **42**? I personally think it is a reference to the book **The Hitchhiker's Guide to the Galaxy**, by <NAME>!)
#
# **Version 1.8** update: now I am using **random_state** parameter, so I can compare exactly the same thing, when using randomized processes, for ensuring the same results for each function call
#
# ---
#
# **Future** possible updates:
#
# >- I can test/train using other values for **test_size** and see how much it interferes
# >- I can try to do **bootstrap** and see if I can plot a good **normalization** curve for it!
#
# **NEW Future** possible update:
#
# >- I could use **Cross Validation** in order to use all my data for training!
# >- **Warning**: there are some papers saying to take care when using **Cross Validation** for Model Training. The reason is that it may allow **data leakage** from your **train** to your **test** dataset, masking the real power of your model!
# >- so I need to **study more** about that before trying to implement it in Python (a leakage-safe sketch is shown right after the split below)
# >- the discussion about "data leakage" when using cross validation strategies when **fitting** data is [here](https://stackoverflow.com/questions/56129726/fitting-model-when-using-cross-validation-to-evaluate-performance)
#Split makes randomization, so random_state parameter was set
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.25,
random_state=42)
# And it looks OK:
X_train.shape[0] + X_test.shape[0]
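# As a note on the **Cross Validation** idea above, a leakage-safe sketch would score the **whole pipeline**, so that vectorization is re-fitted inside each fold (hypothetical, not used in this project; the fast Naïve Bayes classifier is used here just for illustration):
# +
#leakage-safe sketch of Cross Validation: the full pipeline is re-fitted inside each fold
from sklearn.model_selection import cross_val_score

def dummy(doc):
    return doc

cv_pipe = Pipeline([('vect', CountVectorizer(tokenizer=dummy, preprocessor=dummy)),
                    ('tfidf', TfidfTransformer()),
                    ('clf', MultiOutputClassifier(MultinomialNB()))])
#uncomment to actually run the 5-fold evaluation (it takes a while)
#scores = cross_val_score(cv_pipe, X, y, cv=5, scoring='f1_weighted')
#print('5-fold weighted F1: {:.3f} +/- {:.3f}'.format(scores.mean(), scores.std()))
# -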
# ## IV. Choose your first Classifier
#
# - and build a **Pipeline** for it
#
# Each Pipeline is a Python object that exposes **methods**, such as **fit()**
#
# ---
#
# What **Classifier** to choose?
#
# - **Towards Data Science** gives us some tips [here](https://towardsdatascience.com/machine-learning-nlp-text-classification-using-scikit-learn-python-and-nltk-c52b92a7c73a)
#
# ---
#
# Start with a **Naïve Bayes** (NB)
#
# `clf = MultinomialNB().fit(X_train_tfidf, twenty_train.target)`
#
# In a Pipeline way (pipeline documentation is [here](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html)):
#
# >- I had some issues with `CountVectorizer`, but could solve them using Stack Overflow [here](https://stackoverflow.com/questions/32674380/countvectorizer-vocabulary-wasnt-fitted)
# >- should I use `CountVectorizer(tokenizer=udacourse2.fn_tokenize_fast)`?... but I will **not**!
# >- why? Just because I already proceeded with **tokenization** in an earlier step
# >- so, how to bypass this hellish `tokenizer=...` parameter?
# >- I found a clever solution [here](https://stackoverflow.com/questions/35867484/pass-tokens-to-countvectorizer)
# >- so, I prepared a **dummy** function to bypass the tokenizer in **CountVectorizer**
#
# First I tried to set the Classifier as **MultinomialNB()**, and it crashed:
#
# >- only **one** Label to be trained was expected, and there were 36 Labels!
# >- reading the SKlearn documentation, it became clear that it is necessary (if your Classifier algorithm was not originally built for **multilabel** output) to run it **n** times, once for each label
# >- so it is necessary to include the `MultiOutputClassifier()` wrapper in our pipeline
#
# *And... it looks pretty **fast** to train, doesn't it? What is the secret? We are **bypassing** the tokenizer and preprocessor, as we **already applied** them to the dataset!*
#
# *Another thing, we are not using the **whole** dataset... it's just about a little **issue** we have, as there are a lot of **missing labels** in the dataset! And for me, it would **distort** our training! (later I will compare the results with training on the **raw** dataset)*
#
# **Naïve Bayes** is known as a very **fast** method:
#
# >- but it is also known as being not so **accurate**
# >- and it has so **few** parameters for later refinement
#
# I could reach a Model Accuracy of **92.2%**, after **0.58** seconds fitting the Classifier
# +
start = time()
def dummy(doc):
return doc
#Naïve Bayes classifier pipeline - no randomization involved
pipeline_mbnb = Pipeline([('vect', CountVectorizer(tokenizer=dummy, preprocessor=dummy)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(MultinomialNB()))])
#('clf', MultinomialNB())]) #<-my terrible mistake!
#remembering:
#CountVectorizer -> makes the count for tokenized vectors
#TfidfTransformer -> makes the weight "normalization" for word occurrences
#MultinomialNB -> is my Classifier
#fit text_clf (our first Classifier model)
pipeline_mbnb.fit(X_train, y_train)
spent = time() - start
print('NAÏVE BAYES - process time: {:.2f} seconds'.format(spent))
# -
# If I want, I can see the parameters for my **Pipeline**, using this command
# +
#pipeline_mbnb.get_params()
# -
# ## V. Run metrics for it
# Predicting using **Naïve Bayes** Classifier
#
# And I got this **weird** Error Message:
#
# "**UndefinedMetricWarning:**"
#
# >- "Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples"
# >- "Use `zero_division` parameter to control this behavior"
#
# And searching, I found this explanation [here](https://stackoverflow.com/questions/43162506/undefinedmetricwarning-f-score-is-ill-defined-and-being-set-to-0-0-in-labels-wi)
#
# >- it is not a **weird error** at all. Some labels couldn't be predicted when running the Classifier
# >- so the report doesn't know how to handle them
#
# "What you can do, is decide that you are not interested in the scores of labels that were not predicted, and then explicitly specify the labels you are interested in (which are labels that were predicted at least once):"
#
# `metrics.f1_score(y_test, y_pred, average='weighted', labels=np.unique(y_pred))`
#
# #### Dealing with this issue
#
# **First**, I altered my function `fn_plot_scores` so it does not allow comparisons over an empty (**not trained**) column in `y_pred`
#
# How to check whether all predicted values in a column are **zeroes** is explained [here](https://stackoverflow.com/questions/48570797/check-if-pandas-column-contains-all-zeros)
#
# And I was using a **general** accuracy calculation in my function. The problem is: **zeroes** predicted over **zeroes** expected count as an accuracy of **1**, distorting my actual Accuracy towards a better (**unreal**) higher value:
#
# >- so, for general model Accuracy, I cannot use `accuracy = (y_pred == y_test.values).mean()`
# >- I use instead `f1_score(y_test, y_pred, average='weighted', labels=np.unique(y_pred))`
#
# **Version 1.9** updated: created my own customized function for showing metrics
#
# **Version 1.15** updated: improved my customized function for other metrics
#
# >- I was using the mean F1 Score as "Model Precision" and that seems a bit **silly**, as there were other metrics
# >- I could find better material in the SKlearn documentation [here](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html)
# >- for example, as we are using binary labels, and the most important one is the "1" label, we can set the parameters `average='binary'` and `pos_label=1`
# >- another thing, **Precision** and **Recall** are more **effective** for Machine Learning than **F1**
# >- about ill-defined parameters, I found some documentation at [Udacity](https://knowledge.udacity.com/questions/314220)
#
# **Future improvement**
#
# >- there are better metrics for **multilabel classificication** [here](https://medium.com/analytics-vidhya/metrics-for-multi-label-classification-49cc5aeba1c3#id_token=<KEY>)
# >- we could use **Precision at k** `P@k`, **Avg precision at k** `AP@k`, **Mean avg precision at k** `MAP@k` and **Sampled F1 Score** `F1 Samples`
#
# ---
#
# **Version 1.17** update: for **Naïve Bayes**, updated to new, **more realistic** metrics based on the **top 10** labels:
#
# >- Model Accuracy now is **31.2**%
# >- Precision now is **85.9**%
# >- Recall now is **26.4**%
#
# ---
#
# **Version 1.18** update: for **Naïve Bayes** letting the tokenizer take the same word more than once:
#
# >- Model Accuracy now is **31.5**%
# >- Precision now is **86.3**%
# >- Recall now is **26.6**%
y_pred = pipeline_mbnb.predict(X_test)
udacourse2.fn_scores_report2(y_test,
y_pred,
best_10=True,
verbose=True)
#udacourse2.fn_scores_report(y_test, y_pred)
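# For reference, a minimal sketch of the zero_division-safe, per-label metrics that my custom report is built around (assuming scikit-learn >= 0.22 for the `zero_division` parameter; the label names below are just a few of the top ones):
# +
#per-label precision/recall with zero_division=0, which silences the ill-defined metric warning
from sklearn.metrics import precision_score, recall_score

for label in ['aid_related', 'weather_related', 'food']:
    idx = y_test.columns.get_loc(label) #column position of this label in the prediction matrix
    prec = precision_score(y_test[label], y_pred[:, idx], zero_division=0)
    rec = recall_score(y_test[label], y_pred[:, idx], zero_division=0)
    print('{}: precision {:.3f}, recall {:.3f}'.format(label, prec, rec))
# -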
# And if I want a **complete report**, over the 36 y-labels:
#
# - just set `best_10=False`
y_pred = pipeline_mbnb.predict(X_test)
udacourse2.fn_scores_report2(y_test,
y_pred,
best_10=False,
verbose=True)
# Model Accuracy is distorted by **false fitting** (zeroes over zeroes)
#
# Manually, I could estimate the true value as being near **82%**
#
# ---
#
# **Version 1.17** update: now this consideration is useless, as metrics were reformed!
# +
real_f1 = [.78, .86, .83, .85, .80, .83, .81, .91, .86, .69, .83]
corr_precision = statistics.mean(real_f1)
print('F1 corrected Model Accuracy: {:.2f} ({:.0f}%)'.format(corr_precision, corr_precision*100.))
# -
# #### Criticism of my Classifier's performance
#
# I know what you are thinking: "Uh, there is something **wrong** with the Accuracy of this guy"
#
# So, as you can see: **92.2%** is too high for a **Naïve Bayes Classifier**!
#
# There are some explanations here:
#
# >- if you read it with care, you will find this **weird** label `related`. It is **positive** for every row of my dataset, so it distorts the average towards a **higher** value
# >- if you look at each **weighted avg**, you will find some clearly **bad** values, such as **68%** for **aid_related** (if you think about it, that means the model guesses well for this label in roughly **2/3** of the cases... a really **bad** performance)
#
# *Update 1: when I removed the `related` column, my **Model Accuracy** fell to **56.1%**. Normally my labels hold something like a **75-78%** f1-score. Now I think these **untrainable columns** are dragging my average Accuracy down!*
#
# ---
#
# But there is another **criticism** about this data.
#
# I am an **Engineer** by profession. And I have worked for almost **19** years in a **hydrology** datacenter for the Brazilian Government. So, in some cases, you look at some data and start thinking: "this data is not what it seems".
#
# And the main problem with this data is:
#
# >- it is a **mistake** to think that all we need to do with it is to train a **Supervised Learning** machine!
# >- if you look with care, this is not about **Supervised Learning**, it is an actual **Semi-Supervised Learning** problem. Why?
# >- just consider that there were **zillions** of Twitter messages about catastrophes all around the world. And then, when a message was not originally in English, they translated it. And then someone manually **labeled** each of these catastrophe reports. And a **lot** of them remained with **no classification**
# >- if I just interpret it as a **Supervised Learning** challenge, I will feed my Classifier with a lot of **false negatives**. And my Machine Learning Model will learn how to **leave blank** a lot of these messages, as it was trained on my **raw** data!
#
# So in the **preprocessing** step, I avoided **unlabelled data**, filtering out of training every row that does not contain any label. They were clearly **neglected** during the manual labeling!
#
#
#
# ## VI. Try other Classifiers
#
# - I will try some Classifiers based on a **hierarchical structure**:
#
# >- why **hierarchical structure** for words? Just because I think we do it **naturally** in our brain
# >- when science mimics nature, I personally think things go better. So, let's try it!
#
# First of them, **Random Forest** Classifier
#
# >- as **RFC** is a **single-label** Classifier, we need to call it **n** times, once for each label to be classified
# >- so, we need to call it indirectly, using the **Multi-Output** Classifier tool
# >- it took **693.73 seconds** (roughly 11 minutes and 34 seconds) to complete the task (not so bad!)
# >- I tried to configure a **GridSearch**, just to set the number of processors to `-1` (meaning, the **maximum** number)
#
# Accuracy was near **93%** before removing the `related` label. Now it stands at **93.8%**. So, it doesn't matter much!
#
# **Version 1.10** update: prepared other Machine Learning Classifiers for training the data
#
# ---
#
# **Version 1.17** update: for **Random Forest**, updated to new, **more realistic** metrics based on the **top 10** labels:
#
# >- Model Accuracy now is **66.5**%
# >- Precision now is **69.8**%
# >- Recall now is **70.1**%
#
# ---
#
# **Version 1.18** for **Random Forest** letting the tokenizer take the same word more than once:
#
# >- Model Accuracy now is **66.4**%
# >- Precision now is **79.8**%
# >- Recall now is **59.7**%
#
# ---
#
# **Version 1.19** for **Random Forest** :
#
# >- Model Accuracy now is **66.3**%
# >- Precision now is **79.5**%
# >- Recall now is **59.7**%
# Only uncomment if you really want to use this code, it takes too much time to process!
# +
#start = time()
#def dummy(doc):
# return doc
#Random Forest makes randomization, so random_state parameter was set
#pipeline_rafo = Pipeline([('vect', CountVectorizer(tokenizer=dummy, preprocessor=dummy)),
# ('tfidf', TfidfTransformer()),
# ('clf', MultiOutputClassifier(RandomForestClassifier(random_state=42)))])
#pipeline_rafo.fit(X_train, y_train)
#spent = time() - start
#s_min = spent // 60
#print('RANDOM FOREST - process time: {:.0f} minutes, {:.2f} seconds ({:.2f}s)'\
# .format(s_min, spent-(s_min*60), spent))
# -
y_pred = pipeline_rafo.predict(X_test)
udacourse2.fn_scores_report2(y_test,
y_pred,
best_10=True,
verbose=True)
#udacourse2.fn_scores_report(y_test, y_pred)
# Another tree like Classifier is **Adaboost**:
#
# >- they say Adaboost is especially good at **differentiating** positives and negatives
# >- it took **106.16 seconds** (about **1** minute and **46** seconds) to complete the task... not so bad... (as AdaBoost doesn't use full **trees**, but **stumps**, to do its job)
#
# Accuracy was near **91%**. After removing the `related` label:
#
# >- it rose to **93.6%**. As Adaboost is based on **stumps**, a bad label perhaps distorts the model
# >- training time dropped to **71.57** seconds, roughly a 30% time reduction
#
# *Adaboost seems to be really **fast** when compared to Random Forest. And without losing too much in terms of Model Accuracy...*
#
# ---
#
# **Version 1.17** update: for **Adaboost**, updated to new, **more realistic** metrics based on the **top 10** labels:
#
# >- Model Accuracy now is **66.3**%
# >- Precision now is **77.7**%
# >- Recall now is **58.7**%
#
# ---
#
# **Version 1.18** update: for **Adaboost** letting the tokenizer take the same word more than once:
#
# >- Model Accuracy now is **65.4**%
# >- Precision now is **77.3**%
# >- Recall now is **57.8**%
#
# ---
#
# **Version 1.19** update: **Adaboost** was not affected (unlike **Linear SVM**) when I inserted two really problematic labels for training: `related` (everything is labelled as **1**) and `missing_child` (everything is labelled as **0**)
#
# >- Model Accuracy now is **65.2**%
# >- Precision now is **77.5**%
# >- Recall now is **57.8**%
#
# **Version 1.20** update: after running **GridSearch** on Adaboost, I could make some adjustments to the parameters:
#
# >- learning_rate $\rightarrow$ was **1.0**, now is **0.5**
# >- n_estimators $\rightarrow$ was **50**, now is **80**
#
# Train time was **100.84** seconds and now is **159.48** seconds
#
# And my model performance now is:
#
# >- Model Accuracy now is **64.0**%
# >- Precision now is **81.2**%
# >- Recall now is **55.1**%
#
# *So, with the new parameters, **precision** increased by nearly 4%, but **recall** decreased by nearly 3%. Training time increased by 60%. So I don't think these new parameters are really worth it*
#
# Another thing: I tried `algorithm='SAMME'`. And why `SAMME`? Just because we have a kind of **discrete** problem to solve, and this option is meant for **discrete boosting**
#
# >- Model Accuracy now is **49.3**%
# >- Precision now is **80.6**%
# >- Recall now is **38.1**%
#
# *Not a good job, let´s keep the original algorithm!*
#
# ---
#
# **Version 1.21** update: to prevent **pipeline pickling issues** when using Pickle, I modified `train_data` to make the preprocessor optional. For more details, see the reference [here](https://rebeccabilbro.github.io/module-main-has-no-attribute/)
# +
start = time()
def dummy(doc):
return doc
#CountVectorizer(tokenizer=udacourse2.fn_tokenize_fast)
#Adaboost makes randomization, so random_state parameter was set
pipeline_adab = Pipeline([('vect', CountVectorizer(tokenizer=dummy, preprocessor=dummy)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(AdaBoostClassifier(learning_rate=1.0,
n_estimators=50,
algorithm='SAMME.R',
random_state=42)))])
pipeline_adab.fit(X_train, y_train)
spent = time() - start
print('ADABOOST - process time: {:.2f} seconds'.format(spent))
# -
y_pred = pipeline_adab.predict(X_test)
udacourse2.fn_scores_report2(y_test,
y_pred,
best_10=True,
verbose=True)
#udacourse2.fn_scores_report(y_test, y_pred)
# ---
#
# #### Falling in a trap when choosing another Classifier
#
# Then I tried a **Stochastic Gradient Descent** (SGD) [here](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html)
#
# _"Linear classifiers (SVM, logistic regression, etc.) with SGD training"_
#
# It can work with a **Support Vector Machine** (SVM), which is a fancy way of defining a good frontier
#
#
# `clf = SGDClassifier()` with some parameters
#
# >- `learning_rate='optimal'` $\rightarrow$ a **decreasing strength schedule** used for updating the gradient of the loss at each sample
# >- `loss='hinge'` $\rightarrow$ a **Linear SVM** as the fitting model (works with data represented as dense or sparse arrays of features)
# >- `penalty=['l2', 'l1', 'elasticnet']` $\rightarrow$ the **regularizer** shrinks model parameters towards the zero vector, using the l2 norm, the l1 norm, or a combination of both (**Elastic Net**)
# >- `alpha=[1e-5, 1e-4, 1e-3]` $\rightarrow$ the regularization constant: the higher the value, the **stronger** the regularization (also used to compute the **Learning Rate** when learning_rate is set to 'optimal')
# >- `n_iter=[1, 5, 10]` $\rightarrow$ number of passes (**epochs**) over the Training Data. It only impacts the behavior of the **fit** method, not the partial_fit method
# >- `random_state=42` $\rightarrow$ if you want to replicate exactly the same output each time you retrain your machine
#
# *Observe that this is a kind of a lecture over the text at SkLearn website for this Classifier*
#
# ---
#
# And **SGDC** didn´t work! It gave me a **ValueError: y should be a 1d array, got an array instead**. So, something went wrong:
#
# Searching for the cause of the problem, I found this explanation [here](https://stackoverflow.com/questions/20335853/scikit-multilabel-classification-valueerror-bad-input-shape)
#
# *"No, SGDClassifier does not do **multilabel classification** (what I need!) -- it does **multiclass classification**, which is a different problem, although both are solved using a one-vs-all problem reduction"*
#
# *(we use Multiclass Classification when the possible classifications are **mutually exclusive**. For example, I have a picture with a kind of fruit, and it could be classified as a **banana**, or a **pear**, or even an **apple**. Clearly that is not our case!)*
#
# *Then, neither **SGD** nor OneVsRestClassifier.fit will accept a **sparse matrix** (is what I have!) for y*
#
# *- SGD wants an **array of labels** (is what I have!), as you've already found out*
#
# *- OneVsRestClassifier wants, for multilabel purposes, a list of lists of labels*
#
# *Observe that this is a kind of a lecture over the explanatory text that I got at SKLearn website for SGDC for Multilabel*
#
# ---
#
# There is a good explanation about **Multiclass** and **Multilabel** Classifiers [here](https://scikit-learn.org/stable/modules/multiclass.html)
#
# Don´t try to run this code:
# +
#start = time()
#def dummy(doc):
# return doc
#random_state=42 #<-just to remember!
#pipeline_sgrd = Pipeline([('vect', CountVectorizer(tokenizer=dummy, preprocessor=dummy)),
# ('tfidf', TfidfTransformer()),
# ('clf', SGDClassifier(loss='hinge',
# penalty='l2',
# alpha=1e-3))])
#fit_sgrd = pipeline_sgrd.fit(X_train, y_train)
#spent = time() - start
#print('STOCHASTIC GRADIENT DESCENT - process time:{:.2f} seconds'.format(spent))
# -
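# For completeness, a hedged sketch (not attempted in this project) of how `SGDClassifier` could still be used for this multilabel problem: following the same One-vs-Rest trick used later for the Linear SVM, one binary linear model is fitted per label, which sidesteps the "y should be a 1d array" error:
# +
#sketch only (not attempted here): SGDClassifier wrapped in OneVsRestClassifier
def dummy(doc):
    return doc

pipeline_sgd2 = Pipeline([('vect', CountVectorizer(tokenizer=dummy, preprocessor=dummy)),
                          ('tfidf', TfidfTransformer()),
                          ('clf', OneVsRestClassifier(SGDClassifier(loss='hinge',
                                                                    penalty='l2',
                                                                    alpha=1e-3,
                                                                    random_state=42)))])
#pipeline_sgd2.fit(X_train, y_train) #uncomment to actually train this sketch
# -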
# Let's try **K-Neighbors Classifier**
#
# **First** try, `n_neighbors=3`:
#
# >- model Accuracy was **91.8%**... not so bad!
# >- and... why only **3** neighbors? This parameter is quite **arbitrary** in our case... it could be 2 or 5... and as the number of neighbors we can rely on matters, tuning it could make our classifier **better**... so why not try it, using **GridSearch**?
#
# **Second** try, `n_neighbors=7` and `p=1` (using **GridSearch**, explanation below to tune it for a better result):
#
# >- it took **0.74** seconds to **fit** the Classifier
# >- the slowest part was to **predict**, as **5** minutes and **27** seconds!
# >- it gave us **92.0%** of model Accuracy... and a lot of **non-fitting** labels!
# >- so, it was not a good idea to use the new parameters, the **original ones** are better!
#
# Some reflections about models, **GridSearch** and best parameters:
#
# >- sometimes a **slight** improvement is not worth the computational price
# >- another thing to reflect on: why did I start with only **3** neighbors? Just because Twitter messages are quite **short**. When tokenized, the number of **tokens** normally doesn't exceed **7**!
# >- so, applying excessive **resolution** to poor data is normally not a good idea
#
# **Third** try, `n_neighbors=3` and `p=1`
#
# >- I achieved **91.3%** accuracy, without using so much computational power!
# >- just tuning the **power** parameter a bit gave me a slightly **better** result
# >- training time is **0.79** seconds and prediction takes **5** minutes and **27** seconds
#
# **Version 1.11** update: preparation of k-Neighbors Classifier for training
#
# *k-Neighbors does not seem to fit this kind of problem so well!*
#
# ---
#
# **Version 1.17** update: for **k-Nearest**, updated to new, **more realistic** metrics based on the **top 10** labels:
#
# >- Model Accuracy now is **39.1**%
# >- Precision now is **60.1**%
# >- Recall now is **32.6**%
#
# ---
#
# **Version 1.18** for **k-Nearest** letting the tokenizer take the same word more than once:
#
# >- Model Accuracy now is **38.8**%
# >- Precision now is **60.5**%
# >- Recall now is **32.2**%
# +
start = time()
def dummy(doc):
return doc
#k-Neighbors don´t use randomization
pipeline_knbr = Pipeline([('vect', CountVectorizer(tokenizer=dummy, preprocessor=dummy)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(KNeighborsClassifier(n_neighbors=3, p=1)))])
pipeline_knbr.fit(X_train, y_train)
spent = time() - start
print('K NEIGHBORS CLASSIFIER - process time: {:.2f} seconds'.format(spent))
# +
start = time()
y_pred = pipeline_knbr.predict(X_test)
udacourse2.fn_scores_report2(y_test,
y_pred,
best_10=True,
verbose=True)
#udacourse2.fn_scores_report(y_test, y_pred)
spent = time() - start
print('process time: {:.2f} seconds'.format(spent))
# -
# Linear Support Vector Machine, fed by TfidfVectorizer:
#
# >- now, the idea is to train another type of machine, a **Support Vector Machine** (SVM)
# >- SVM uses another philosophy, as you create a coordinate space for **vectors**
# >- the coordinate system can be **Cartesian planes** or **polar combinations**
# >- the idea is to separate the data using vectors as **separation elements**
# >- in this case, we use only **linear** elements to make the separation
#
# Why **Linear**?
#
# >- the **computational cost** of linear entities on **discrete** computers is really low (if we were using **valve** computers, we could start exploring **non-linear** models with better profit)
# >- now we need **fit** and **transform** operations on our vector provider
# >- it is a **fast** machine (**18.84** seconds), with an amazing Model Accuracy of a bit less than **93%** (one of the features could not be trained!)
# >- when the **label consistencies** were corrected, based on our **hierarchical structure**, Model Accuracy rose a bit, reaching **93.6%**!
#
# **Version 1.12** update: preparation of a completely different kind of **Machine Learning Classifier** (Support Vector Machine Family)
#
# ---
#
# **Version 1.17** update: for **Linear Support Vector Machine**, updated to new, **more realistic** metrics based on the **top 10** labels:
#
# >- Model Accuracy now is **70.6**%
# >- Precision now is **70.8**%
# >- Recall now is **71.1**%
#
# ---
#
# **Version 1.18** update: for **Linear Support Vector Machine** letting the tokenizer take the same word more than once:
#
# >- Model Accuracy now is **70.5**%
# >- Precision now is **71.9**%
# >- Recall now is **69.7**%
#
# **Version 1.19** update: the metrics for the Linear Support Vector Machine **deteriorated** a lot when I inserted two really problematic labels for training: `related` (everything is labelled as **1**) and `missing_child` (everything is labelled as **0**)
#
# *I just re-inserted these two labels in order to meet one of the project approval requisites at Udacity, which says to "train all the 36 columns". I am a bit upset about it, as it pushed the performance of my project down so much!*
#
# >- Model Accuracy now is **61.2**%
# >- Precision now is **80.4**%
# >- Recall now is **50.3**%
#
# **Version 1.19** update: I really **tried** to avoid both training warnings, by testing for and eliminating **untrainable columns** from my labels. But just to follow the Udacity requisites for this project, I had to deactivate those lines of code. So now we get these warnings:
#
# - `UserWarning: Label 0 is present in all training example` (this is for `related` column)
#
# - `UserWarning: Label not 9 is present in all training examples` (this is for `missing_child` column)
# +
start = time()
def dummy(doc):
return doc
feats = TfidfVectorizer(analyzer='word',
tokenizer=dummy,
preprocessor=dummy,
token_pattern=None,
ngram_range=(1, 3))
classif = OneVsRestClassifier(LinearSVC(C=2.,
random_state=42))
#don't use this line, I thought it was necessary to do the separation!
#feats = feats.fit_transform(X_train)
pipeline_lnsv = Pipeline([('vect', feats),
('clf', classif)])
pipeline_lnsv.fit(X_train, y_train)
spent = time() - start
print('LINEAR SUPPORT VECTOR MACHINE - process time:{:.2f} seconds'.format(spent))
# -
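# A quick usage sketch (an illustration with a made-up message): because the vectorizer was fed pre-tokenized lists, a new raw message has to pass through the tokenizer before calling `predict`:
# +
#usage sketch: classify one new, raw message with the fitted Linear SVM pipeline
new_message = "We need water and food in the shelter" #hypothetical example text
new_tokens = udacourse2.fn_tokenize_fast(new_message, verbose=False)
pred = pipeline_lnsv.predict([new_tokens]) #one row in -> one vector of 36 label flags out
predicted_labels = [label for label, flag in zip(y.columns, pred[0]) if flag == 1]
predicted_labels
# -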
# If you experience *NotFittedError: Vocabulary not fitted or provided*, see the explanation
# [here](https://stackoverflow.com/questions/60472925/python-scikit-svm-vocabulary-not-fitted-or-provided)
# ---
#
# #### Test Area (for Version 1.16 improvement)
#
# I am trying to create new **fancy** metrics for scoring my Classifiers
#
# >- I was taking only the **General Average F1 Score** as a metric, and that seemed poorly detailed
#
#
# These are the most classified labels, according to my `fn_labels_report` function:
#
# 1. related:19928 (75.9%)
# 2. aid_related:10903 (41.5%)
# 3. weather_related:7304 (27.8%)
# 4. direct_report:5080 (19.4%)
# 5. request:4480 (17.1%)
# 6. other_aid:3448 (13.1%)
# 7. food:2930 (11.2%)
# 8. earthquake:2455 (9.4%)
# 9. storm:2448 (9.3%)
# 10. shelter:2319 (8.8%)
# 11. floods:2158 (8.2%)
#
# When I remove **related** (it classifies as **"1"** for **all** of my dataset once the rows with **no** classification at all are removed - so I cannot **train** on it), I get these new top columns:
#
# 1. aid_related
# 2. weather_related
# 3. direct_report
# 4. request
# 5. other_aid
# 6. food
# 7. earthquake
# 8. storm
# 9. shelter
# 10. floods
#
# Turning them into a list:
#
# `top_labels = ['aid_related', 'weather_related', 'direct_report', 'request', 'other_aid', 'food', 'earthquake', 'storm', 'shelter', 'floods']`
#
# Retrieve their position by name [here](https://stackoverflow.com/questions/13021654/get-column-index-from-column-name-in-python-pandas):
#
# `y_test.columns.get_loc("offer")`
#
# **Version 1.16** update: new `fn_scores_report2` function created
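# A tiny sketch of that lookup, mapping the top-10 label names to their column positions in `y_test` (presumably what the `best_10` option of `fn_scores_report2` relies on):
# +
#map the top-10 label names to their column positions in the prediction matrix
top_labels = ['aid_related', 'weather_related', 'direct_report', 'request', 'other_aid',
              'food', 'earthquake', 'storm', 'shelter', 'floods']
top_positions = {label: y_test.columns.get_loc(label) for label in top_labels}
top_positions
# -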
y_pred = pipeline_lnsv.predict(X_test)
udacourse2.fn_scores_report2(y_test,
y_pred,
best_10=True,
verbose=True)
# Don´t use this function! (deprecated)
# +
#y_pred = pipeline_lnsv.predict(X_test)
#udacourse2.fn_scores_report(y_test, y_pred)
# -
# ## VII. Make a Fine Tuning effort over Classifiers
#
# #### First attempt: Stochastic Gradient Descent
#
# **Grid Search**
#
# `parameters = {'vect__ngram_range': [(1, 1), (1, 2)],`
# `'tfidf__use_idf': (True, False),`
# `'clf__alpha': (1e-2, 1e-3)}`
#
# - use **multiple cores** to process the task
#
# `gs_clf = GridSearchCV(text_clf, parameters, n_jobs=-1)`
#
# `gs_clf = gs_clf.fit(twenty_train.data, twenty_train.target)`
#
# - see the **mean score** of the parameters
#
# `gs_clf.best_score_`
#
# `gs_clf.best_params_`
#
# *Not implemented, because our SGD effort was abandoned. Only some sketches from my GridSearch studies for SGD remain here! (source: SKlearn parameters + documentation [here](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html))*
#
# #### Second attempt: k-Neighbors
#
# >- we can see tunable parameters using the command `Class_k.get_params()`
# >- I tried to tune up for `n_neighbors` and for `p`
# >- it took **74** minutes and **15** seconds to run (so, don´t try it!)
# >- best estimator was **n_neighbors=7** and **p=1** $\rightarrow$ "Power parameter for the Minkowski metric. When p = 1, this is equivalent to using manhattan_distance (l1)" (from SkLearn documentation)
# **Version 1.13** update: implemented **Grid Search** for some sellected Classifiers
#
# **Future implementation**: test other parameters for a better fine-tuning (I didn't do an **exhaustive fine-tuning**!)
#
# Only uncomment if you really want to use this code, it takes too much time to process!
# +
#start = time()
#def dummy(doc):
# return doc
#k-Neighbors don´t use randomization
#Vect_k = CountVectorizer(tokenizer=dummy, preprocessor=dummy)
#Transf_k = TfidfTransformer()
#Class_k = MultiOutputClassifier(KNeighborsClassifier())
#pipeline_knbr = Pipeline([('vect', Vect_k),
# ('tfidf', Transf_k),
# ('clf', Class_k)])
#param_dict = {'clf__estimator__n_neighbors': [3,5,7],
# 'clf__estimator__p': [1,2]}
#estimator = GridSearchCV(estimator=pipeline_knbr,
# param_grid=param_dict,
# n_jobs=-1) #, scoring='roc_auc')
#estimator.fit(X_train, y_train)
#spent = time() - start
#s_min = spent // 60
#print('K NEIGHBORS GRID SEARCH - process time: {:.0f} minutes, {:.2f} seconds ({:.2f}s)'\
# .format(s_min, spent-(s_min*60), spent))
# +
#fit_knbr.best_estimator_
# -
# **Version 1.10** update: Grid Search on Adaboost. As we chose this Classifier as the main classifier for our model, let's run a **GridSearch** on it too:
#
# +
#start = time()
#def dummy(doc):
# return doc
#Adaboost makes randomization, so random_state parameter was set
#vect_a = CountVectorizer(tokenizer=dummy, preprocessor=dummy)
#transf_a = TfidfTransformer()
#class_a = MultiOutputClassifier(AdaBoostClassifier(random_state=42))
#pipeline_adab = Pipeline([('vect', vect_a),
# ('tfidf', transf_a),
# ('clf', class_a)])
#param_dict = {'clf__estimator__learning_rate': [0.5, 1.0],
# 'clf__estimator__n_estimators': [50, 80]}
#param_dict = {'clf__estimator__algorithm': ['SAMME.R', 'SAMME'],
# 'clf__estimator__learning_rate': [0.5, 1.0, 2.0],
# 'clf__estimator__n_estimators': [20, 50, 80]}
#estimator = GridSearchCV(estimator=pipeline_adab,
# param_grid=param_dict,
# n_jobs=-1)
#pipeline_adab.fit(X_train, y_train)
#estimator.fit(X_train, y_train)
#spent = time() - start
#s_min = spent // 60
#print('ADABOOST GRID SEARCH - process time: {:.0f} minutes, {:.2f} seconds ({:.2f}s)'\
# .format(s_min, spent-(s_min*60), spent))
# -
# For the estimator we could try: (Adaboost documentation [here](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostClassifier.html))
#
# >- 'estimator__base_estimator': None $\rightarrow$ don't change it, Adaboost's default is a **Decision Tree** with depth=1!
# >- 'estimator__algorithm': 'SAMME.R' $\rightarrow$ 'SAMME' is **discrete boosting** (and for our problem, it will probably be better!)
# >- 'estimator__learning_rate': 1.0 $\rightarrow$ n_estimators vs learning_rate... it is a **tradeoff**...
# >- 'estimator__n_estimators': 50 $\rightarrow$ we can play with **both**!
# *Don't run it, it's only to get the parameters for Adaboost!*
# +
#class_a.get_params()
# -
# It took **72** minutes and **56** seconds to run on my machine, and gave me these **best parameters**:
#
# >- learning_rate $\rightarrow$ **0.5**
# >- n_estimators $\rightarrow$ **80**
# +
#estimator.best_estimator_[2].estimator.learning_rate
# +
#estimator.best_estimator_[2].estimator.n_estimators
# -
# **Linear SVC**: new parameter found by using **Grid Search**
#
# - `C=0.5`
#
# - run time for training the Classifier is **4**min **26**sec
# +
start = time()
def dummy(doc):
return doc
feats = TfidfVectorizer(analyzer='word',
tokenizer=dummy,
preprocessor=dummy,
token_pattern=None,
ngram_range=(1, 3))
classif = OneVsRestClassifier(LinearSVC())
pipeline_lnsv = Pipeline([('vect', feats),
('clf', classif)])
param_dict = {'clf__estimator__C': [0.1,0.5,1.0,2.0,5.0]}
estimator = GridSearchCV(estimator=pipeline_lnsv,
param_grid=param_dict,
n_jobs=-1) #, scoring='roc_auc')
estimator.fit(X_train, y_train)
spent = time() - start
s_min = spent // 60
print('LINEAR SUPPORT VECTOR MACHINE GRID SEARCH - process time: {:.0f} minutes, {:.2f} seconds ({:.2f}s)'\
.format(s_min, spent-(s_min*60), spent))
# +
#classif.get_params()
# +
#estimator.best_estimator_
# -
# Again, if you experience *NotFittedError: Vocabulary not fitted or provided*, see the explanation
# [here](https://stackoverflow.com/questions/60472925/python-scikit-svm-vocabulary-not-fitted-or-provided)
# ## VIII. Choosing my Classifier
#
# ### Classifiers Training & Tuning Summary
#
#
# | Classifier | Model Accuracy | Time to Train | Observation |
# |:--------------------:|:--------------:|:-------------:|:------------------------------:|
# | Binomial Naive Bayes | less than 82% | 0.68s | 22 labels couldn't be trained! |
# | Random Forest | less than 90% | 11m 44s | 3 labels couldn't be trained! |
# | Adaboost | 93.6% | 100.5s | |
# | k-Neighbors | less than 90% | 0.58s | 3 labels couldn't be trained! |
# | Linear SVM | less than 93% | 26.81s | 2 labels couldn't be trained! |
#
# *thanks to Tables Generator for the service [here](https://www.tablesgenerator.com/markdown_tables)*
#
# #### In my view, the ranking is
#
# **First** place, Adaboost. It seemed **reliable** and **fast** for this particular task, and it is a neat machine, really easy to understand
#
# **Second** place, Linear SVM. Some of these labels are really **hard** to train and it was really **fast**
#
# **Third** place, k-Neighbors. It is **fast** and seems as reliable as **Random Forest**, which is **too hard** (too slow) to train
#
# ---
#
# And I will take... **Linear SVM**!
#
# *Why? Just because I cannot **really** believe that some of these labels can be trained!*
#
# The "bad guys" were `tools`, `shops` and `aid_centers`, and the **real** problem involved is:
#
# >- `shops` $\rightarrow$ 120
# >- `tools` $\rightarrow$ 159
# >- `aid_centers` $\rightarrow$ 309
#
# >- there are so **few** labelled rows for these 3 labels that I really cannot believe that any Machine Learning Classifier can really **train** on them!
# >- and what about **Adaboost**? Well, Adaboost is based on **stumps**, and when processing the data it cannot really reach a true **zero**, as the stumps inside it do not allow this kind of thing. So, instead of a **1**, it will give you something like **0.999**, which is worth nothing for practical use
# >- later I can run more **GridSearch** over **Linear SVM**; Adaboost doesn't have as many options for future improvement
#
# So, I will use in my model **Linear SVM**
#
# ---
#
# **Version 1.14** update: filtering for **valid** rows over critical labels
#
# Chosen model changed to **Adaboost**. Why?
#
# >- counting the **valid** labels showed that these labels are in fact **trainable**, but it is not so easy to do
# >- probably they are pushed towards **zero**, as there are many more **false negatives** under these labels
#
# **Future version** - as my label columns are clearly **hierarchical**:
#
# >- I could break my original dataset into 3 **more specific** datasets, such as `infrastructure_related`, `aid_related` and `weather_related`, and include in each one only the rows that are **relevant** (see the sketch after this list)
# >- In this case, the noise caused by **false negatives** will decrease, making it easier for each training run to achieve a better score
#
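# *A minimal sketch of that idea (the child label names below are just hypothetical examples; it assumes the pre-tokenized dataframe `df` with one column per label, as used elsewhere in this notebook):*
# +
#aid_child_labels = ['food', 'water', 'shelter', 'clothing'] #hypothetical subset of aid-related child labels
#df_aid = df[df['aid_related'] == 1] #keep only the rows that are relevant to the parent label
#X_aid = df_aid['tokenized'] #assumes the pre-tokenized text column
#y_aid = df_aid[aid_child_labels]
# -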
# ---
#
# **Version 1.17** update: metrics **changed**, so my choice may change too!
#
# New table for **Classifier evaluation** (10 greatest labels):
#
# | Classifier | Precision | Recall | Worst Metrics |
# |:--------------------:|:---------:|:------:|:-------------:|
# | Binomial Naïve Bayes | 85.9% | 26.4% | 65.6% & 0.1% |
# | Random Forest | 79.8% | 60.1% | 62.2% & 8.4% |
# | Adaboost | 77.7% | 58.7% | 48.4% & 20.4% |
# | k-Neighbors | 60.1% | 32.6% | 28.6% & 1.2% |
# | Linear SVM | 70.8% | 71.1% | 43.0% & 32.5% |
#
# *Random Forest is very **slow** to fit!*
# *k-Neighbors is really **slow** to predict!*
#
# So, now I can see a lot of advantage for choosing **Linear SVM**:
#
# >- it is not **slow** to fit/train
# >- I can later explore better parameters using **GridSearch**
# >- It **doesn't decay** as fast for labels without so many rows to train on
#
# My second choice is **Adaboost**
#
# *If things don't go pretty well, I have a fancy alternative!*
#
# **Version 1.18**: letting the tokenizer take the same word more than once:
#
# | Classifier | Precision | Recall | Worst Metrics | Observations |
# |:--------------------:|:---------:|:------:|:-------------:|:-----------------------------:|
# | Binomial Naïve Bayes | 86.3% | 26.6% | 64.5% & 0.1% | Imperceptible changes |
# | Random Forest | 79.8% | 59.7% | 61.8% & 9.3% | Recall lowered a bit |
# | Adaboost | 77.3% | 55.8% | 46.1% & 15.9% | Recall lowered a bit |
# | k-Neighbors | 60.5% | 32.2% | 29.5% & 1.9% | Parameters slightly increased |
# | Linear SVM | 70.5% | 71.9% | 44.7% & 35.8% | Parameters slightly increased |
#
# *So, I will **keep** my tokenizer allowing repeated tokens for each message, as I chose to use **Linear SVM**. If in the future training becomes too slow (as I get more and more messages in my dataset for training), I can go back to the earlier setting (only unique tokens per message); the tiny sketch below illustrates the difference*
#
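# *A tiny illustration (made-up tokens) of the difference between the two tokenizer settings:*
# +
toy_tokens = ['water', 'need', 'water', 'food', 'water']
unique_per_message = list(dict.fromkeys(toy_tokens)) #earlier setting: each token kept only once per message
repeated_allowed = toy_tokens #current setting: repeated tokens are kept
print(unique_per_message, repeated_allowed)
# -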
# ---
#
# **Version 1.19** update: for **Linear SVM**, I re-inserted two really problematic labels for training: `related` (everything is labelled as **1**) and `missing_child` (everything is labelled as **0**)
#
# *I only made this re-insertion to fulfil the requisites for Udacity project approval, as these labels really degraded the training of an SVM. And SVMs are really powerful Classifiers, so it was a pity to lose it!*
#
# *Now my project's **main** (default) classifier is back to **Adaboost**. The LSVM remains in my function, but to use it you need to pass a special parameter. The documentation on how to use it is in `train_classifier.py`.*
# Verifying the amount of **positive** data for the labels with **few** examples:
#
# - observe that `child_alone` was previously removed from our training dataset
df2 = df[df.columns[5:]]
a = df2.apply(pd.Series.value_counts).loc[1]
a[a < 400]
# +
#mean score of the parameters
#gs_clf.best_score_
#gs_clf.best_params_
# -
# ## IX. Export your model as a pickle file
# 1. Choose your model, with the fine-tuning already done (this can be changed later!)
#
# How to deal with pickle [here](https://www.codegrepper.com/code-examples/python/save+and+load+python+pickle+stackoverflow)
#
# Pickle documentation [here](https://docs.python.org/3/library/pickle.html#module-pickle)
#
# 2. Final considerations about this model:
#
# >- I chose **Adaboost** as my Classifier
# >- The explanation for my choice is in the section **above**
#
# ---
#
# **Version 1.18** update: now my Classifier has been changed to **Linear SVC**. The explanation for my choice rests **above**
# Trying the **Demo** code that I found at **Codegrepper.com**
# +
import pickle
dic = {'hello': 'world'}
with open('filename.pkl', 'wb') as pk_writer: #wb is for write+binary
pickle.dump(dic,
pk_writer,
protocol=pickle.HIGHEST_PROTOCOL)
with open('filename.pkl', 'rb') as pk_reader: #rb is for read+binary
dic_unpk = pickle.load(pk_reader)
print (dic == dic_unpk)
# +
file_name = 'classifier.pkl'
with open (file_name, 'wb') as pk_writer:
pickle.dump(pipeline_lnsv, pk_writer)
with open('classifier.pkl', 'rb') as pk_reader: #rb is for read+binary
pipeline_lnsv = pickle.load(pk_reader)
# -
pipeline_lnsv.fit(X_train, y_train)
pipeline_lnsv.predict(X_test)
# ## X. Use the notebook to complete `train.py`
#
# Use the template file attached in the Resources folder to write a script that runs the steps above to create a database and export a model based on a new dataset specified by the user.
raise Exception('under development')
# +
#import packages
import sys
import math
import numpy as np
import udacourse2 #my library for this project!
import pandas as pd
from time import time
#SQLAlchemy toolkit
from sqlalchemy import create_engine
from sqlalchemy import pool
from sqlalchemy import inspect
#Machine Learning preparing/preprocessing toolkits
from sklearn.model_selection import train_test_split
#Machine Learning Feature Extraction tools
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
#Machine Learning Classifiers
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier
#Machine Learning Classifiers extra tools
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import Pipeline
#pickling tool
import pickle
#only a dummy function, as I pre-tokenize my data
def dummy(doc):
return doc
#########1#########2#########3#########4#########5#########6#########7#########8
def load_data(data_file,
verbose=False):
    '''This function takes a path for an SQLite table and returns processed data
for training a Machine Learning Classifier
Inputs:
- data_file (mandatory) - full path for SQLite table - text string
- verbose (optional) - if you want some verbosity during the running
(default=False)
Outputs:
- X - tokenized text X-training - Pandas Series
- y - y-multilabels 0|1 - Pandas Dataframe'''
if verbose:
print('###load_data function started')
start = time()
#1.read in file
    #importing from SQLite into Pandas - load data from database
engine = create_engine(data_file, poolclass=pool.NullPool) #, echo=True)
#retrieving tables names from my DB
inspector = inspect(engine)
if verbose:
print('existing tables in my SQLite database:', inspector.get_table_names())
connection = engine.connect()
df = pd.read_sql('SELECT * FROM Messages', con=connection)
connection.close()
df.name = 'df'
#2.clean data
    #2.1.Eliminate rows with all-blank labels
if verbose:
print('all labels are blank in {} rows'.format(df[df['if_blank'] == 1].shape[0]))
df = df[df['if_blank'] == 0]
if verbose:
print('remaining rows:', df.shape[0])
#Verifying if removal was complete
if df[df['if_blank'] == 1].shape[0] == 0:
if verbose:
print('removal complete!')
else:
raise Exception('something went wrong with rows removal before training')
#2.2.Premature Tokenization Strategy (pre-tokenizer)
#Pre-Tokenizer + not removing provisory tokenized column
#inserting a tokenized column
try:
df = df.drop('tokenized', axis=1)
except KeyError:
print('OK')
df.insert(1, 'tokenized', np.nan)
#tokenizing over the provisory
df['tokenized'] = df.apply(lambda x: udacourse2.fn_tokenize_fast(x['message']), axis=1)
    #removing NaN over provisory (if it still exists)
df = df[df['tokenized'].notnull()]
empty_tokens = df[df['tokenized'].apply(lambda x: len(x)) == 0].shape[0]
if verbose:
print('found {} rows with no tokens'.format(empty_tokens))
df = df[df['tokenized'].apply(lambda x: len(x)) > 0]
empty_tokens = df[df['tokenized'].apply(lambda x: len(x)) == 0].shape[0]
if verbose:
print('*after removal, found {} rows with no tokens'.format(empty_tokens))
#I will drop the original 'message' column
try:
df = df.drop('message', axis=1)
except KeyError:
if verbose:
print('OK')
if verbose:
print('now I have {} rows to train'.format(df.shape[0]))
#2.3.Database Data Consistency Check/Fix
#correction for aid_related
df = udacourse2.fn_group_check(dataset=df,
subset='aid',
correct=True,
shrink=False,
shorten=False,
verbose=True)
#correction for weather_related
df = udacourse2.fn_group_check(dataset=df,
subset='wtr',
correct=True,
shrink=False,
shorten=False,
verbose=True)
    #correction for infrastructure_related
df = udacourse2.fn_group_check(dataset=df,
subset='ifr',
correct=True,
shrink=False,
shorten=False,
verbose=True)
#correction for related(considering that the earlier were already corrected)
df = udacourse2.fn_group_check(dataset=df,
subset='main',
correct=True,
shrink=False,
shorten=False,
verbose=True)
    #load to database <- not used in this pipeline
#3.Define features and label arrays (break the data)
#3.1.X is the Training Text Column
X = df['tokenized']
#3.2.y is the Classification labels
#I REMOVED "related" column from my labels, as it is impossible to train it!
y = df[df.columns[4:]]
#y = df[df.columns[5:]]
#remove_lst = []
#for column in y.columns:
# col = y[column]
# if (col == 0).all():
# if verbose:
# print('*{} -> only zeroes training column!'.format(column))
# remove_lst.append(column)
# else:
#print('*{} -> column OK'.format(column))
# pass
#if verbose:
# print(remove_lst)
#y = y.drop(remove_lst, axis=1)
spent = time() - start
if y.shape[1] == 36:
if verbose:
print('y dataset has 36 labels')
            print('*dataset broken into X-Training Text Column and Y-Multilabels')
print('process time:{:.0f} seconds'.format(spent))
else:
raise Exception('something went wrong, dataset has {} labels instead of 36'.format(y.shape[1]))
return X, y
#########1#########2#########3#########4#########5#########6#########7#########8
def build_model(verbose=False):
'''This function builds the Classifier Pipeline, for future fitting
Inputs:
- verbose (optional) - if you want some verbosity during the running
(default=False)
Output:
    - model_pipeline for your Classifier (untrained)
'''
if verbose:
print('###build_model function started')
start = time()
#1.text processing and model pipeline
    #(text processing was done at an earlier step, in the Load Data function)
feats = TfidfVectorizer(analyzer='word',
tokenizer=dummy,
preprocessor=dummy,
token_pattern=None,
ngram_range=(1, 3))
classif = OneVsRestClassifier(LinearSVC(C=2.,
random_state=42))
model_pipeline = Pipeline([('vect', feats),
('clf', classif)])
#define parameters for GridSearchCV (parameters already defined)
#create gridsearch object and return as final model pipeline (made at pipeline preparation)
    #obs: for better performance, I pre-tokenized my data. And GridSearch was run on Jupyter,
    #     and the best parameters were adjusted, just to save processing time during code execution.
spent = time() - start
if verbose:
print('*Linear Support Vector Machine pipeline was created')
print('process time:{:.0f} seconds'.format(spent))
return model_pipeline
#########1#########2#########3#########4#########5#########6#########7#########8
def train(X,
y,
model,
verbose=False):
'''This function trains your already created Classifier Pipeline
Inputs:
- X (mandatory) - tokenized data for training - Pandas Series
- y (mandatory) - Multilabels 0|1 - Pandas Dataset
- verbose (optional) - if you want some verbosity during the running
(default=False)
Output:
- trained model'''
if verbose:
print('###train function started')
start = time()
#1.Train test split
#Split makes randomization, so random_state parameter was set
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.25,
random_state=42)
if (X_train.shape[0] + X_test.shape[0]) == X.shape[0]:
if verbose:
            print('data split into train and test seems OK')
else:
raise Exception('something went wrong when splitting the data')
#2.fit the model
model.fit(X_train, y_train)
# output model test results
y_pred = model.predict(X_test)
if verbose:
metrics = udacourse2.fn_scores_report2(y_test,
y_pred,
best_10=True,
verbose=True)
else:
metrics = udacourse2.fn_scores_report2(y_test,
y_pred,
best_10=True,
verbose=False)
for metric in metrics:
if metric < 0.6:
raise Exception('something is wrong, model is predicting poorly')
spent = time() - start
if verbose:
print('*classifier was trained!')
print('process time:{:.0f} seconds'.format(spent))
return model
#########1#########2#########3#########4#########5#########6#########7#########8
def export_model(model,
file_name='classifier.pkl',
verbose=False):
    '''This function writes your already trained Classifier as a Pickle binary
file.
Inputs:
    - model (mandatory) - your already trained Classifier - Python Object
- file_name (optional) - the name of the file to be created (default:
'classifier.pkl')
- verbose (optional) - if you want some verbosity during the running
(default=False)
Output: return True if everything runs OK
'''
if verbose:
print('###export_model function started')
start = time()
#1.Export model as a pickle file
file_name = file_name
#writing the file
with open (file_name, 'wb') as pk_writer:
pickle.dump(model, pk_writer)
#reading the file
#with open('classifier.pkl', 'rb') as pk_reader:
# model = pickle.load(pk_reader)
spent = time() - start
if verbose:
print('*trained Classifier was exported')
print('process time:{:.0f} seconds'.format(spent))
return True
#########1#########2#########3#########4#########5#########6#########7#########8
def run_pipeline(data_file='sqlite:///Messages.db',
verbose=False):
'''This function is a caller: it calls load, build, train and save modules
Inputs:
- data_file (optional) - complete path to the SQLite datafile to be
processed - (default='sqlite:///Messages.db')
- verbose (optional) - if you want some verbosity during the running
(default=False)
Output: return True if everything runs OK
'''
if verbose:
print('###run_pipeline function started')
start = time()
#1.Run ETL pipeline
X, y = load_data(data_file,
verbose=verbose)
#2.Build model pipeline
model = build_model(verbose=verbose)
#3.Train model pipeline
model = train(X,
y,
model,
verbose=verbose)
# save the model
export_model(model,
verbose=verbose)
spent = time() - start
if verbose:
print('process time:{:.0f} seconds'.format(spent))
return True
#########1#########2#########3#########4#########5#########6#########7#########8
def main():
    '''This is the main Machine Learning Pipeline function. It calls the other
    ones, in the correct order.
'''
    data_file = sys.argv[1] # get filename of dataset from the command line
    run_pipeline(data_file=data_file, # use the dataset path that was passed in
                 verbose=True)
#########1#########2#########3#########4#########5#########6#########7#########8
if __name__ == '__main__':
main()
# -
# `P@k` implementation [here](https://medium.com/analytics-vidhya/metrics-for-multi-label-classification-49cc5aeba1c3)
#
# "Given a list of actual classes and predicted classes, precision at k would be defined as the number of correct predictions considering only the top k elements of each class divided by k"
def patk(actual, pred, k):
#we return 0 if k is 0 because
# we can't divide the no of common values by 0
if k == 0:
return 0
#taking only the top k predictions in a class
k_pred = pred[:k]
#taking the set of the actual values
actual_set = set(actual)
#taking the set of the predicted values
pred_set = set(k_pred)
#taking the intersection of the actual set and the pred set
# to find the common values
common_values = actual_set.intersection(pred_set)
return len(common_values)/len(pred[:k])
#defining the values of the actual and the predicted class
y_true = [1 ,2, 0]
y_pred = [1, 1, 0]
if __name__ == "__main__":
print(patk(y_true, y_pred,3))
# `AP@k` implementation [here](https://medium.com/analytics-vidhya/metrics-for-multi-label-classification-49cc5aeba1c3)
#
# "It is defined as the average of all the precision at k for k =1 to k"
import numpy as np
def apatk(actual, pred, k):
    #creating a list for storing the values of precision for each k
    precision_ = []
    for i in range(1, k+1):
        #calculating the precision at different values of k
        # and appending them to the list
        precision_.append(patk(actual, pred, i)) #reuse the patk function defined above
#return 0 if there are no values in the list
if len(precision_) == 0:
return 0
#returning the average of all the precision values
return np.mean(precision_)
#defining the values of the actual and the predicted class
y_true = [[1,2,0,1], [0,4], [3], [1,2]]
y_pred = [[1,1,0,1], [1,4], [2], [1,3]]
if __name__ == "__main__":
for i in range(len(y_true)):
for j in range(1, 4):
print(
f"""
y_true = {y_true[i]}
y_pred = {y_pred[i]}
AP@{j} = {apatk(y_true[i], y_pred[i], k=j)}
"""
)
# `MAP@k` implementation [here](https://medium.com/analytics-vidhya/metrics-for-multi-label-classification-49cc5aeba1c3)
#
# "The average of all the values of `AP@k` over the whole training data is known as `MAP@k`. This helps us give an accurate representation of the accuracy of whole prediction data"
import numpy as np
def mapk(actual, pred, k):
    #creating a list for storing the Average Precision Values
    average_precision = []
    #iterating through the whole data and calculating the AP@k for each
    for i in range(len(actual)):
        average_precision.append(apatk(actual[i], pred[i], k)) #reuse the apatk function defined above
#returning the mean of all the data
return np.mean(average_precision)
#defining the values of the actual and the predicted class
y_true = [[1,2,0,1], [0,4], [3], [1,2]]
y_pred = [[1,1,0,1], [1,4], [2], [1,3]]
if __name__ == "__main__":
print(mapk(y_true, y_pred,3))
# `F1 Samples` implementation [here](https://medium.com/analytics-vidhya/metrics-for-multi-label-classification-49cc5aeba1c3)
#
# "This metric calculates the F1 score for each instance in the data and then calculates the average of the F1 scores"
from sklearn.metrics import f1_score
from sklearn.preprocessing import MultiLabelBinarizer
def f1_sampled(actual, pred):
#converting the multi-label classification to a binary output
mlb = MultiLabelBinarizer()
actual = mlb.fit_transform(actual)
    pred = mlb.transform(pred) #use transform (not fit_transform) so pred is encoded with the same label set as actual
#fitting the data for calculating the f1 score
f1 = f1_score(actual, pred, average = "samples")
return f1
#defining the values of the actual and the predicted class
y_true = [[1,2,0,1], [0,4], [3], [1,2]]
y_pred = [[1,1,0,1], [1,4], [2], [1,3]]
if __name__ == "__main__":
print(f1_sampled(y_true, y_pred))
| ML Pipeline Preparationm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="Fb2i1jZ-nHKa"
# # Course Outline
# * Step 0: Load packages and download the corpora
# * Step 1: Read in the corpora
# * Step 2: Contingency table and keyness formulas
# * Step 3: Compute word frequencies
# * Step 4: Compute keyness
# * Step 5: Find the keywords of the two PTT boards
# * Step 6: Visualization
# + [markdown] id="n6AjKZUCnNge"
# # Step 0: Load packages and download the corpora
# + id="qTtfmO9iWeJc"
import re # we will use regular expressions shortly
import math # used to compute logs
import pandas as pd # used to build tables
import matplotlib # python plotting package
import matplotlib.pyplot as plt # used to draw charts
from matplotlib.font_manager import FontProperties # used to display Chinese fonts
from wordcloud import WordCloud # used to build word clouds
from google.colab import files # used to export the output files
# + colab={"base_uri": "https://localhost:8080/"} id="jZBDHdPGlxRW" outputId="4ea297a2-021f-4195-e62d-13ea92fe8bda"
# used to download files from Google Drive
# Colab already has this command built in
# !pip install gdown
# + colab={"base_uri": "https://localhost:8080/"} id="IxXTq-_Cl1m0" outputId="b4ef0882-40f6-49a5-9b8f-08e3a5d77a99"
# download the corpora (already word-segmented)
# !gdown --id "1q3DAwlRaK9mApM_rtdSlfAvhLRotMAQH" -O "WomenTalk_2020_seg.txt" # 2020 年 WomenTalk 板
# !gdown --id "1PG_b7CBB6QLELEDBiRmAT9q9DlLNxksV" -O "Gossiping_2020_seg.txt" # 2020 年 Gossiping 板
# + colab={"base_uri": "https://localhost:8080/"} id="XS1DRnF4diQZ" outputId="1440a802-21db-4a25-acf1-69b87b41f7c2"
# if you want to try data from different years
# !gdown --id "1mbtnbe_vjVbq87VEZY-z7T6QgZ3gpjJ9" -O "Gossiping_2015_seg.txt" # 2015 年 Gossiping 板
# !gdown --id "1QvmzgrelbcfKWCFra7Yegq7FoWqGVfyL" -O "Gossiping_2010_seg.txt" # 2010 年 Gossiping 板
# !gdown --id "1GJycMF7q7tMPf5j4aM-7DAGfIIDH_w0t" -O "Gossiping_2005_seg.txt" # 2005 年 Gossiping 板
# !gdown --id "1FL3bvOmkeqDrgMBWGfoVtxBqvX9R_ebW" -O "WomenTalk_2015_seg.txt" # 2015 年 WomenTalk 板
# !gdown --id "16-XHG9ceyVVWPZ1NSeCyDMoJn0J84L8e" -O "WomenTalk_2010_seg.txt" # 2010 年 WomenTalk 板
# !gdown --id "1MfxuFa9wFjVkknpeXUY19rZbh7jrp64J" -O "WomenTalk_2005_seg.txt" # 2005 年 WomenTalk 板
# + [markdown] id="VJ_IF0VmnWiE"
# # Step 1: Read in the corpora
# + id="77wF-j1dmSnE"
# we use the 2020 Gossiping board as the target corpus
with open('/content/Gossiping_2020_seg.txt') as f:
tgt_content = f.read().strip()
# and the 2020 WomenTalk board as the reference corpus
with open('/content/WomenTalk_2020_seg.txt') as f:
ref_content = f.read().strip()
# + id="X41NfQTiW9ik"
# the segmented corpora separate words with whitespace, so we now split them into tokens
tgt_corpus = re.split('\s+', tgt_content)
ref_corpus = re.split('\s+', ref_content)
# + [markdown] id="-4c7yvmbnaN8"
# # Step 2: Contingency table and keyness formulas
# + [markdown] id="dKyK59LZ734Y"
# ## 2.1 Contingency Table
# + [markdown] id="kuqsO2VrUp18"
# * This is the contingency table we will use for the keyness calculation
#
# | | word | other word | total |
# |------------|------------|-----------------|----------|
# | tgt_corpus | a | b | (a+b) |
# | ref_corpus | c | d | (c+d) |
# | total | (a+c) | (b+d) | (a+b+c+d)|
#
# * With the contingency table we can derive the observed and expected values needed to compute keyness (a worked toy example follows below)
# O11 = a
# O12 = b
# O21 = c
# O22 = d
# E11 = ((a+b) * (a+c))/(a+b+c+d)
# E12 = ((a+b) * (b+d))/(a+b+c+d)
# E21 = ((c+d) * (a+c))/(a+b+c+d)
# E22 = ((c+d) * (b+d))/(a+b+c+d)
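# * A small worked example (with made-up counts, just for illustration) of filling in the table and computing the expected values:
# +
# toy counts: the target corpus contains the word 30 times out of 10,000 tokens,
# the reference corpus contains it 10 times out of 20,000 tokens
a, b = 30, 9970
c, d = 10, 19990
total = a + b + c + d
O11, O12, O21, O22 = a, b, c, d
E11 = (a + b) * (a + c) / total
E12 = (a + b) * (b + d) / total
E21 = (c + d) * (a + c) / total
E22 = (c + d) * (b + d) / total
print(E11, E12, E21, E22)
# -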
# + [markdown] id="ARTjgrOs8B4W"
# ## 2.2 Keyness formulas
# + [markdown] id="g-g4LYdM8CDq"
# * chi-square
# $$\chi^2 = \sum_{i=1}^n \frac {(O_i - E_i)^2}{E_i}$$
#
# * log-likelihood
# $$G^2 = 2 \sum_{i=1}^n O_i \times ln \frac{O_i}{E_i}$$
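# * Using the toy observed and expected values from the cell above, the two statistics can be computed like this (the same formulas are used inside `get_keyness` in Step 4):
# +
chi2_toy = sum((O - E) ** 2 / E for O, E in [(O11, E11), (O12, E12), (O21, E21), (O22, E22)])
G2_toy = 2 * sum(O * math.log(O / E) for O, E in [(O11, E11), (O12, E12), (O21, E21), (O22, E22)])
print(chi2_toy, G2_toy)
# -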
# + [markdown] id="wfKrVnY8nhRj"
# # Step 3: Compute word frequencies
# + [markdown] id="uhHKOViOnjmw"
# First, let's define a function. With this function we can obtain the values we will need for the later calculations.
# + id="EtOrDKg4W_1-"
# make frequency list
def count_freq(corpus):
word_freq = {}
other_word_freq = {}
corpus_size = len(corpus)
# count word_freq
for word in corpus:
if word not in word_freq:
word_freq[word] = 1
else:
word_freq[word] += 1
# count other_word_freq
for key, value in word_freq.items():
other_word_freq[key] = corpus_size - value
return word_freq, other_word_freq, corpus_size
# + [markdown] id="-tjyoOZT8r5Z"
# ## 3.1 Exercise
# + colab={"base_uri": "https://localhost:8080/"} id="9MlquZ5Gf17u" outputId="8df489ab-a9ad-404b-e6de-bf7449c610f6"
## TODO: find the corpus size of the target corpus
count_freq(tgt_corpus)[2]
# + colab={"base_uri": "https://localhost:8080/"} id="8Apl-DTGgZYB" outputId="d806bb20-c8df-411a-876e-b03a6f5873d8"
## TODO: find how many times the word "減肥" (lose weight) appears in the target corpus
count_freq(tgt_corpus)[0].get('減肥', 0)
# + colab={"base_uri": "https://localhost:8080/"} id="ETvyaygDlefL" outputId="f74feaf5-0892-4e7f-9099-13499aa37f1f"
## TODO: find how many times the word "河蟹" (river crab) appears in the reference corpus
count_freq(ref_corpus)[0].get('河蟹', 0)
# + [markdown] id="1oQWy5-b81Yq"
# With this function we can fill in the value of every cell of the contingency table.
# + id="IUwUQqEa93UZ"
tgt_freq = count_freq(tgt_corpus)[0]
tgt_other_freq = count_freq(tgt_corpus)[1]
tgt_size = count_freq(tgt_corpus)[2]
ref_freq = count_freq(ref_corpus)[0]
ref_other_freq = count_freq(ref_corpus)[1]
ref_size = count_freq(ref_corpus)[2]
# + [markdown] id="al0uDicWnpVG"
# # Step 4: Compute keyness
# + [markdown] id="kmbPPGP6nru9"
# Now we define a second function to help us compute keyness.
# + id="lVc34MJMXVVc"
tgt_corpus_words = set(tgt_corpus)
ref_corpus_words = set(ref_corpus)
def get_keyness(word):
  # handle words that appear in neither of the two corpora
if word not in tgt_corpus_words and word not in ref_corpus_words:
print(f"{word} not found in both corpora")
return {}
  # compute the observed values
  O11 = tgt_freq.get(word, 0.000001) # to avoid errors caused by zero counts, replace 0 with a number close to 0
O12 = tgt_other_freq.get(word, tgt_size)
O21 = ref_freq.get(word, 0.000001)
O22 = ref_other_freq.get(word, ref_size)
word_total = O11 + O21
otherword_total = O12 + O22
total_size = tgt_size + ref_size
  ## compute the expected values
E11 = word_total * tgt_size / total_size
E12 = otherword_total * tgt_size / total_size
E21 = word_total * ref_size / total_size
E22 = otherword_total * ref_size / total_size
  ## compute the chi-square value
chi2 = (O11 - E11)**2/E11 + (O12 - E12)**2/E12 + (O21 - E21)**2/E21 + (O22 - E22)**2/E22
  ## compute the log-likelihood value
G2 = 2*(O11*math.log(O11/E11) + O21*math.log(O21/E21) + O12*math.log(O12/E12) + O22*math.log(O22/E22))
  # record which corpus the word prefers to appear in
preference = 'tgt_corpus' if O11>E11 else 'ref_corpus'
result = {'word': word, 'pref': preference, 'chi2': chi2, 'G2': G2}
return result
# + [markdown] id="BwjnLaMY9aCx"
# ## 4.1 Exercise
# + colab={"base_uri": "https://localhost:8080/"} id="p0BXgWQU-L1j" outputId="d627b4d6-f30f-49f3-9deb-3bdd1d27a152"
## TODO: find the keyness of "台灣" (Taiwan), computed with log-likelihood
get_keyness('台灣')['G2']
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="b70WbpJ9-Wr5" outputId="20607a79-c0c0-46ff-e7ce-3646a2d60b05"
## TODO: find which corpus the word "喜歡" (like) prefers to appear in
get_keyness('喜歡')['pref']
# + colab={"base_uri": "https://localhost:8080/"} id="PaqDufSW-fo3" outputId="ed60afef-86c9-4a6d-dd6d-a69a04ddb41a"
## TODO: search for "最好是啦" (yeah, right); what happens?
get_keyness('最好是啦')
# + [markdown] id="IVYPp7DSn4e-"
# Next, we feed both corpora in and compute the keyness of every word.
# + id="X5kZ3IU9XYDz"
all_words = set(tgt_corpus + ref_corpus)
keyness = []
for word in all_words:
keyness.append(get_keyness(word))
# + [markdown] id="8MwEwvEXn8pC"
# Now we know the keyness of every word in the two corpora!
#
# + [markdown] id="U4xr5OdPn-_9"
# # Step 5: Find the keywords of the two PTT boards
# + [markdown] id="mi8W0OKsoBY7"
# To get the top-ten keywords, we define one last function.
# + id="9hnkel8-HidA"
def get_topn(data=None, pref='tgt_corpus', sort_by='G2', n=10):
out = []
for w in data:
if w['pref'] == pref:
out.append(w)
  return sorted(out, key=lambda x:x[sort_by], reverse=True)[:n] # sort from largest to smallest
# + [markdown] id="hsKBAV6uoJCD"
# By default this function returns the top-ten keywords of the target corpus. The default sorting measure is the log-likelihood value.
# + [markdown] id="qf_c93b090Th"
# ## 5.1 Exercise
# + colab={"base_uri": "https://localhost:8080/"} id="k37O79Sdpi1u" outputId="4a921623-b2e9-453c-c4f4-a502351fa47b"
## TODO: find the top-ten keywords of the Gossiping board, sorted by log-likelihood
get_topn(keyness)
# + colab={"base_uri": "https://localhost:8080/"} id="pVZLI8kiqfYE" outputId="336f60a6-28a3-4e18-d0dd-5dc152daeba7"
## TODO: find the top-ten keywords of the WomenTalk board, sorted by log-likelihood
get_topn(keyness, pref = 'ref_corpus')
# + colab={"base_uri": "https://localhost:8080/"} id="6oRSWC6Bqohv" outputId="4a528abe-2110-4c9c-a82f-706fdf3aa923"
## TODO: find the top-five keywords of the Gossiping board, sorted by chi-square
get_topn(keyness, sort_by = 'chi2', n = 5)
# + colab={"base_uri": "https://localhost:8080/"} id="pEamdrkkq0AH" outputId="c9ea5634-224c-4600-db0e-97b3a61088f8"
## TODO: find the top-five keywords of the WomenTalk board, sorted by chi-square
get_topn(keyness, pref = 'ref_corpus', sort_by = 'chi2', n = 5)
# + [markdown] id="qzGO5KAYPLWW"
# # Step 6. Visualization
# + colab={"base_uri": "https://localhost:8080/"} id="t9iDyp-83owV" outputId="c7e2ffaa-68fb-40fe-fb23-0bb6f472a5bd"
# make Colab display Traditional Chinese in the plots that follow
# download the Taipei Sans TC Beta font
# !wget -O taipei_sans_tc_beta.ttf https://drive.google.com/uc?id=1eGAsTN1HBpJAkeVM57_C7ccp7hbgSz3_&export=download
# register the font
matplotlib.font_manager.fontManager.addfont('taipei_sans_tc_beta.ttf')
# set the font-family to Taipei Sans TC Beta
matplotlib.rc('font', family = 'Taipei Sans TC Beta')
# + [markdown] id="Awd8gCfx3p6h"
# ## 6.1 Table
# + [markdown] id="x4Wu6y27PusM"
# Using the top-ten keywords of the Gossiping board (sorted by log-likelihood) as an example, convert the `list` result into a `DataFrame`.
#
# + colab={"base_uri": "https://localhost:8080/", "height": 349} id="4dk4e2tkrnyo" outputId="0f89ad4f-cb07-4787-94a5-ccd4ae989ff7"
tgt_G2_top10 = get_topn(keyness)
tgt_G2_top10_df = pd.DataFrame(tgt_G2_top10)
tgt_G2_top10_df
# + id="VuMmReBOesZR"
# convert the DataFrame into a table figure and export it to a file
from pandas.plotting import table
figure, axes = plt.subplots(figsize=(15, 5)) # set the figure size
axes.xaxis.set_visible(False) # hide the x axis
axes.yaxis.set_visible(False) # hide the y axis
axes.set_frame_on(False) # hide the frame
table = table(axes, tgt_G2_top10_df, # build the table figure
              loc='upper right',
              colWidths=[0.18]*len(tgt_G2_top10_df.columns))
table.auto_set_font_size(False) # switch to manual font sizing
table.set_fontsize(12) # set the font size
table.scale(1.2, 1.2) # set the table size
# save and export the file
plt.savefig('tgt_G2_top10_df.png')
#files.download("tgt_G2_top10_df.png")
# + [markdown] id="DyS9O30kC_8X"
# ### 6.1.1 Exercise
# + colab={"base_uri": "https://localhost:8080/", "height": 349} id="W6DMnp-DPy8q" outputId="2ac28240-df06-46e6-ab40-dfe19d3020e6"
## TODO: convert the top-ten keywords of the WomenTalk board (sorted by log-likelihood) into a DataFrame
ref_G2_top10 = get_topn(keyness, pref = 'ref_corpus')
ref_G2_top10_df = pd.DataFrame(ref_G2_top10)
ref_G2_top10_df
# + [markdown] id="8rg8JpLernhT"
# ### [Discussion questions]
# * How do the keywords of the Gossiping board differ from those of the WomenTalk board? What factors might cause the difference in wording between the two boards?
# * Are the keyword results computed with chi-square and log-likelihood similar?
# + [markdown] id="jKowz3BZd0yi"
# ## 6.2 Bar chart
# + [markdown] id="z_s58nxpR0Qn"
# Let's go one step further and present the data as a bar chart.
# + colab={"base_uri": "https://localhost:8080/", "height": 321} id="GdLrfcIvjxX_" outputId="f387cdf2-7494-4019-ae3b-1c622c31caf3"
# Example: the top-ten keywords of the Gossiping board (sorted by log-likelihood)
tgt_G2_top10_df.plot.bar(x = 'word', y = 'G2')
plt.title('八卦板前十大關鍵詞', fontsize=24) # chart title ("Gossiping board top-10 keywords")
plt.xlabel('關鍵詞', fontsize=18) # x-axis label ("keyword")
plt.ylabel('G2', fontsize=18) # y-axis label
# save and export the file
plt.savefig('tgt_G2_top10_bar.png')
plt.show()
#files.download("tgt_G2_top10_bar.png")
# + [markdown] id="zB49JF4Yevbi"
# ### 6.2.1 Exercise
# + colab={"base_uri": "https://localhost:8080/", "height": 321} id="dHRhzne8VXf8" outputId="fe0ac00c-24f3-44d2-a750-26cf431a634d"
# Draw the bar chart of the top-ten keywords of the WomenTalk board (sorted by log-likelihood)
ref_G2_top10_df.plot.bar(x = 'word', y = 'G2')
plt.title('女板前十大關鍵詞', fontsize=24) # chart title ("WomenTalk board top-10 keywords")
plt.xlabel('關鍵詞', fontsize=18) # x-axis label ("keyword")
plt.ylabel('G2', fontsize=18) # y-axis label
# save and export the file
plt.savefig('ref_G2_top10_bar.png')
plt.show()
#files.download("ref_G2_top10_bar.png")
# + [markdown] id="absXoFAJe-Tb"
# ## 6.3 Word cloud
# + [markdown] id="uhjQbFcd2CPK"
# Besides bar charts, we can also draw the data as a word cloud.
# + colab={"base_uri": "https://localhost:8080/", "height": 198} id="EqpJ53Psl38b" outputId="f34713d6-e44c-4c8c-d534-39c6c4e98ebc"
# first convert the Gossiping board keyness results into a dictionary
tgt_dict = {i['word']: i['G2'] for i in keyness if i['pref'] == 'tgt_corpus'}
# build the word cloud of Gossiping board keywords
wordcloud = WordCloud(font_path = 'taipei_sans_tc_beta.ttf')
wordcloud.generate_from_frequencies(frequencies = tgt_dict)
plt.figure()
plt.imshow(wordcloud)
plt.axis('off')
# save and export the file
plt.savefig('tgt_G2_wordcloud.png')
plt.show()
#files.download("tgt_G2_wordcloud.png")
# + [markdown] id="iOl5khJwfMA6"
# ### 6.3.1 Exercise
# + colab={"base_uri": "https://localhost:8080/", "height": 198} id="5TjR9zDX0_xf" outputId="9a2580a3-5ab7-4ddb-8111-f6d913330938"
# first convert the WomenTalk board keyness results into a dictionary
ref_dict = {i['word']: i['G2'] for i in keyness if i['pref'] == 'ref_corpus'}
# build the word cloud of WomenTalk board keywords
wordcloud = WordCloud(font_path = 'taipei_sans_tc_beta.ttf')
wordcloud.generate_from_frequencies(frequencies = ref_dict)
plt.figure()
plt.imshow(wordcloud)
plt.axis('off')
# save and export the file
plt.savefig('ref_G2_wordcloud.png')
plt.show()
#files.download("ref_G2_wordcloud.png")
| hocor2020/notebook/session-5.2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Goal
# The main purpose of this notebook is to develope the code to read in phenotypes in desired format.
# At the end, I should wrap it up as the function.
import hail as hl
hl.init()
# variant qc filter first
variant_qc_all = hl.read_table('/vol/bmd/yanyul/UKB/variant_qc/imp_all.ht')
# variant_qc_all.count()
variant_qc_all = variant_qc_all.filter(variant_qc_all.variant_qc.AF[0] > 0.001)
variant_qc_all = variant_qc_all.filter(variant_qc_all.variant_qc.AF[1] > 0.001)
variant_qc_all = variant_qc_all.filter(variant_qc_all.variant_qc.p_value_hwe > 1e-10)
# variant_qc_all.count()
# just to load chr22 for testing purpose
mt = hl.import_bgen(
'/vol/bmd/data/ukbiobank/genotypes/v3/ukb_imp_chr22_v3.bgen',
entry_fields = ['dosage'],
index_file_map = {'/vol/bmd/data/ukbiobank/genotypes/v3/ukb_imp_chr22_v3.bgen' : '/vol/bmd/yanyul/UKB/bgen_idx/ukb_imp_chr22_v3.bgen.idx2'},
sample_file = '/vol/bmd/data/ukbiobank/genotypes/v3/ukb19526_imp_chr1_v3_s487395.sample',
variants = variant_qc_all
)
mt.count()
# +
# mt = mt.annotate_cols(eid = mt.s.replace("\_\d+", ""))
# mt = mt.key_cols_by('eid')
# mt = mt.repartition(100)
# -
mt.s.show()
import pandas as pd
import numpy as np
covar_names = 'age_recruitment,sex,pc1,pc2'
pheno_names = 'ht,mcv,mch'
indiv_id = 'eid'
int_names = 'age_recruitment,sex'
str_names = 'eid'
# +
import sys
sys.path.insert(0, '../code/')
from importlib import reload
import my_hail_helper as myhelper
myhelper = reload(myhelper)
# -
covar, trait = myhelper.read_and_split_phenotype_csv(
'../output/query_phenotypes_cleaned_up.csv',
pheno_names = pheno_names,
covar_names = covar_names,
indiv_id = indiv_id,
int_names = int_names,
str_names = str_names
)
covar = covar.rename(columns = {'eid': 's'})
trait = trait.rename(columns = {'eid': 's'})
# #### Now that we've loaded in the full covariate and trait tables
# Here we start to loop over all subsets and build the "list of lists" for traits.
subset_dic = {}
nsubset = 2
for subset_idx in range(1, nsubset + 1):
subset_indiv_list = myhelper.read_indiv_list(f'../output/data_split/British-training-{subset_idx}.txt')
sub_trait = myhelper.subset_by_col(trait, 's', subset_indiv_list)
sub_trait = myhelper.df_to_ht(sub_trait, 's') # hl.Table.from_pandas(sub_trait, key = 's')
# sub_trait = sub_trait.repartition(40)
subset_dic[f'subset_{subset_idx}'] = sub_trait
covar = myhelper.df_to_ht(covar, 's')
mt.describe()
annot_expr_ = {
k : subset_dic[k][mt.s] for k in list(subset_dic.keys())
}
mt = mt.annotate_cols(**annot_expr_)
mt = mt.annotate_cols(covariates = covar[mt.s])
# prepare trait and covar into list or list of lists
subset_list = [ [ mt[f'subset_{i}'][j] for j in mt[f'subset_{i}'] ] for i in range(1, nsubset + 1) ]
subset_names = [ [ f'subset_{i}_x_{j}' for j in mt[f'subset_{i}'] ] for i in range(1, nsubset + 1) ]
covar_list = [ mt.covariates[i] for i in list(mt.covariates.keys()) ]
gwas_out = hl.linear_regression_rows(
y = subset_list,
x = mt.dosage,
covariates = [1] + covar_list,
pass_through = ['varid', 'rsid']
)
gwas_out = gwas_out.annotate_globals(phenotypes = subset_names)
gwas_out.describe()
gwas_out = gwas_out.annotate(
variant = hl.delimit(
hl.array([
gwas_out['locus'].contig,
hl.str(gwas_out['locus'].position),
gwas_out['alleles'][0],
gwas_out['alleles'][1]
]),
delimiter = ':')
)
gwas_out = gwas_out.key_by('variant')
## Hey, this repartition is important
## in the sense that it avoids the unnecessary and repeated sorting caused by key_by
gwas_out = gwas_out.repartition(40)
gwas_out = gwas_out.cache()
phenotypes = gwas_out['phenotypes'].collect()[0]
for i, subset in enumerate(phenotypes):
for j, trait in enumerate(subset):
ht_export = myhelper.gwas_formater_from_neale_lab(gwas_out, i, j)
ht_export.export(f'test_output_with_variant_qc/gwas_{trait}.tsv')
gwas_out.show()
| notebook/prepare_phenotype_and_run_gwas-with_variant_qc.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #### Note:
# - I removed adni from labels, because it created noise in sentence labels.
# +
import json
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import glob
import os
import re
import seaborn as sns
from tqdm import tqdm
import nltk
import random
from nltk.tokenize import word_tokenize,sent_tokenize
import pickle
train_example_names = [fn.split('.')[0] for fn in os.listdir('data/train')]
test_example_names = [fn.split('.')[0] for fn in os.listdir('data/test')]
metadata = pd.read_csv('data/train.csv')
docIdx = train_example_names.copy()
connection_tokens = {'s', 'of', 'and', 'in', 'on', 'for', 'from', 'the', 'act', 'coast', 'future', 'system', 'per'}
# -
# ## Dataset Name Selection
# +
def text_cleaning(text):
text = re.sub('[^A-Za-z]+', ' ', str(text)).strip() # remove unnecessary literals
# remove extra spaces
text = re.sub("\s+"," ", text)
return text.lower().strip()
def is_name_ok(text):
if len([c for c in text if c.isalnum()]) < 4:
return False
tokens = [t for t in text.split(' ') if len(t) > 3]
tokens = [t for t in tokens if not t in connection_tokens]
if len(tokens) < 3:
return False
return True
with open('data/all_preds_selected.csv', 'r') as f:
selected_pred_labels = f.readlines()
selected_pred_labels = [l.strip() for l in selected_pred_labels]
existing_labels = [text_cleaning(x) for x in metadata['dataset_label']] +\
[text_cleaning(x) for x in metadata['dataset_title']] +\
[text_cleaning(x) for x in metadata['cleaned_label']] +\
[text_cleaning(x) for x in selected_pred_labels]
"""to_remove = [
'frequently asked questions', 'total maximum daily load tmd', 'health care facilities',
'traumatic brain injury', 'north pacific high', 'droplet number concentration', 'great slave lake',
'census block groups'
]"""
"""df = pd.read_csv(r'C:\projects\personal\kaggle\kaggle_coleridge_initiative\string_search\data\gov_data.csv')
print(len(df))
df['title'] = df.title.apply(text_cleaning)
titles = list(df.title.unique())
titles = [t for t in titles if not t in to_remove]
df = pd.DataFrame({'title': titles})
df = df.loc[df.title.apply(is_name_ok)]
df = pd.concat([df, pd.DataFrame({'title': existing_labels})], ignore_index= True).reset_index(drop = True)
titles = list(df.title.unique())
df = pd.DataFrame({'title': titles})
df['title'] = df.title.apply(text_cleaning)"""
# Sort labels by length in descending order (longest first)
#existing_labels = sorted(list(df.title.values), key = len, reverse = True)
existing_labels = list(set(existing_labels))
existing_labels = sorted(existing_labels, key = len, reverse = True)
existing_labels = [l for l in existing_labels if len(l.split(' ')) < 15]
#del df
#existing_labels.remove('adni')
print(len(existing_labels))
# -
existing_labels[:5]
existing_labels[-5:]
# ## Create dataframe for tokens and targets
# +
def load_train_example_by_name(name):
doc_path = os.path.join('data/train', name + '.json')
with open(doc_path) as f:
data = json.load(f)
return data
def load_test_example_by_name(name):
doc_path = os.path.join('data/test', name + '.json')
with open(doc_path) as f:
data = json.load(f)
return data
# -
# ## Make sentences
# +
def text_cleaning_upper(text):
text = re.sub('[^A-Za-z]+', ' ', str(text)).strip() # remove unnecessary literals
# remove extra spaces
text = re.sub("\s+"," ", text)
return text.strip()
def has_connected_uppercase(tokens):
if len(tokens) < 5:
return False
group_len = 0
n_long_tokens = 0
for token in tokens:
token_lower = token.lower()
if token[0].isupper():
if token_lower not in connection_tokens:
if len(token) > 2:
n_long_tokens += 1
group_len += 1
if group_len > 2 and n_long_tokens > 0:
return True
else:
if token_lower not in connection_tokens:
group_len = 0
n_long_tokens = 0
return False
def sent_has_acronym(tokens):
# Acronym check
for token in tokens:
if len(token) > 3 and token.isupper():
return True
return False
def sent_is_candidate(clean_sentence):
tokens = clean_sentence.split(' ')
if sent_has_acronym(tokens):
return True
else:
return has_connected_uppercase(tokens)
# +
pos_sentences = []
neg_sentences = []
docs_no_pos = []
total_sentences = 0
label_use_counts = {l: 0 for l in existing_labels}
def process_doc(doc_id):
""" Accept sentences with acronyms or uppercase words in succession as candidates.
From those candidates, positives are the ones that contain a label.
"""
global total_sentences
doc_json = load_train_example_by_name(doc_id)
doc_text = ' '.join([sec['text'] for sec in doc_json])
doc_has_pos = False
# Tokenize sentencewise
sentences = sent_tokenize(doc_text)
total_sentences += len(sentences)
for sentence in sentences:
clean_sentence = text_cleaning_upper(sentence)
is_candidate = sent_is_candidate(clean_sentence)
has_label = False
if is_candidate:
clean_sentence_lower = clean_sentence.lower()
for clean_label in existing_labels:
if re.search(r'\b{}\b'.format(clean_label), clean_sentence_lower):
has_label = True
label_use_counts[clean_label] = label_use_counts[clean_label] + 1
break
# Store sentence in list if candidate
# Non-candidate sentences are discarded
if has_label:
pos_sentences.append(sentence)
doc_has_pos = True
elif is_candidate:
neg_sentences.append(sentence)
if not doc_has_pos:
docs_no_pos.append(doc_id)
#process_doc('0026563b-d5b3-417d-bd25-7656b97a044f')
# -
# ## Generate and Save Sentences
# +
import pickle
assert len(docIdx) > 0
pos_sentences = []
neg_sentences = []
docs_no_pos = []
total_sentences = 0
pbar = tqdm(docIdx)
for doc_id in pbar:
process_doc(doc_id)
pbar.set_description(\
f'pos_size: {len(pos_sentences)}, neg_size: {len(neg_sentences)}, no pos label doc: {len(docs_no_pos)}, n_sentences: {total_sentences}')
with open(f'data/selected_sentences/pos.pkl', 'wb') as f:
pickle.dump(pos_sentences, f)
with open(f'data/selected_sentences/neg.pkl', 'wb') as f:
pickle.dump(neg_sentences, f)
print(f'pos size: {len(pos_sentences)}')
print(f'neg size: {len(neg_sentences)}')
# -
pd.Series(label_use_counts).sort_values()
| exp08.1-select_pos_neg_candidates.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Bar Graph for Multiple Genes given from RNA Sequencing
# +
import numpy as np
import matplotlib.pyplot as plt
import csv
aortaData = []
aortaDataNumbers = []
cerebellumData = []
cerebellumDataNumbers= []
arteryData = []
arteryDataNumbers = []
with open ("genomicdata.csv") as csvfile:
readCSV = csv.reader(csvfile, delimiter= '\t') #gives access to the CSV file
for col in readCSV:
if col[7] == '':
aortaData.append('0')
else:
aortaData.append(col[7])
if col[13] == '': #cerebellum
cerebellumData.append('0')
else:
cerebellumData.append(col[13])
if col[15] == '': #coronary artery
arteryData.append('0')
else:
arteryData.append(col[15])
aortaDataNumbers = list(map(float, aortaData[19:24]))
cerebellumDataNumbers = list(map(float, cerebellumData[19:24]))
arteryDataNumbers = list(map(float, arteryData[19:24]))
ind = np.arange(len(aortaDataNumbers)) # the x locations for the groups
width = 0.35 # the width of the bars
fig, ax = plt.subplots()
rects1 = ax.bar(ind, aortaDataNumbers, width, color='r') #creates rectangles
rects2 = ax.bar(ind+width, cerebellumDataNumbers, width, color='g') #creates rectangles
rects3 = ax.bar(ind+width*2, arteryDataNumbers, width, color='b') #creates rectangles
ax.set_ylabel('Expression Vector') #Y axis label
ax.set_title('Gene Expressed') #chart title
ax.set_xticks(ind + width) #the distance between each bar
# ax.legend((rects1[0]), ('Expression Vector of Each Gene Expressed in the Aorta')) #Creates a legend so people know
#what they are looking at
def autolabel(rects): #creates a different label for each bar to show the height
for rect in rects:
height = rect.get_height() #height of each bar
ax.text(rect.get_x() + rect.get_width()/2., 1.05*height,
'%d' % int(height),
ha='center', va='bottom') #gives the value
plt.show()
# -
# In this code, I combined what I worked on last week (creating a gene expression bar chart) with the task for this week (creating a gene expression bar chart for multiple genes). The three tissues I chose were the cerebellum, the coronary artery, and the aorta. I originally decided to do a bar chart with two tissues: the cerebellum and the coronary artery. I selected a random set of genes (19-24) to see how the tissues differ. After graphing these genes for the cerebellum and the coronary artery, I noticed that for all the genes except the last one (ENSG00000002549 LAP3) the expression vectors were relatively similar. For ENSG00000002549 LAP3, however, the coronary artery had significantly higher expression than the cerebellum. Because of this, I decided to add the gene expression vectors for the same set of genes for the aorta, since the aorta and the coronary artery are both part of the cardiovascular system. As I predicted, the aorta also has a significantly higher expression vector for this gene.
# +
import numpy as np
import matplotlib.pyplot as plt
import csv
aortaData = []
aortaDataNumbers = []
cerebellumData = []
cerebellumDataNumbers= []
arteryData = []
arteryDataNumbers = []
with open ("genomicdata.csv") as csvfile:
readCSV = csv.reader(csvfile, delimiter= '\t') #gives access to the CSV file
for col in readCSV:
if col[7] == '':
aortaData.append('0')
else:
aortaData.append(col[7])
if col[13] == '': #cerebellum
cerebellumData.append('0')
else:
cerebellumData.append(col[13])
if col[15] == '': #coronary artery
arteryData.append('0')
else:
arteryData.append(col[15])
aortaDataNumbers = list(map(float, aortaData[73:80]))
cerebellumDataNumbers = list(map(float, cerebellumData[73:80]))
arteryDataNumbers = list(map(float, arteryData[73:80]))
ind = np.arange(len(aortaDataNumbers)) # the x locations for the groups
width = 0.35 # the width of the bars
fig, ax = plt.subplots()
rects1 = ax.bar(ind, aortaDataNumbers, width, color='r') #creates rectangles
rects2 = ax.bar(ind+width, cerebellumDataNumbers, width, color='g') #creates rectangles
rects3 = ax.bar(ind+width*2, arteryDataNumbers, width, color='b') #creates rectangles
ax.set_ylabel('Expression Vector') #Y axis label
ax.set_title('Gene Expressed') #chart title
ax.set_xticks(ind + width) #the distance between each bar
def autolabel(rects): #creates a different label for each bar to show the height
for rect in rects:
height = rect.get_height() #height of each bar
ax.text(rect.get_x() + rect.get_width()/2., 1.05*height,
'%d' % int(height),
ha='center', va='bottom') #gives the value
plt.show()
# -
# In this graph, as well as the upcoming ones, I chose a gene for which the cerebellum had a significantly high expression. This gene, number 75 in the array, proved my previous prediction wrong, since the cerebellum, the coronary artery, and the aorta all showed significantly higher expression for this gene.
# +
import numpy as np
import matplotlib.pyplot as plt
import csv
aortaData = []
aortaDataNumbers = []
cerebellumData = []
cerebellumDataNumbers= []
arteryData = []
arteryDataNumbers = []
with open ("genomicdata.csv") as csvfile:
readCSV = csv.reader(csvfile, delimiter= '\t') #gives access to the CSV file
for col in readCSV:
if col[7] == '':
aortaData.append('0')
else:
aortaData.append(col[7])
if col[13] == '': #cerebellum
cerebellumData.append('0')
else:
cerebellumData.append(col[13])
if col[15] == '': #coronary artery
arteryData.append('0')
else:
arteryData.append(col[15])
aortaDataNumbers = list(map(float, aortaData[722:727]))
cerebellumDataNumbers = list(map(float, cerebellumData[722:727]))
arteryDataNumbers = list(map(float, arteryData[722:727]))
ind = np.arange(len(aortaDataNumbers)) # the x locations for the groups
width = 0.35 # the width of the bars
fig, ax = plt.subplots()
rects1 = ax.bar(ind, aortaDataNumbers, width, color='r') #creates rectangles for aorta
rects2 = ax.bar(ind+width, cerebellumDataNumbers, width, color='g') #creates rectangles for cerebellum
rects3 = ax.bar(ind+width*2, arteryDataNumbers, width, color='b') #creates rectangles for coronary artery
ax.set_ylabel('Expression Vector') #Y axis label
ax.set_title('Gene Expressed') #chart title
ax.set_xticks(ind + width) #the distance between each bar
def autolabel(rects): #creates a different label for each bar to show the height
for rect in rects:
height = rect.get_height() #height of each bar
ax.text(rect.get_x() + rect.get_width()/2., 1.05*height,
'%d' % int(height),
ha='center', va='bottom') #gives the value
plt.show()
# -
# This graph shows two different things.
#
# The first is what I was expecting in the previous graph. For the gene in spot 722 in the array, the expression vector for the cerebellum is high, whereas there is no expression of the gene in either the coronary artery or the aorta.
#
# The second is something that I thought of after the results given in the previous graph. For the gene in spot 725 in the array, the gene expression vector for the aorta is significantly higher than the gene expression vector for the coronary artery.
| Bar Graph for Multiple Genes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: blads_recommendation_rrs dsvm
# language: python
# name: blads_recommendation_rrs_dsvm
# ---
# +
# Use the Azure Machine Learning data source package
from azureml.dataprep import datasource
# Use the Azure Machine Learning data collector to log various metrics
from azureml.logging import get_azureml_logger
logger = get_azureml_logger()
# -
# Use Azure Machine Learning history magic to control history collection
# History is off by default, options are "on", "off", or "show"
# # %azureml history on
# +
# This call will load the referenced data source and return a DataFrame.
# If run in a PySpark environment, this call returns a
# Spark DataFrame. If not, it returns a Pandas DataFrame.
df = datasource.load_datasource('ratings.dsource')
# Remove this line and add code that uses the DataFrame
df.head(10)
# +
import pyspark
from pyspark.ml.tuning import *
from pyspark.sql.types import *
# +
from pyspark.ml.recommendation import ALS
als = ALS() \
.setUserCol("userId") \
.setRatingCol("rating") \
    .setItemCol("movieId")
alsModel = als.fit(df)
# -
alsModel.save("./outputs/model")
# !ls outputs/model
from pyspark.ml.recommendation import ALSModel
newModel = ALSModel.load("./outputs/model")
score = newModel.transform(df)
score.take(10)
userRecs = alsModel.recommendForAllUsers(10)
# !ls ./outputs/
# +
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
# -
reloadUserRecs = spark.read.parquet("./outputs/userrecs.parquet")
reloadUserRecs.toPandas().loc[reloadUserRecs.toPandas()['userId']==55]
# +
cSchema = StructType([StructField("userId", IntegerType()),
StructField("itemID", IntegerType()),
StructField("rating", IntegerType()),
StructField("notTime", IntegerType())])
ratings = spark.createDataFrame([
(0, 1, 4, 4),
(0, 3, 1, 1),
(0, 4, 5, 5),
(0, 5, 3, 3),
(0, 7, 3, 3),
(0, 9, 3, 3),
(0, 10, 3, 3),
(1, 1, 4, 4),
(1, 2, 5, 5),
(1, 3, 1, 1),
(1, 6, 4, 4),
(1, 7, 5, 5),
(1, 8, 1, 1),
(1, 10, 3, 3),
(2, 1, 4, 4),
(2, 2, 1, 1),
(2, 3, 1, 1),
(2, 4, 5, 5),
(2, 5, 3, 3),
(2, 6, 4, 4),
(2, 8, 1, 1),
(2, 9, 5, 5),
(2, 10, 3, 3),
(3, 2, 5, 5),
(3, 3, 1, 1),
(3, 4, 5, 5),
(3, 5, 3, 3),
(3, 6, 4, 4),
(3, 7, 5, 5),
(3, 8, 1, 1),
(3, 9, 5, 5),
(3, 10, 3, 3)], cSchema)
# -
input_df = ratings.select("userId")
input_df.toPandas().head(5)
input_df.join(reloadUserRecs, "userId").toPandas()
# +
store_name = "teamtaostorage "
key = "kXp+RFtHAdR4NO53TtyyOcDXvCziwEWT+dYEpvBKIH6k1hQ9+2u4FBMhC/oK/msY/oCBvj+Gr5/PQynyX4rzFQ=="
container = "recommendationhackathon"
spark._jsc.hadoopConfiguration().set('fs.azure.account.key.' + store_name + '.blob.core.windows.net',key)
# -
wasb = "wasb://" + container + "@" + store_name + ".blob.core.windows.net/test.parquet"
userRecs.write.parquet(wasb)
# +
from azureml.api.schema.dataTypes import DataTypes
from azureml.api.schema.sampleDefinition import SampleDefinition
from azureml.api.realtime.services import generate_schema
import pandas
import os # needed for the AML_MODEL_DC_DEBUG environment variable below
df = pandas.DataFrame(data=[[3.0]],
columns=['userId'])
# Turn on data collection debug mode to view output in stdout
os.environ["AML_MODEL_DC_DEBUG"] = 'true'
# Test the output of the functions
input1 = pandas.DataFrame([[3.0]])
inputs = {"input_df": SampleDefinition(DataTypes.PANDAS, df)}
# -
input1.iloc[0][0]
userRecs.select("userId", "recommendations.movieId").take(10)
# +
# Write configuration
writeConfig = {
"Endpoint" : "https://dcibrecommendationhack.documents.azure.com:443/",
"Masterkey" : "<KEY>3QeJYCSYg==",
"Database" : "recommendation_engine",
"Collection" : "user_recommendations",
"Upsert" : "true"
}
userRecs.select("userId", "recommendations.movieId").write.format("com.microsoft.azure.cosmosdb.spark").options(**writeConfig).save()
# -
MASTER_KEY = '<KEY>'
HOST = 'https://dcibrecommendationhack.documents.azure.com:443/'
# +
import pydocumentdb.documents as documents
import pydocumentdb.document_client as document_client
import pydocumentdb.errors as errors
import datetime
client = document_client.DocumentClient(HOST, {'masterKey': MASTER_KEY} )
# +
DATABASE_ID = "recommendation_engine"
COLLECTION_ID = "user_recommendations"
database_link = 'dbs/' + DATABASE_ID
collection_link = database_link + '/colls/' + COLLECTION_ID
def ReadDocument(client, doc_id):
print('\n1.2 Reading Document by Id\n')
    # Note that Reads require a partition key to be specified. This can be skipped if your collection is not
# partitioned i.e. does not have a partition key definition during creation.
doc_link = collection_link + '/docs/' + doc_id
response = client.ReadDocument(doc_link)
print('Document read by Id {0}'.format(doc_id))
print('Account Number: {0}'.format(response.get('account_number')))
def ReadDocuments(client):
print('\n1.3 - Reading all documents in a collection\n')
# NOTE: Use MaxItemCount on Options to control how many documents come back per trip to the server
# Important to handle throttles whenever you are doing operations such as this that might
# result in a 429 (throttled request)
    documentlist = list(client.ReadDocuments(collection_link, {'maxItemCount':10}))
print('Found {0} documents'.format(documentlist.__len__()))
for doc in documentlist:
print('Document Id: {0}'.format(doc.get('id')))
# -
db = client.ReadDatabase(database_link)
collection = client.ReadCollection(collection_link=collection_link)
# +
# Query them in SQL
query = { 'query': 'SELECT * FROM server s WHERE s.userId = 37' }
options = {}
result_iterable = client.QueryDocuments(collection['_self'], query, options)
results = list(result_iterable);
print(results)
# -
| ratings.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="MyyEW_2tWK5J"
# #INITIALIZING DRIVE
# + colab={"base_uri": "https://localhost:8080/"} id="O5UVtk-W1T9k" executionInfo={"status": "ok", "timestamp": 1626367282486, "user_tz": -120, "elapsed": 16987, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgbDS2ixgfA10NNqJ24T4oeCEeSGQD5h_UyI3kH=s64", "userId": "10833650089067527801"}} outputId="692dea45-7e14-4aa5-a433-d46d1a29234d"
from google.colab import drive
drive.mount('/content/gdrive')
#open the specific path
# + [markdown] id="6yOO2ud8WSsh"
# # LOAD DATASET
# + id="gEs75BQB4zzo"
import numpy as np
import scipy.io
# + id="Fh5YN8yI1_9r"
DATASET='AWA2'
PATH='/content/gdrive/MyDrive/tfvaegan/tfavegan/datasets' #change with your path
ATTRIBUTES_PATH = PATH + '/' + DATASET + '/att_splits.mat'
att_splits = scipy.io.loadmat(ATTRIBUTES_PATH)
attrs = att_splits['att'].transpose()
CLASS=50
ATTRIBUTES=85
TEST=10
#parameters of AWA dataset, change for other datasets
# + [markdown] id="P-mZSlY6FWlz"
# # More zeros unseen class split
# + colab={"base_uri": "https://localhost:8080/"} id="iH2SNJCTEl--" executionInfo={"status": "ok", "timestamp": 1626368223154, "user_tz": -120, "elapsed": 21, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgbDS2ixgfA10NNqJ24T4oeCEeSGQD5h_UyI3kH=s64", "userId": "10833650089067527801"}} outputId="bd20ad57-cd42-4b04-d6c7-79d176bb763e"
ax=np.zeros(CLASS)
for i,arr in enumerate(attrs):
ax[i]=np.count_nonzero(arr == 0)
zeros_split=[]
zeros_split.append((-ax).argsort()[:72])
print("More zeros unseen class split: ",zeros_split)
# + [markdown] id="YFijv9WQIIwx"
# # More discriminative unseen class split
# + colab={"base_uri": "https://localhost:8080/"} id="YKT930dBFoPR" executionInfo={"status": "ok", "timestamp": 1626368965200, "user_tz": -120, "elapsed": 341, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgbDS2ixgfA10NNqJ24T4oeCEeSGQD5h_UyI3kH=s64", "userId": "10833650089067527801"}} outputId="f36cf520-849c-4bab-959b-907a0be97003"
rank=[]
sorted_att=[]
attrsT=np.transpose(attrs)
for a in attrsT:
rank.append((a).argsort()[:CLASS].tolist())
sorted_att.append(np.sort(a))
discr=np.zeros(CLASS)
f=np.zeros(CLASS)
for i in range(CLASS):
f[i]=pow((i-(CLASS/2)),2)
for i in range(len(rank)):
for j in range(len(rank[0])):
discr[rank[i][j]]+=f[j]*abs(sorted_att[i][j])
sorted_d=np.sort(discr)
print("More discriminative unseen class split: ",np.where(np.isin(discr,sorted_d[CLASS-TEST:CLASS])==True))
# + [markdown] id="p1v2rAt9JYOI"
# # The furthest unseen class
# + [markdown] id="7tgo4gmdVt-C"
# ### Build the nearness matrix
# + id="zkPWj7djU3RC"
def subtract(colA,colB):
distance=0
for i in range(len(colA)):
distance+=np.abs(colA[i]-colB[i])
return distance
nearness=np.zeros((CLASS,CLASS))
for i in range(CLASS):
for j in range(CLASS):
if(j==i):
k=j+1
else:
k=j
if(k!=CLASS):
if(i<k):
nearness[i][k]=subtract(attrs[i],attrs[k])
else:
nearness[i][k]=nearness[k][i]
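# Optional cross-check (an illustrative sketch, not part of the original split code): the same
# class-to-class L1 nearness matrix can be computed in one vectorized step, assuming attrs has
# shape (CLASS, ATTRIBUTES) as loaded above.
nearness_vectorized = np.abs(attrs[:, None, :] - attrs[None, :, :]).sum(axis=2)
assert np.allclose(nearness, nearness_vectorized)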
# + [markdown] id="X3ov8SH5V228"
# ### Find the split
# + colab={"base_uri": "https://localhost:8080/"} id="IX71oDYIK0Ps" executionInfo={"status": "ok", "timestamp": 1626372150887, "user_tz": -120, "elapsed": 423, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgbDS2ixgfA10NNqJ24T4oeCEeSGQD5h_UyI3kH=s64", "userId": "10833650089067527801"}} outputId="0c244248-f2ea-46f1-ee01-aeaef7e75373"
def get_where(c,p):
arr=np.isin(c,p)
result=np.where(arr==False)
return result[0]
def max_guad(pos,unseen):
min=counter[unseen[0]]-2*nearness[unseen[0]][pos]-nearness[unseen[0]][unseen].sum()
temp=unseen[0]
for x in unseen:
a=(counter[x]-2*nearness[x][pos]-nearness[x][unseen].sum())
if(a<min):
min=a
temp=x
if(min<10000000000000000):
swap[pos]=temp
return min
def fill_gain(seen, unseen, gain):
for i,v in enumerate(nearness[seen]):
m_guad=max_guad(seen[i],unseen)
gain[seen[i]]=v[seen].sum()-nearness[seen[i]][unseen].sum()-m_guad
return gain
def fill_counter(unseen,counter):
for i,v in enumerate(nearness[unseen]):
counter[unseen[i]]=v.sum()-v[unseen].sum()
return counter
swap=np.zeros(CLASS)
z=[]
for i in range(TEST):
z.append(i)
unseen=np.array(z)
all_class=range(CLASS)
seen=get_where(all_class,unseen)
counter=np.zeros(CLASS)
gain=np.zeros(CLASS)
counter=fill_counter(unseen,counter)
gain=fill_gain(seen,unseen,gain)
while (gain[np.argmax(gain)]>0):
max_gain=np.argmax(gain)
index=np.where(seen==max_gain)[0]
seen=np.insert(np.delete(seen,index),index,swap[max_gain])
index=np.where(unseen==swap[max_gain])[0]
unseen=np.insert(np.delete(unseen,index),index,max_gain)
counter=np.zeros(CLASS)
gain=np.zeros(CLASS)
counter=fill_counter(unseen,counter)
gain=fill_gain(seen,unseen,gain)
print('The furthest unseen class split: ',unseen)
# + id="Dpy7j5T5W0xh"
def sottrai(colA,colB):
somma=0
for i in range(len(colA)):
g=np.abs(colA[i]-colB[i])
somma+=g
return somma
attrsT=np.transpose(attrs)
# Pairwise L1 distance between attribute columns (ATTRIBUTES x ATTRIBUTES nearness matrix)
vicinanza=np.zeros((ATTRIBUTES,ATTRIBUTES))
for i in range(ATTRIBUTES):
    for j in range(ATTRIBUTES):
        if(j==i):
            k=j+1
        else:
            k=j
        if(k!=ATTRIBUTES):
            vicinanza[k][i]=sottrai(attrsT[i],attrsT[k])
arr=[]
for i,s in enumerate(vicinanza):
    temp=s
    temp[i]=1000
    for j in range(ATTRIBUTES):
        arr.append((i,j,temp[j]))
arr1=[]
arr.sort(key = lambda x: x[2] )
for t in range(100):
arr1.append(arr[2*t])
for ar in arr1:
print(ar)
def stats(arr):
    a=np.zeros(ATTRIBUTES)
    for s in arr:
        (i,j,k)=s
        a[i]+=1
        a[j]+=1
    print((-a).argsort()[:10])
    print(a)
stats(arr1)
| notebook/Split_generator.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### This notebook demonstrates the use of post-classification filters created by the MapBiomas team.
# The MapBiomas team provides guidance on post-classification filters, including gap filling, spatial filters, temporal filters, incidence filters, and frequency filters. These filters were implemented in the module [post_classification_filters.py] of the [wri_change_detection repository](https://github.com/wri/rw-dynamicworld-cd/tree/master/wri_change_detection). This notebook is provided to give a tutorial on how to apply these filters to a land cover classification time series over one area of Brazil, classified using Dynamic World.
#
# You can learn more about the MapBiomas project at their [home page](https://mapbiomas.org/). The development of MapBiomas was done by several groups for each biome and cross-cutting theme that occurs in Brazil. You can read more of the methodology in the [Algorithm Theoretical Basis Document (ATBD) Page](https://mapbiomas.org/en/download-of-atbds) on their website, including the main ATBD and appendices for each biome and cross-cutting theme.
#
# From Section 3.5 of the ATBD, MapBiomas defines post-classification filters,
# "[due] to the pixel-based classification method and the long temporal series, a chain of post-classification filters was applied. The first post-classification action involves the application of temporal filters. Then, a spatial filter was applied followed by a gap fill filter. The application of these filters remove classification noise.
# These post-classification procedures were implemented in the Google Earth Engine platform"
#
# Below is the copy of the licensing for MapBiomas:
# The MapBiomas data are public, open and free through license Creative Commons CC-CY-SA and the simple reference of the source observing the following format:
# "Project MapBiomas - Collection v5.0 of Brazilian Land Cover & Use Map Series, accessed on 12/14/2020 through the link: https://github.com/mapbiomas-brazil/mapbiomas-brazil.github.io"
# "MapBiomas Project - is a multi-institutional initiative to generate annual land cover and use maps using automatic classification processes applied to satellite images. The complete description of the project can be found at http://mapbiomas.org".
# Access here the scientific publication: Souza at. al. (2020) - Reconstructing Three Decades of Land Use and Land Cover Changes in Brazilian Biomes with Landsat Archive and Earth Engine - Remote Sensing, Volume 12, Issue 17, 10.3390/rs12172735.
#
#
# This notebook includes 6 Steps:
# 1. Loading land cover classifications from Dynamic World
# 2. Applying Gap Filling for Clouds
# 3. Applying Temporal Filters
# 4. Applying Spatial Filters
# 5. Applying Incidence Filters
# 6. Applying Frequency Filters
#
# ## Step 0: Load libraries and initialize Earth Engine
# +
#Load necessary libraries
import sys
import os
import ee
import geemap
import numpy as np
import pandas as pd
from IPython.display import HTML, display
from ipyleaflet import Map, basemaps
import random
import json
import time
import ast
# relative import for this folder hierarchy, credit: https://stackoverflow.com/a/35273613
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
from wri_change_detection import preprocessing as npv
from wri_change_detection import gee_classifier as gclass
from wri_change_detection import post_classification_filters as pcf
# -
# <font size="4">Iniatilize Earth Engine and Google Cloud authentication</font>
#Initialize earth engine
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
# <font size="4">Define a seed number to ensure reproducibility across random processes. This seed will be used in all subsequent sampling as well. We'll also define seeds for sampling the training, validation, and test sets.</font>
num_seed=30
random.seed(num_seed)
# ## Step 1: Load Land Cover Classifications and Label Data
#
#
# <font size="4">
#
# Define land cover classification image collection, with one image for each time period. Each image should have one band representing the classification in that pixel for one time period.</font>
# +
#Load collection
#This collection represents monthly dynamic world classifications of land cover, later we'll squash it to annual
dynamic_world_classifications_monthly = ee.ImageCollection('projects/wings-203121/assets/dynamic-world/v3-5_stack_tests/wri_test_goldsboro')
#Get classes from first image
dw_classes = dynamic_world_classifications_monthly.first().bandNames()
dw_classes_str = dw_classes.getInfo()
full_dw_classes_str = ['No Data']+dw_classes_str
#Get dictionary of classes and values
#Define array of land cover classification values
dw_class_values = np.arange(1,10).tolist()
dw_class_values_ee = ee.List(dw_class_values)
#Create dictionary representing land cover classes and land cover class values
dw_classes_dict = ee.Dictionary.fromLists(dw_classes, dw_class_values_ee)
#Make sure the dictionary looks good
print(dw_classes_dict.getInfo())
# -
# <font size="4">Define color palettes to map land cover</font>
# +
change_detection_palette = ['#ffffff', # no_data=0
'#419bdf', # water=1
'#397d49', # trees=2
'#88b053', # grass=3
'#7a87c6', # flooded_vegetation=4
'#e49535', # crops=5
'#dfc25a', # scrub_shrub=6
'#c4291b', # builtup=7
'#a59b8f', # bare_ground=8
'#a8ebff', # snow_ice=9
'#616161', # clouds=10
]
statesViz = {'min': 0, 'max': 10, 'palette': change_detection_palette};
oneChangeDetectionViz = {'min': 0, 'max': 1, 'palette': ['696a76','ff2b2b']}; #gray = 0, red = 1
consistentChangeDetectionViz = {'min': 0, 'max': 1, 'palette': ['0741df','df07b5']}; #blue = 0, pink = 1
# -
# <font size="4">Gather projection and geometry information from the land cover classifications</font>
# +
projection_ee = dynamic_world_classifications_monthly.first().projection()
projection = projection_ee.getInfo()
crs = projection.get('crs')
crsTransform = projection.get('transform')
scale = dynamic_world_classifications_monthly.first().projection().nominalScale().getInfo()
print('CRS and Transform: ',crs, crsTransform)
geometry = dynamic_world_classifications_monthly.first().geometry().bounds()
# -
# <font size="4">Convert the land cover collection to a multiband image, one band for each year</font>
# + code_folding=[]
#Define years to get annual classifications for
years = np.arange(2016,2020)
#Squash scenes from monthly to annual
dynamic_world_classifications = npv.squashScenesToAnnualClassification(dynamic_world_classifications_monthly,years,method='median',image_name='dw_{}')
#Get image names
dw_band_names = dynamic_world_classifications.aggregate_array('system:index').getInfo()
#Convert to a multiband image and rename using dw_band_names
dynamic_world_classifications_image = dynamic_world_classifications.toBands().rename(dw_band_names)
# -
# <font size="4">
# Load label data so we can later compare the land cover classifications against it, and export sampled label points to make that comparison.
# </font>
# +
#only labels for regions in Modesto CA, Goldsboro NC, the Everglades in FL, and one region in Brazil have been
#uploaded to this collection
labels = ee.ImageCollection('projects/wri-datalab/DynamicWorld_CD/DW_Labels')
#Filter to where we have DW classifications
labels_filtered = labels.filterBounds(dynamic_world_classifications_monthly.geometry())
print('Number of labels that overlap classifications', labels_filtered.size().getInfo())
#Save labels projection
labels_projection = labels_filtered.first().projection()
#Define geometry to sample points from
labels_geometry = labels_filtered.geometry().bounds()
#Compress labels by majority vote
labels_filtered = labels_filtered.reduce(ee.Reducer.mode())
#Remove pixels that were classified as no data
labels_filtered = labels_filtered.mask(labels_filtered.neq(0))
#Rename band
labels_filtered = labels_filtered.rename(['labels'])
#Sample points from label image at every pixel
labelPoints = labels_filtered.sample(region=labels_geometry, projection=labels_projection,
factor=1,
seed=num_seed, dropNulls=True,
geometries=True)
#Export sampled points
labelPoints_export_name = 'goldsboro'
labelPoints_assetID = 'projects/wri-datalab/DynamicWorld_CD/DW_LabelPoints_{}'
labelPoints_description = 'DW_LabelPoints_{}'
export_results_task = ee.batch.Export.table.toAsset(
collection=labelPoints,
description = labelPoints_description.format(labelPoints_export_name),
assetId = labelPoints_assetID.format(labelPoints_export_name))
export_results_task.start()
# -
# <font size="4">
# Map years to check them out.</font>
#Map years to check them out!
center = [35.410769, -78.100163]
zoom = 12
Map1 = geemap.Map(center=center, zoom=zoom,basemap=basemaps.Esri.WorldImagery,add_google_map = False)
Map1.addLayer(dynamic_world_classifications_image.select('dw_2016'),statesViz,name='2016 DW LC')
Map1.addLayer(dynamic_world_classifications_image.select('dw_2017'),statesViz,name='2017 DW LC')
Map1.addLayer(dynamic_world_classifications_image.select('dw_2018'),statesViz,name='2018 DW LC')
Map1.addLayer(dynamic_world_classifications_image.select('dw_2019'),statesViz,name='2019 DW LC')
Map1.addLayer(labels_filtered,statesViz,name='Labels')
display(Map1)
# <font size="4">
# Calculate Accuracy and Confusion Matrix for Original Classifications on Label Data</font>
# +
#Load label points
labelPointsFC = ee.FeatureCollection(labelPoints_assetID.format('goldsboro'))
#Save 2019 DW classifications and rename to "dw_classifications"
dw_2019 = dynamic_world_classifications_image.select('dw_2019').rename('dw_classifications')
#Sample the 2019 classifications at each label point
labelPointsWithDW = dw_2019.sampleRegions(collection=labelPointsFC, projection = projection_ee,
tileScale=4, geometries=True)
#Calculate confusion matrix, which we will use for an accuracy assessment
originalErrorMatrix = labelPointsWithDW.errorMatrix('labels', 'dw_classifications')
#Print the confusion matrix with the class names as a dataframe
errorMatrixDf = gclass.pretty_print_confusion_matrix_multiclass(originalErrorMatrix, full_dw_classes_str)
#Axis 1 (the rows) of the matrix correspond to the actual values, and Axis 0 (the columns) to the predicted values.
print('Axis 1 (the rows) of the matrix correspond to the actual values, and Axis 0 (the columns) to the predicted values.')
display(errorMatrixDf)
#You can also print further accuracy scores from the confusion matrix, however each one takes a couple minutes
#to load
print('Accuracy',originalErrorMatrix.accuracy().getInfo())
# print('Consumers Accuracy',originalErrorMatrix.consumersAccuracy().getInfo())
# print('Producers Accuracy',originalErrorMatrix.producersAccuracy().getInfo())
# print('Kappa',originalErrorMatrix.kappa().getInfo())
# +
#Calculate the number of changes for each year
for year in years[0:-1]:
year_list = ['dw_{}'.format(year),'dw_{}'.format(year+1)]
num_changes = pcf.calculateNumberOfChanges(dynamic_world_classifications_image.select(year_list), year_list)
num_changes_mean = num_changes.reduceRegion(reducer=ee.Reducer.mean(),
geometry=geometry,
crs=crs, crsTransform=crsTransform,
bestEffort=True,
maxPixels=1e13, tileScale=4)
print('Number of changes from',year,'to',year+1,"{:.4f}".format(num_changes_mean.get('sum').getInfo()))
# -
# ## Begin Applying Filters
#
# <font size="4">
#
# Now that we have prepared all of the necessary variables to do our post-processing, we'll start applying the filters defined by MapBiomas. While the filters are designed to be applied serially, here we'll apply each filter individually (after the gap filling) in order to see the performance of each one on its own, mainly because we only have so many years of Dynamic World. For each filter, we'll apply the filter, then find the overall accuracy against the training data.
# </font>
# ## Step 2: Apply Gap Filling
#
#
# <font size="4">Section 3.5.1. of the ATBD: Gap fill:
# The Gap fill filter was used to fill possible no-data values. In a long time series of severely cloud-affected regions, it is expected that no-data values may populate some of the resultant median composite pixels. In this filter, no-data values (“gaps”) are theoretically not allowed and are replaced by the temporally nearest valid classification. In this procedure, if no “future” valid position is available, then the no-data value is replaced by its previous valid class. Up to three prior years can be used to fill in persistent no-data positions. Therefore, gaps should only exist if a given pixel has been permanently classified as no-data throughout the entire temporal domain.
#
# All code for the Gap Filters was provided by the [Pampa Team](https://github.com/mapbiomas-brazil/pampa) in [this file](https://github.com/mapbiomas-brazil/pampa/blob/master/Step006_Filter_01_gagfill.js), although the same gap fill is applied to all cross-cutting themes and biome groups.
#
# Functions were rewritten in Python and made independent of the land cover classification image. The implementation of the gap fill in the MapBiomas code actually applies both a forward no-data filter and a backwards no-data filter.
#
# For the demo Dynamic World classifications in this notebook, none of the years have any missing data! Therefore we'll introduce some fake missing data areas in order to demonstrate the gap filling.</font>
# +
#Introducing no data pixels for some years
dw_2016_with_gaps = dynamic_world_classifications_image.select('dw_2016').mask(dynamic_world_classifications_image.select('dw_2016').neq(ee.Image.constant(3)))
dw_2017_with_gaps = dynamic_world_classifications_image.select('dw_2017').mask(dynamic_world_classifications_image.select('dw_2017').neq(ee.Image.constant(5)))
dw_2019_with_gaps = dynamic_world_classifications_image.select('dw_2019').mask(dynamic_world_classifications_image.select('dw_2019').neq(ee.Image.constant(1)))
dw_with_gaps = dw_2016_with_gaps.addBands(dw_2017_with_gaps).addBands(dynamic_world_classifications_image.select('dw_2018')).addBands(dw_2019_with_gaps)
dw_with_gaps = dw_with_gaps.rename(dw_band_names)
#Apply gap filtering
gap_filled = pcf.applyGapFilter(dw_with_gaps, dw_band_names)
# -
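# <font size="4">For intuition, the forward pass of a gap fill can be sketched in a few lines. This is an illustrative assumption, not the `pcf.applyGapFilter` implementation (which also applies a backwards pass): each year's band is unmasked with the most recent previously valid band.</font>
# +
#Illustrative sketch only: forward-fill each band with the last valid band before it
def simple_forward_gap_fill(image, band_names):
    filled_bands = [image.select(band_names[0])]
    for band in band_names[1:]:
        #unmask() replaces masked (no-data) pixels with values from the previously filled band
        filled_bands.append(image.select(band).unmask(filled_bands[-1]))
    return ee.Image.cat(filled_bands).rename(band_names)
forward_filled_sketch = simple_forward_gap_fill(dw_with_gaps, dw_band_names)
# -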
# <font size="4">Map the before and after to see the affects of the gap filtering</font>
#Map years to check them out!
center = [35.410769, -78.100163]
zoom = 12
Map2 = geemap.Map(center=center, zoom=zoom,basemap=basemaps.Esri.WorldImagery,add_google_map = False)
Map2.addLayer(dw_with_gaps.select('dw_2016'),statesViz,name='2016 DW LC')
Map2.addLayer(gap_filled.select('dw_2016'),statesViz,name='2016 Gap Filled')
Map2.addLayer(dw_with_gaps.select('dw_2017'),statesViz,name='2017 DW LC')
Map2.addLayer(gap_filled.select('dw_2017'),statesViz,name='2017 Gap Filled')
Map2.addLayer(dw_with_gaps.select('dw_2018'),statesViz,name='2018 DW LC')
Map2.addLayer(gap_filled.select('dw_2018'),statesViz,name='2018 Gap Filled')
Map2.addLayer(dw_with_gaps.select('dw_2019'),statesViz,name='2019 DW LC')
Map2.addLayer(gap_filled.select('dw_2019'),statesViz,name='2019 Gap Filled')
display(Map2)
# <font size="4">Calculate accuracy and confusion matrix for gap filled classifications on label data</font>
# +
#Load label points
labelPointsFC = ee.FeatureCollection(labelPoints_assetID.format('goldsboro'))
#Save the gap-filled 2019 DW classifications and rename the band to "dw_gap_filled_classifications"
classifications_filtered_2019 = gap_filled.select('dw_2019').rename('dw_gap_filled_classifications')
#Sample the 2019 classifications at each label point
labelPointsWithFilteredDW = classifications_filtered_2019.sampleRegions(collection=labelPointsFC,
projection = projection_ee,
tileScale=4, geometries=True)
#Calculate confusion matrix, which we will use for an accuracy assessment
filteredErrorMatrix = labelPointsWithFilteredDW.errorMatrix('labels', 'dw_gap_filled_classifications')
#Print the confusion matrix with the class names as a dataframe
errorMatrixDf = gclass.pretty_print_confusion_matrix_multiclass(filteredErrorMatrix, full_dw_classes_str)
#Axis 1 (the rows) of the matrix correspond to the actual values, and Axis 0 (the columns) to the predicted values.
print('Axis 1 (the rows) of the matrix correspond to the actual values, and Axis 0 (the columns) to the predicted values.')
display(errorMatrixDf)
#You can also print further accuracy scores from the confusion matrix, however each one takes a couple minutes
#to load
print('Accuracy',filteredErrorMatrix.accuracy().getInfo())
# print('Consumers Accuracy',originalErrorMatrix.consumersAccuracy().getInfo())
# print('Producers Accuracy',originalErrorMatrix.producersAccuracy().getInfo())
# print('Kappa',originalErrorMatrix.kappa().getInfo())
# +
#Calculate the number of changes for each year
for year in years[0:-1]:
year_list = ['dw_{}'.format(year),'dw_{}'.format(year+1)]
num_changes = pcf.calculateNumberOfChanges(gap_filled.select(year_list), year_list)
num_changes_mean = num_changes.reduceRegion(reducer=ee.Reducer.mean(),
geometry=geometry,
crs=crs, crsTransform=crsTransform,
bestEffort=True,
maxPixels=1e13, tileScale=4)
print('Number of changes from',year,'to',year+1,"{:.4f}".format(num_changes_mean.get('sum').getInfo()))
# -
# ## Step 3: Apply Temporal Filters
#
# <font size="4">
# <br>
# Section 3.5.3. of the ATBD: Temporal filter:
# "The temporal filter uses sequential classifications in a three-to-five-years unidirectional moving window to identify temporally non-permitted transitions. Based on generic rules (GR), the temporal filter inspects the central position of three to five consecutive years, and if the extremities of the consecutive years are identical but the centre position is not, then the central pixels are reclassified to match its temporal neighbour class. For the three years based temporal filter, a single central position shall exist, for the four and five years filters, two and there central positions are respectively considered.
# Another generic temporal rule is applied to extremity of consecutive years. In this case, a three consecutive years window is used and if the classifications of the first and last years are different from its neighbours, this values are replaced by the classification of its matching neighbours."
#
# All code for the Temporal Filters was provided by the [Pampa Team](https://github.com/mapbiomas-brazil/pampa) in [this file](https://github.com/mapbiomas-brazil/pampa/blob/master/Step006_Filter_03_temporal.js)
#
# Functions were rewritten in Python and made independent of the land cover classification image.
#
# The MapBiomas implementation of the temporal filters includes the ability to perform the temporal filtering for one land cover class at a time.</font>
# +
#Load classifications into an image that will be filtered
temporally_filtered = dynamic_world_classifications_image
#Get a list of land cover values to apply the filters
class_dictionary = dw_classes_dict.getInfo()
order_of_values = [class_dictionary.get('trees'),class_dictionary.get('crops'),class_dictionary.get('built_area'),
class_dictionary.get('grass'),class_dictionary.get('scrub'),class_dictionary.get('bare_ground'),
class_dictionary.get('flooded_vegetation'),class_dictionary.get('water'),class_dictionary.get('snow_and_ice')]
#Loop through order_of_values and apply temporal filters, in order applied by MapBiomas in https://github.com/mapbiomas-brazil/pampa/blob/master/Step006_Filter_03_temporal.js
#We'll first apply the filter to the first year
#Then apply the filter for the final year
#Then apply the 3 year window, 4 year window, and 5 year window
for i in np.arange(len(order_of_values)):
id_class = order_of_values[i]
temporally_filtered = pcf.applyMask3first(temporally_filtered, id_class, dw_band_names)
for i in np.arange(len(order_of_values)):
id_class = order_of_values[i]
temporally_filtered = pcf.applyMask3last(temporally_filtered, id_class, dw_band_names)
for i in np.arange(len(order_of_values)):
id_class = order_of_values[i]
temporally_filtered = pcf.applyWindow3years(temporally_filtered, id_class, dw_band_names)
for i in np.arange(len(order_of_values)):
id_class = order_of_values[i]
temporally_filtered = pcf.applyWindow4years(temporally_filtered, id_class, dw_band_names)
for i in np.arange(len(order_of_values)):
id_class = order_of_values[i]
temporally_filtered = pcf.applyWindow5years(temporally_filtered, id_class, dw_band_names)
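#Illustrative sketch (an assumption, not the pcf implementation) of the core 3-year window rule
#for a single class value v: if year t-1 and year t+1 are class v but year t is not, set year t to v.
def window3_sketch(image, v, band_names):
    out = [image.select(band_names[0])]
    for prev_b, cur_b, next_b in zip(band_names[:-2], band_names[1:-1], band_names[2:]):
        cur = image.select(cur_b)
        #Pixel must be v in the neighbouring years and not v in the central year
        fix = image.select(prev_b).eq(v).And(image.select(next_b).eq(v)).And(cur.neq(v))
        out.append(cur.where(fix, v))
    out.append(image.select(band_names[-1]))
    return ee.Image.cat(out).rename(band_names)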
# +
#Map before and after along with a layer to see pixels that changed
changed_with_temporal_filter = temporally_filtered.select('dw_2017').neq(dynamic_world_classifications_image.select('dw_2017'))
center = [35.410769, -78.100163]
zoom = 12
Map3 = geemap.Map(center=center, zoom=zoom,basemap=basemaps.Esri.WorldImagery,add_google_map = False)
Map3.addLayer(dynamic_world_classifications_image.select('dw_2016'),statesViz,name='2016 LC')
Map3.addLayer(dynamic_world_classifications_image.select('dw_2017'),statesViz,name='2017 LC')
Map3.addLayer(dynamic_world_classifications_image.select('dw_2018'),statesViz,name='2018 LC')
Map3.addLayer(dynamic_world_classifications_image.select('dw_2019'),statesViz,name='2019 LC')
Map3.addLayer(temporally_filtered.select('dw_2016'),statesViz,name='2016 Post Filter')
Map3.addLayer(temporally_filtered.select('dw_2017'),statesViz,name='2017 Post Filter')
Map3.addLayer(temporally_filtered.select('dw_2018'),statesViz,name='2018 Post Filter')
Map3.addLayer(temporally_filtered.select('dw_2019'),statesViz,name='2019 Post Filter')
Map3.addLayer(changed_with_temporal_filter,oneChangeDetectionViz,name='LC Classes in 2017 that changed after filter')
#Grey areas show no change with the filter, red areas show change with the filter
display(Map3)
# -
# <font size="4">Calculate accuracy and confusion matrix for temporally filtered classifications on label data</font>
# +
#Load label points
labelPointsFC = ee.FeatureCollection(labelPoints_assetID.format('goldsboro'))
#Save the temporally filtered 2019 DW classifications and rename the band to "dw_temp_filt_classifications"
classifications_filtered_2019 = temporally_filtered.select('dw_2019').rename('dw_temp_filt_classifications')
#Sample the 2019 classifications at each label point
labelPointsWithFilteredDW = classifications_filtered_2019.sampleRegions(collection=labelPointsFC,
projection = projection_ee,
tileScale=4, geometries=True)
#Calculate confusion matrix, which we will use for an accuracy assessment
filteredErrorMatrix = labelPointsWithFilteredDW.errorMatrix('labels', 'dw_temp_filt_classifications')
#Print the confusion matrix with the class names as a dataframe
errorMatrixDf = gclass.pretty_print_confusion_matrix_multiclass(filteredErrorMatrix, full_dw_classes_str)
#Axis 1 (the rows) of the matrix correspond to the actual values, and Axis 0 (the columns) to the predicted values.
print('Axis 1 (the rows) of the matrix correspond to the actual values, and Axis 0 (the columns) to the predicted values.')
display(errorMatrixDf)
#You can also print further accuracy scores from the confusion matrix, however each one takes a couple minutes
#to load
print('Accuracy',filteredErrorMatrix.accuracy().getInfo())
# print('Consumers Accuracy',originalErrorMatrix.consumersAccuracy().getInfo())
# print('Producers Accuracy',originalErrorMatrix.producersAccuracy().getInfo())
# print('Kappa',originalErrorMatrix.kappa().getInfo())
# +
#Calculate the number of changes for each year
for year in years[0:-1]:
year_list = ['dw_{}'.format(year),'dw_{}'.format(year+1)]
num_changes = pcf.calculateNumberOfChanges(temporally_filtered.select(year_list), year_list)
num_changes_mean = num_changes.reduceRegion(reducer=ee.Reducer.mean(),
geometry=geometry,
crs=crs, crsTransform=crsTransform,
bestEffort=True,
maxPixels=1e13, tileScale=4)
print('Number of changes from',year,'to',year+1,"{:.4f}".format(num_changes_mean.get('sum').getInfo()))
# -
# ## Step 4: Apply Spatial Filters
# <font size="4">
# <br>
# Section 3.5.2. of the ATBD: Spatial filter:
# The spatial filter was applied to avoid unwanted modifications to the edges of the pixel groups (blobs); it was built based on the “connectedPixelCount” function. Native to the GEE platform, this function locates connected components (neighbours) that share the same pixel value. Thus, only pixels that do not share connections to a predefined number of identical neighbours are considered isolated. In this filter, at least five connected pixels are needed to reach the minimum connection value. Consequently, the minimum mapping unit is directly affected by the spatial filter applied, and it was defined as 5 pixels (~0.5 ha).
#
# All code for the spatial filter was provided within the [intregration-toolkit](https://github.com/mapbiomas-brazil/integration-toolkit) that is used to combine land cover classifications from each biome and cross-cutting theme team. The direct code was provided in [this file](https://github.com/mapbiomas-brazil/integration-toolkit/blob/master/mapbiomas-integration-toolkit.js).
#
# Functions were rewritten in Python and made independent of the land cover classification image.
#
# The spatial filters are applied for each land cover class defined by the user. For each class the user can define the minimum number of connected pixels needed to not filter out the cluster. If the number of connected pixels is too small, the central pixel is replaced by the mode of the time series.</font>
# +
#Define a list of dictionaries, where each dictionary contains 'classValue' representing the value of the land cover
#class and 'minSize' representing the minimum connectedPixelCount needed to not be replaced by the filter
# no_data=0
# water=1
# trees=2
# grass=3
# flooded_vegetation=4
# crops=5
# scrub_shrub=6
# builtup=7
# bare_ground=8
# snow_ice=9
# clouds=10
filterParams = [
{'classValue': 1, 'minSize': 5},
{'classValue': 2, 'minSize': 5},
{'classValue': 3, 'minSize': 5},
{'classValue': 4, 'minSize': 5},
{'classValue': 5, 'minSize': 10},
{'classValue': 6, 'minSize': 5},
{'classValue': 7, 'minSize': 3},
{'classValue': 8, 'minSize': 5},
{'classValue': 9, 'minSize': 10},
]
#Load classifications into an image that will be filtered
spatially_filtered = dynamic_world_classifications_image
#Define empty list to append outputted images from spatial filter
spatial_filter_output = []
#Loop through years
for band in dw_band_names:
#Apply spatial filter for one year using the filterParams
out_image = pcf.applySpatialFilter(spatially_filtered.select(band), filterParams)
#Append result to list
spatial_filter_output.append(out_image)
#Convert list to image collection, then to multiband image
spatially_filtered = ee.ImageCollection(spatial_filter_output).toBands().rename(dw_band_names)
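#Illustrative sketch (an assumption, not the pcf implementation) of the core primitive behind the
#spatial filter: pixels of a class whose connected patch of identical values is smaller than
#min_size are masked out, so they can then be refilled (e.g. with a focal or temporal mode).
def mask_small_patches_sketch(classification, class_value, min_size):
    class_mask = classification.eq(class_value)
    #connectedPixelCount counts neighbouring pixels with the same value, saturating at min_size
    patch_size = class_mask.connectedPixelCount(min_size, True)
    too_small = class_mask.And(patch_size.lt(min_size))
    return classification.updateMask(too_small.Not())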
# +
#Map the before and after!
#The spatial filter depends on the scale, so to see the final results, reproject the image to the original 10 m resolution
changed_with_spatial_filter = dynamic_world_classifications_image.select('dw_2017').neq(spatially_filtered.select('dw_2017').reproject(crs='EPSG:3857', scale=10))
center = [35.410769, -78.100163]
zoom = 12
Map4 = geemap.Map(center=center, zoom=zoom,basemap=basemaps.Esri.WorldImagery,add_google_map = False)
Map4.addLayer(dynamic_world_classifications_image.select('dw_2017'),statesViz,name='2017 LC')
Map4.addLayer(spatially_filtered.select('dw_2017').reproject(crs='EPSG:3857', scale=10),statesViz,name='2017 LC Post Spatial Filter')
Map4.addLayer(changed_with_spatial_filter,oneChangeDetectionViz,name='Changed with spatial filter')
#Grey areas show no change with the filter, red areas show change with the filter
display(Map4)
# -
# <font size="4">Calculate accuracy and confusion matrix for spatially filtered classifications on label data</font>
# +
#Load label points
labelPointsFC = ee.FeatureCollection(labelPoints_assetID.format('goldsboro'))
#Save the spatially filtered 2019 DW classifications and rename the band to "dw_temp_spatial_filt_classifications"
classifications_filtered_2019 = spatially_filtered.select('dw_2019').rename('dw_temp_spatial_filt_classifications')
#Sample the 2019 classifications at each label point
labelPointsWithFilteredDW = classifications_filtered_2019.sampleRegions(collection=labelPointsFC,
projection = projection_ee,
tileScale=4, geometries=True)
#Calculate confusion matrix, which we will use for an accuracy assessment
filteredErrorMatrix = labelPointsWithFilteredDW.errorMatrix('labels', 'dw_temp_spatial_filt_classifications')
#Print the confusion matrix with the class names as a dataframe
errorMatrixDf = gclass.pretty_print_confusion_matrix_multiclass(filteredErrorMatrix, full_dw_classes_str)
#Axis 1 (the rows) of the matrix correspond to the actual values, and Axis 0 (the columns) to the predicted values.
print('Axis 1 (the rows) of the matrix correspond to the actual values, and Axis 0 (the columns) to the predicted values.')
display(errorMatrixDf)
#You can also print further accuracy scores from the confusion matrix, however each one takes a couple minutes
#to load
print('Accuracy',filteredErrorMatrix.accuracy().getInfo())
# print('Consumers Accuracy',originalErrorMatrix.consumersAccuracy().getInfo())
# print('Producers Accuracy',originalErrorMatrix.producersAccuracy().getInfo())
# print('Kappa',originalErrorMatrix.kappa().getInfo())
# -
# ## Step 5 Apply Incidence Filter
# <font size="4">
# <br>
# Section 3.5.5. of the ATBD: Incident Filter
# "An incident filter were applied to remove pixels that changed too many times in the 34 years of time spam. All pixels that changed more than eight times and is connected to less than 6 pixels was replaced by the MODE value of that given pixel position in the stack of years. This avoids changes in the border of the classes and helps to stabilize originally noise pixel trajectories. Each biome and cross-cutting themes may have constituted customized applications of incident filters, see more details in its respective appendices."
#
# This was not clearly implemented in the MapBiomas code, so this filter was coded by the WRI Team. The incidence filter finds all pixels that changed more than numChangesCutoff times and are connected to fewer than connectedPixelCutoff pixels, then replaces those pixels with the mode value of that pixel position in the stack of years.
# </font>
# +
#Load classifications into an image that will be filtered
incident_filtered = dynamic_world_classifications_image
#Calculate the number of changes in each pixel before the incidence filter
num_changes = pcf.calculateNumberOfChanges(dynamic_world_classifications_image, dw_band_names)
#Apply incidence filter
incident_filtered = pcf.applyIncidenceFilter(incident_filtered, dw_band_names, dw_classes_dict,
numChangesCutoff = 2, connectedPixelCutoff=6)
#Calculate the number of changes in each pixel before the incidence filter
num_changes_post_incidence = pcf.calculateNumberOfChanges(incident_filtered, dw_band_names)
#Calculate the difference in the number of changes before and after the filter
changed_from_incidence = num_changes.neq(num_changes_post_incidence)
# -
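# <font size="4">An illustrative sketch (an assumption, not `pcf.applyIncidenceFilter`) of the incidence idea: frequently-changing pixels that sit in small connected patches are replaced by their temporal mode.</font>
# +
#Illustrative sketch only, reusing helpers already used in this notebook
def incidence_sketch(image, band_names, num_changes_cutoff, connected_cutoff):
    changes = pcf.calculateNumberOfChanges(image, band_names)
    frequent = changes.gt(num_changes_cutoff)
    #Size of the connected patch of frequently-changing pixels, saturating just above the cutoff
    patch_size = frequent.connectedPixelCount(connected_cutoff + 1, True)
    noisy = frequent.And(patch_size.lt(connected_cutoff))
    #Temporal mode across all years
    mode_image = ee.ImageCollection([image.select(b) for b in band_names]).reduce(ee.Reducer.mode())
    fixed = [image.select(b).where(noisy, mode_image) for b in band_names]
    return ee.Image.cat(fixed).rename(band_names)
# -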
#Map the results!
numChangesViz = {'min': 0, 'max': 3, 'palette': ['131b7a','04ecff']}; #dark blue = fewer changes, cyan = more changes
center = [35.410769, -78.100163]
zoom = 12
Map5 = geemap.Map(center=center, zoom=zoom,basemap=basemaps.Esri.WorldImagery,add_google_map = False)
Map5.addLayer(num_changes,numChangesViz,name='Number of Changes Pre Filter')
Map5.addLayer(num_changes_post_incidence,numChangesViz,name='Number of Changes Post Filter')
Map5.addLayer(changed_from_incidence,oneChangeDetectionViz,name='Changed with Filter')
display(Map5)
# <font size="4">Calculate accuracy and confusion matrix for incidence filtered classifications on label data</font>
# +
#Load label points
labelPointsFC = ee.FeatureCollection(labelPoints_assetID.format('goldsboro'))
#Save the incidence-filtered 2019 DW classifications and rename the band to "dw_temp_incidence_filt_classifications"
classifications_filtered_2019 = incident_filtered.select('dw_2019').rename('dw_temp_incidence_filt_classifications')
#Sample the 2019 classifications at each label point
labelPointsWithFilteredDW = classifications_filtered_2019.sampleRegions(collection=labelPointsFC,
projection = projection_ee,
tileScale=4, geometries=True)
#Calculate confusion matrix, which we will use for an accuracy assessment
filteredErrorMatrix = labelPointsWithFilteredDW.errorMatrix('labels', 'dw_temp_incidence_filt_classifications')
#Print the confusion matrix with the class names as a dataframe
errorMatrixDf = gclass.pretty_print_confusion_matrix_multiclass(filteredErrorMatrix, full_dw_classes_str)
#Axis 1 (the rows) of the matrix correspond to the actual values, and Axis 0 (the columns) to the predicted values.
print('Axis 1 (the rows) of the matrix correspond to the actual values, and Axis 0 (the columns) to the predicted values.')
display(errorMatrixDf)
#You can also print further accuracy scores from the confusion matrix, however each one takes a couple minutes
#to load
print('Accuracy',filteredErrorMatrix.accuracy().getInfo())
# print('Consumers Accuracy',originalErrorMatrix.consumersAccuracy().getInfo())
# print('Producers Accuracy',originalErrorMatrix.producersAccuracy().getInfo())
# print('Kappa',originalErrorMatrix.kappa().getInfo())
# -
# ## Step 6: Apply Frequency Filter
# <font size="4">
# <br>
# Section 3.5.5. of the ATBD: Frequency Filter
# "This filter takes into consideration the occurrence frequency throughout the entire time series. Thus, all class occurrence with less than given percentage of temporal persistence (eg. 3 years or fewer out of 33) are filtered out. This mechanism contributes to reducing the temporal oscillation associated to a given class, decreasing the number of false positives and preserving consolidated trajectories. Each biome and cross-cutting themes may have constituted customized applications of frequency filters, see more details in their respective appendices."
#
# This was not clearly implemented in the MapBiomas code, so this filter was coded by the WRI Team. All class occurrences with less than a given temporal persistence (e.g. 3 years or fewer) are replaced with the mode value of that pixel position in the stack of years.
# </font>
# +
#Load classifications into an image that will be filtered
frequency_filtered = dynamic_world_classifications_image
#Define filterParams that defines the class name and the minimum number of occurances that need to occur
filterParams = {'water':2,
'trees': 2,
'grass': 2,
'flooded_vegetation':2,
'crops': 2,
'scrub': 2,
'built_area': 2,
'bare_ground': 2,
'snow_and_ice': 2}
filterParams = ee.Dictionary(filterParams)
#Apply frequency filter
frequency_filtered = pcf.applyFrequencyFilter(frequency_filtered, dw_band_names,
dw_classes_dict, filterParams)
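#Illustrative sketch (an assumption, not pcf.applyFrequencyFilter) for a single class value v:
#pixels classified as v in fewer than min_occurrences years are replaced by the temporal mode.
def frequency_sketch_one_class(image, band_names, v, min_occurrences):
    occurrences = ee.ImageCollection([image.select(b).eq(v) for b in band_names]).reduce(ee.Reducer.sum())
    mode_image = ee.ImageCollection([image.select(b) for b in band_names]).reduce(ee.Reducer.mode())
    rare = occurrences.gt(0).And(occurrences.lt(min_occurrences))
    fixed = [image.select(b).where(rare.And(image.select(b).eq(v)), mode_image) for b in band_names]
    return ee.Image.cat(fixed).rename(band_names)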
# +
#Get binary images of the land cover classifications for the current year
binary_class_images = npv.convertClassificationsToBinaryImages(dynamic_world_classifications_image, dw_classes_dict)
#Get the frequency of each class through the years by reducing the image collection to an image
class_frequency = binary_class_images.reduce(ee.Reducer.sum().unweighted()).rename(filterParams.keys().getInfo())
#Get binary images of the land cover classifications for the current year
post_binary_class_images = npv.convertClassificationsToBinaryImages(frequency_filtered, dw_classes_dict)
#Get the frequency of each class through the years by reducing the image collection to an image
post_class_frequency = post_binary_class_images.reduce(ee.Reducer.sum().unweighted()).rename(filterParams.keys())
changed_from_frequency_filter = class_frequency.neq(post_class_frequency)
#Map the results!
numChangesViz = {'min': 0, 'max': 3, 'palette': ['131b7a','04ecff']}; #dark blue = fewer occurrences, cyan = more occurrences
center = [35.410769, -78.100163]
zoom = 12
Map4 = geemap.Map(center=center, zoom=zoom,basemap=basemaps.Esri.WorldImagery,add_google_map = False)
Map4.addLayer(class_frequency.select('grass'),numChangesViz,name='Number of Occurrences Pre Filter')
Map4.addLayer(post_class_frequency.select('grass'),numChangesViz,name='Number of Occurrences Post Filter')
Map4.addLayer(changed_from_frequency_filter.select('grass'),oneChangeDetectionViz,name='Changed with Filter')
display(Map4)
# -
# <font size="4">Calculate accuracy and confusion matrix for frequency filtered classifications on label data</font>
# +
#Load label points
labelPointsFC = ee.FeatureCollection(labelPoints_assetID.format('goldsboro'))
#Save the frequency-filtered 2019 DW classifications and rename the band to "dw_temp_frequency_filt_classifications"
classifications_filtered_2019 = frequency_filtered.select('dw_2019').rename('dw_temp_frequency_filt_classifications')
#Sample the 2019 classifications at each label point
labelPointsWithFilteredDW = classifications_filtered_2019.sampleRegions(collection=labelPointsFC,
projection = projection_ee,
tileScale=4, geometries=True)
#Calculate confusion matrix, which we will use for an accuracy assessment
filteredErrorMatrix = labelPointsWithFilteredDW.errorMatrix('labels', 'dw_temp_frequency_filt_classifications')
#Print the confusion matrix with the class names as a dataframe
errorMatrixDf = gclass.pretty_print_confusion_matrix_multiclass(filteredErrorMatrix, full_dw_classes_str)
#Axis 1 (the rows) of the matrix correspond to the actual values, and Axis 0 (the columns) to the predicted values.
print('Axis 1 (the rows) of the matrix correspond to the actual values, and Axis 0 (the columns) to the predicted values.')
display(errorMatrixDf)
#You can also print further accuracy scores from the confusion matrix, however each one takes a couple minutes
#to load
print('Accuracy',filteredErrorMatrix.accuracy().getInfo())
# print('Consumers Accuracy',originalErrorMatrix.consumersAccuracy().getInfo())
# print('Producers Accuracy',originalErrorMatrix.producersAccuracy().getInfo())
# print('Kappa',originalErrorMatrix.kappa().getInfo())
# -
| demo_notebooks/MapBiomas_Spatial_Temporal_Filters.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Tutorial 3: Synapses and Networks
#
# ## Neuronal connectivity
#
# Neurons are connected at specific sites called **synapses**. Usually, the axon of a **presynaptic** neuron will make contact with the dendrite (or soma) of a **postsynaptic** neuron in what's called a **chemical synapse**: **neurotransmitters** are transferred via the tiny gap between the two neurons (the **synaptic cleft**) thanks to biochemical processes. This generates a change in membrane potential at the postsynaptic site, called a **postsynaptic potential**. **Electrical synapses**, also known as **gap junctions**, can also be present -- in this case, specialized proteins make a direct electrical connection between neurons. [1]
#
# <center><img src="img/synapse.png" width="300"></center>
#
# <center>from [2]</center>
#
# In this tutorial we will connect two or more neurons in NEST to create our first neuronal networks. First, we will learn the basic commands following [3] and [4], and then apply these to a well-known balanced network known as the **Brunel network**. By the end of this tutorial, you should be able to understand that a network's stability and activity behavior is deeply influenced by its parametrization.
#
# ### Sources:
#
# >[1] <NAME> and <NAME>, "Spiking Neuron Models". Cambridge University Press, 2002
#
# >[2] <NAME> and <NAME>, "Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems". The MIT Press, 2001
#
# >[3] [PyNEST tutorial: Part 2, Populations of Neurons](https://nest-simulator.readthedocs.io/en/latest/tutorials/pynest_tutorial/part_2_populations_of_neurons.html)
#
# >[4] [PyNEST tutorial: Part 3, Connecting Networks with Synapses](https://nest-simulator.readthedocs.io/en/latest/tutorials/pynest_tutorial/part_3_connecting_networks_with_synapses.html)
# + [markdown] slideshow={"slide_type": "-"}
# ## Introduction: synapses in NEST
#
# All synapse types included in NEST can be found in the following link:
#
# >[5] [NEST Docs: All synapse models](https://nest-simulator.readthedocs.io/en/latest/models/synapses.html)
#
# Alternatively, you can verify all synapse models present in NEST using the following command:
# + slideshow={"slide_type": "slide"}
import nest
nest.Models('synapses')
# + [markdown] slideshow={"slide_type": "slide"}
# ## Synapse types
#
# The simplest synapse type is the `static_synapse`. In this case, the **synaptic weight**, a measure of how strongly the presynaptic neuron influences the postsynaptic neuron, is static, i.e. it does not change over time. Synaptic transmission, however, is not instantaneous, so a **synaptic delay** is defined as the time between the presynaptic neuron's activation (**action potential**) and the moment the postsynaptic potential is generated.
#
# In NEST, we can check the default values for all parameters using the `GetDefaults` function:
# + slideshow={"slide_type": "slide"}
nest.GetDefaults('static_synapse')
# + [markdown] slideshow={"slide_type": "slide"}
# Biological synapses, however, are constantly being created, destroyed and modified. Many models have been proposed to describe these processes. For synaptic modification, one of the most common models is **spike-timing-dependent plasticity (STDP)**, in which the synaptic weight changes according to the temporal order of pre- and postsynaptic spike times.
#
# In NEST, the most common STDP mechanisms are implemented in `stdp_synapse`. The change for **normalized** synaptic weights is described by
#
# \begin{equation}
# \Delta w = \begin{cases} - \lambda f_{-}(w) \times K_-(\Delta t) & \text{if $\Delta t \leq 0$,} \\
# \lambda f_{+}(w) \times K_+(\Delta t) & \text{if $\Delta t > 0$,} \end{cases}
# \end{equation}
#
# where $\Delta t \equiv t_{post} - t_{pre}$ and the temporal filter is defined as $K_{(+,-)}(\Delta t) = \exp(-|\Delta t| / \tau_{(+,-)})$. The update functions
#
# \begin{equation}
# f_{+}(w) = (1-w)^{\mu_{+}} \text{ and } f_{-}(w) = \alpha w^{\mu_{-}}
# \end{equation}
#
# create **synaptic potentiation** (stronger weights) when **causal spiking** is detected ($\Delta t > 0$), otherwise generating **synaptic depression** (weaker weights). This rule is also known as **temporally asymmetric Hebbian plasticity** and has been thoroughly studied under different parametrizations:
#
# | STDP Type | Parametrization | Ref. |
# |---------------------|----------------------------------|------|
# | multiplicative STDP | $\mu_{+}=\mu_{-}=1.0$ | [7] |
# | additive STDP | $\mu_{+}=\mu_{-}=0.0$ | [8] |
# | Guetig STDP | $\mu_{+}=\mu_{-}=[0.0, 1.0]$ | [6] |
# | van Rossum STDP | $\mu_{+}=0.0, \mu_{-} = 1.0$ | [9] |
#
# ### Sources:
#
# > [6] G<NAME> al. (2003). Learning input correlations through nonlinear temporally asymmetric hebbian plasticity. Journal of Neuroscience, 23:3697-3714 DOI: https://doi.org/10.1523/JNEUROSCI.23-09-03697.2003
#
# > [7] <NAME>, <NAME>, <NAME> (2001). Equilibrium properties of temporally asymmetric Hebbian plasticity. Physical Review Letters, 86:364-367. DOI: https://doi.org/10.1103/PhysRevLett.86.364
#
# > [8] <NAME>, <NAME>, <NAME> (2000). Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nature Neuroscience 3(9):919-926. DOI: https://doi.org/10.1038/78829
#
# > [9] <NAME>, <NAME>, <NAME> (2000). Stable Hebbian learning from spike timing-dependent plasticity. Journal of Neuroscience, 20(23):8812-8821. DOI: https://doi.org/10.1523/JNEUROSCI.20-23-08812.2000
# -
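# As a purely numerical illustration of the update rule above (not NEST code; the parameter values below are arbitrary assumptions, with $\mu_{+}=\mu_{-}=1$ as in multiplicative STDP), the weight change for a single spike pair can be written as:
# +
import numpy as np

def stdp_delta_w(w, delta_t, lam=0.01, alpha=1.0, mu_plus=1.0, mu_minus=1.0,
                 tau_plus=20.0, tau_minus=20.0):
    """Normalized STDP weight change for one pre/post spike pair (delta_t = t_post - t_pre, in ms)."""
    if delta_t > 0:  # causal pairing -> potentiation
        return lam * (1.0 - w) ** mu_plus * np.exp(-abs(delta_t) / tau_plus)
    # anti-causal (or coincident) pairing -> depression
    return -lam * alpha * w ** mu_minus * np.exp(-abs(delta_t) / tau_minus)

print(stdp_delta_w(0.5, +10.0), stdp_delta_w(0.5, -10.0))
# -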
# Again, we can check the default values for all parameters of `stdp_synapse` using the `GetDefaults` function:
# + slideshow={"slide_type": "slide"}
nest.GetDefaults('stdp_synapse')
# -
# You may have noticed that `tau_minus` is absent from the above list. For STDP synapse models, the time constant of the depressing window of STDP is, as an exception, a parameter of the **post-synaptic neuron**.
# + slideshow={"slide_type": "slide"}
nest.Create("iaf_psc_alpha", params={"tau_minus": 30.0})
# -
# To change the default value of an **accessible** parameter, we can use the function `SetDefaults`.
#
# Note that *only some parameters listed by `GetDefaults` are changeable*. Please verify the details of the synapse type you want to use beforehand. [[5]](https://nest-simulator.readthedocs.io/en/latest/models/synapses.html)
# + slideshow={"slide_type": "slide"}
nest.SetDefaults("stdp_synapse",{"tau_plus": 15.0})
# -
# Customized variants of a synapse model can be created using `CopyModel()`, and can be used anywhere that a built-in model name can be used.
# + slideshow={"slide_type": "-"}
nest.CopyModel("stdp_synapse","layer1_stdp_synapse",{"Wmax": 90.0})
# + [markdown] slideshow={"slide_type": "-"}
# When connecting multiple neurons, connectivity rules can be defined. Besides simple set-ups like `one_to_one` and `all_to_all`, sparse methods like `fixed_indegree`, `fixed_outdegree`, `fixed_total_number` and `pairwise_bernoulli` are also available. Please check [[3]](https://nest-simulator.readthedocs.io/en/latest/tutorials/pynest_tutorial/part_2_populations_of_neurons.html) to learn more details about creating and connecting neuron populations.
# + slideshow={"slide_type": "-"}
epop1 = nest.Create("iaf_psc_delta", 10, params={"tau_m": 30.0})
epop2 = nest.Create("iaf_psc_delta", 10)
K = 5
conn_dict = {"rule": "fixed_indegree", "indegree": K}
syn_dict = {"model": "stdp_synapse", "alpha": 1.0}
nest.Connect(epop1, epop2, conn_dict, syn_dict)
# + [markdown] slideshow={"slide_type": "-"}
# Synaptic parameters can also be randomly distributed by assigning a dictionary to the parameter. This should contain the target distribution and its optional parameters, as listed below:
#
# | Distributions | Keys |
# |---------------|------------------|
# | `normal` | `mu`, `sigma` |
# | `lognormal` | `mu`, `sigma` |
# | `uniform` | `low`, `high` |
# | `uniform_int` | `low`, `high` |
# | `binomial` | `n`, `p` |
# | `exponential` | `lambda` |
# | `gamma` | `order`, `scale` |
# | `poisson` | `lambda` |
# + slideshow={"slide_type": "-"}
neuron = nest.Create("iaf_psc_alpha")
alpha_min = 0.1
alpha_max = 2.
w_min = 0.5
w_max = 5.
syn_dict = {"model": "stdp_synapse",
"alpha": {"distribution": "uniform", "low": alpha_min, "high": alpha_max},
"weight": {"distribution": "uniform", "low": w_min, "high": w_max},
"delay": 1.0}
nest.Connect(epop1, neuron, "all_to_all", syn_dict)
# + [markdown] slideshow={"slide_type": "-"}
# Synapse information can be retrieved from its origin, target and synapse model using `GetConnections()`:
# + slideshow={"slide_type": "-"}
nest.GetConnections(epop1, target=epop2, synapse_model="stdp_synapse")
# + [markdown] slideshow={"slide_type": "-"}
# We can then extract the data using `GetStatus()`. Specific information can be retrieved by providing a list of desired parameters.
# + slideshow={"slide_type": "-"}
conns = nest.GetConnections(epop1, synapse_model="stdp_synapse")
conn_vals = nest.GetStatus(conns, ["target","weight"])
# -
# ---
#
# ## Example: connecting two neurons
#
# In this example, we will create and connect two neurons, A and B. Neuron A is of type `iaf_psc_delta` and receives external current $I_e = 376.0 pA$. This current is sufficient to elicit a spike every $\approx$ 50ms. Neuron B is solely connected to neuron A and hence can only spike if the input from A is strong enough. Let's observe how B can be influenced by A.
#
# First, verify the activity of neuron A running the block of code below.
# +
# %matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
# reset kernel for new example
nest.ResetKernel()
neuron_A = nest.Create("iaf_psc_delta", 1, {"I_e": 376.0})
multimeter_A = nest.Create("multimeter", params={"withtime": True, "record_from":["V_m"]})
spikedetector_A = nest.Create("spike_detector", params={"withgid": True, "withtime": True})
nest.Connect(multimeter_A, neuron_A)
nest.Connect(neuron_A, spikedetector_A)
nest.Simulate(300.0)
# plot membrane potential and spiking activity
plt.rcParams['figure.dpi'] = 300
fig, ax = plt.subplots(2, 1, sharex=True, sharey=False)
multimeter_A_readout = nest.GetStatus(multimeter_A)[0]
V_A = multimeter_A_readout["events"]["V_m"]
t_A = multimeter_A_readout["events"]["times"]
spikedetector_A_readout = nest.GetStatus(spikedetector_A, keys="events")[0]
event_A = spikedetector_A_readout["senders"]
te_A = spikedetector_A_readout["times"]
ax[0].set_ylabel('V (mV)')
ax[0].plot(t_A, V_A, color="tab:blue", label="A")
ax[0].legend()
ax[1].set_xlabel('time (ms)')
ax[1].set_ylabel('Spike times')
ax[1].plot(te_A, event_A, ".")
ax[1].set_yticks([1])
ax[1].set_yticklabels(["A"])
plt.show()
# -
# Now let's create neuron B as `iaf_psc_delta` with no external current:
# +
neuron_B = nest.Create("iaf_psc_delta", 1, {"I_e": 0.0})
multimeter_B = nest.Create("multimeter", params={"withtime": True, "record_from":["V_m"]})
spikedetector_B = nest.Create("spike_detector", params={"withgid": True, "withtime": True})
nest.Connect(multimeter_B, neuron_B)
nest.Connect(neuron_B, spikedetector_B)
# -
# We'll use the function below to easily plot the activity of the two neurons at the same time:
def plot2neurons(multimeter_A, multimeter_B, spikedetector_A, spikedetector_B):
multimeter_A_readout = nest.GetStatus(multimeter_A)[0]
V_A = multimeter_A_readout["events"]["V_m"]
t_A = multimeter_A_readout["events"]["times"]
multimeter_B_readout = nest.GetStatus(multimeter_B)[0]
V_B = multimeter_B_readout["events"]["V_m"]
t_B = multimeter_B_readout["events"]["times"]
spikedetector_A_readout = nest.GetStatus(spikedetector_A, keys="events")[0]
event_A = spikedetector_A_readout["senders"]
te_A = spikedetector_A_readout["times"]
spikedetector_B_readout = nest.GetStatus(spikedetector_B, keys="events")[0]
event_B = spikedetector_B_readout["senders"]
te_B = spikedetector_B_readout["times"]
plt.rcParams['figure.dpi'] = 300
fig, ax = plt.subplots(2, 1, sharex=True, sharey=False)
ax[0].set_ylabel('V (mV)')
ax[0].plot(t_A, V_A, color="tab:blue", label="A")
ax[0].plot(t_B, V_B, color="tab:orange", label="B")
ax[0].legend()
ax[1].set_ylim(0,5)
ax[1].set_yticks([1,4])
ax[1].set_yticklabels(["A","B"])
ax[1].set_xlabel('time (ms)')
ax[1].set_ylabel('Spike times')
ax[1].plot(te_A, event_A, ".", color="tab:blue")
ax[1].plot(te_B, event_B, ".", color="tab:orange")
plt.show()
nest.Simulate(300.0)
plot2neurons(multimeter_A, multimeter_B, spikedetector_A, spikedetector_B)
# Neuron B is still inactive. We need to connect both neurons:
# +
nest.Connect(neuron_A, neuron_B, {"rule": "one_to_one"}, {"model": "static_synapse"})
nest.Simulate(300.0)
plot2neurons(multimeter_A, multimeter_B, spikedetector_A, spikedetector_B)
# -
# ### Suggest at least 3 alterations we can make in this example to elicit spiking activity from B.
#
# Run the code below and check if B spikes using the spike time raster (bottommost graph).
#
# Additionally, check what happens if the neuron model is different: for example, change `iaf_psc_delta` to `hh_psc_alpha`.
#
# Finally, connect B to A reciprocally and check their dynamics. How can you describe their behavior?
# +
nest.ResetKernel()
neuron_A = nest.Create("iaf_psc_delta", 1, {"I_e": 376.0})
multimeter_A = nest.Create("multimeter", params={"withtime": True, "record_from":["V_m"]})
spikedetector_A = nest.Create("spike_detector", params={"withgid": True, "withtime": True})
nest.Connect(multimeter_A, neuron_A)
nest.Connect(neuron_A, spikedetector_A)
neuron_B = nest.Create("iaf_psc_delta", 1, {"I_e": 0.0})
multimeter_B = nest.Create("multimeter", params={"withtime": True, "record_from":["V_m"]})
spikedetector_B = nest.Create("spike_detector", params={"withgid": True, "withtime": True})
nest.Connect(multimeter_B, neuron_B)
nest.Connect(neuron_B, spikedetector_B)
nest.Connect(neuron_A, neuron_B, {"rule": "one_to_one"}, {"model": "static_synapse", "weight": 1.})
nest.Simulate(300.0)
plot2neurons(multimeter_A, multimeter_B, spikedetector_A, spikedetector_B)
# -
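# As a hint for the exercise above, here is a minimal sketch of two possible alterations (assuming the setup in the cell above): a stronger A -> B synapse and a reciprocal B -> A connection. For `iaf_psc_delta` the synaptic weight is the size of the membrane-potential jump in mV, so the value below is only a rough starting point; tune it until B actually crosses threshold.
#
# ```python
# # stronger feed-forward synapse (or simply raise the weight in the cell above)
# nest.Connect(neuron_A, neuron_B, {"rule": "one_to_one"},
#              {"model": "static_synapse", "weight": 20.})
# # reciprocal connection B -> A
# nest.Connect(neuron_B, neuron_A, {"rule": "one_to_one"},
#              {"model": "static_synapse", "weight": 20.})
# nest.Simulate(300.0)
# plot2neurons(multimeter_A, multimeter_B, spikedetector_A, spikedetector_B)
# ```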
| Tutorial-3_Synapses-and-Networks/Synapses.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Chapter 1: Life Satisfaction
#
# This notebook contains the code for chapter 1 of the Hands-on Machine Learning with Scikit-Learn, Keras & Tensorflow book.
# +
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
# -
# ## Global configuration
# +
BASE_PATH = "../data/"
BETTER_LIFE_INDEX_DATA_FILE = BASE_PATH + "oecd_bli_2015.csv"
GDP_PER_CAPITA_DATA_FILE = BASE_PATH + "gdp_per_capita.csv"
RANDOM_SEED = 42
# -
np.random.seed(RANDOM_SEED)
# ## Load data
oecd_bli = pd.read_csv(BETTER_LIFE_INDEX_DATA_FILE, thousands=',')
gdp_per_capita = pd.read_csv(
GDP_PER_CAPITA_DATA_FILE,
thousands=',',
delimiter='\t',
encoding='latin1',
na_values="n/a",
)
# ## Prepare data
def prepare_life_satisfaction_data(oecd_bli, gdp_per_capita):
oecd_bli = oecd_bli[oecd_bli["INEQUALITY"]=="TOT"]
oecd_bli = oecd_bli.pivot(index="Country", columns="Indicator", values="Value")
gdp_per_capita.rename(columns={"2015": "GDP per capita"}, inplace=True)
gdp_per_capita.set_index("Country", inplace=True)
full_country_stats = pd.merge(left=oecd_bli, right=gdp_per_capita, left_index=True, right_index=True)
full_country_stats.sort_values(by="GDP per capita", inplace=True)
remove_indices = [0, 1, 6, 8, 33, 34, 35]
keep_indices = list(set(range(36)) - set(remove_indices))
return full_country_stats[["GDP per capita", 'Life satisfaction']].iloc[keep_indices]
life_satisfaction = prepare_life_satisfaction_data(oecd_bli, gdp_per_capita)
X = np.c_[life_satisfaction["GDP per capita"]]
y = np.c_[life_satisfaction["Life satisfaction"]]
# ## Visualize data
life_satisfaction.plot(kind='scatter', x="GDP per capita", y='Life satisfaction')
# ## Train <ins>linear regression</ins> model
lr_model = LinearRegression()
# %%time
lr_model = lr_model.fit(X, y)
# ## Predict <ins>linear regression</ins>
lr_model.predict([[22587]])
# ## Train <ins>k nearest neighbors</ins> model
knn_model = KNeighborsRegressor(n_neighbors=3)
# %%time
knn_model = knn_model.fit(X, y)
# ## Predict <ins>k nearest neighbors</ins>
knn_model.predict([[22587]])
# # Exercises
# 1. How would you define Machine Learning?
# **Solution**
#
# Machine learning is giving a computer the ability to learn without the need for it to be explicitly programmed.
# 2. Can you name four types of problems where it shines?
# **Solution**
#
# 1. Problems that require a lot of hand-tuning.
# 2. Complex problems that cannot be solved with traditional methods.
# 3. Fluctuating environments.
# 4. Insights about large amounts of data.
# 3. What is a labeled training set?
# **Solution**
#
# A dataset or part of a dataset that is used to train a supervised machine learning model. The dataset contains the value that the model needs to predict and a set of extra features used to predict this value.
# 4. What are the two most common supervised tasks?
# **Solution**
#
# 1. Classification.
# 2. Regression.
# 5. Can you name four common unsupervised tasks?
# **Solution**
#
# 1. Clustering.
# 2. Anomaly & novelty detection.
# 3. Visualization & dimension reduction.
# 4. Association rule learning.
# 6. What type of Machine Learning algorithm would you use to allow a robot to walk in various unknown terrains?
# **Solution**
#
# Reinforcement Learning.
# 7. What type of algorithm would you use to segment your customers into multiple groups?
# **Solution**
#
# Clustering.
# 8. Would you frame the problem of spam detection as a supervised learning problem or an unsupervised learning problem?
# **Solution**
#
# Supervised.
# 9. What is an online learning system?
# **Solution**
#
# An online learning system is a system where a machine learning model is continuously trained on new data.
# 10. What is out-of-core learning?
# **Solution**
#
# Out-of-core learning is the process of training a machine learning model in batches, usually because the training data is too large to fit in memory.
# 11. What type of learning algorithm relies on a similarity measure to make predictions?
# **Solution**
#
# An instance-based algorithm.
# 12. What is the difference between a model parameter and a learning algorithm’s hyperparameter?
# **Solution**
#
# A model parameter is learned from the data during training; a hyperparameter is set to a fixed value before training and controls the learning process, as in the example below.
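# For instance, with the two models trained earlier in this notebook (a sketch added for illustration):
#
# ```python
# # learned parameters: set by .fit()
# print(lr_model.coef_, lr_model.intercept_)
# # hyperparameter: chosen before training
# print(knn_model.get_params()["n_neighbors"])  # -> 3
# ```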
# 13. What do model-based learning algorithms search for? What is the most common strategy they use to succeed? How do they make predictions?
# **Solution**
#
# Model-based learning algorithms search for patterns in the data and correlations between various attributes of the data.
#
# The most common strategy model-based algorithms use to succeed is:
#
# 1. Study data.
# 2. Select model.
# 3. Train the model & find the parameters that minimize the cost function.
# 4. Apply model to new data.
#
# Predictions are made by feeding new input data into the trained model, which computes the output from that data and its learned parameters.
# 14. Can you name four of the main challenges in Machine Learning?
# **Solution**
#
# 1. Insufficient quantity of data.
# 2. Non-representative data.
# 3. Poor data quality.
# 4. Irrelevant features.
# 15. If your model performs great on the training data but generalizes poorly to new instances, what is happening? Can you name three possible solutions?
# **Solution**
#
# The model is overfitting. This can be addressed by:
#
# 1. Selecting a simpler model, reducing the number of attributes in the training data, or constraining (regularizing) the model (see the sketch below)
# 2. Collecting more training data
# 3. Reducing the noise in the training data
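# A minimal sketch of the "constrain the model" option applied to this notebook's data: swap the unregularized `LinearRegression` for `Ridge`, whose `alpha` hyperparameter shrinks the coefficients (the value 1.0 is only an illustration):
#
# ```python
# from sklearn.linear_model import Ridge
#
# ridge_model = Ridge(alpha=1.0)  # larger alpha = stronger regularization
# ridge_model.fit(X, y)           # X, y as prepared earlier in this notebook
# ridge_model.predict([[22587]])
# ```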
# 16. What is a test set and why would you want to use it?
# **Solution**
#
# A test set is a subset of the dataset that isn't used to train the model. The test set is used to evaluate the trained and optimized model on data it has never seen before.
# 17. What is the purpose of a validation set?
# **Solution**
#
# A validation set is a subset of the training set that is used to optimize the model (e.g. trying out different hyperparameters or learning algorithms) or to evaluate multiple models and select the best one.
# 18. What can go wrong if you tune hyperparameters using the test set?
# **Solution**
#
# If you tune the hyperparameters using the test set, the model will be optimized for that particular set, so its measured performance will be overly optimistic and it will not generalize well to new data.
# 19. What is repeated cross-validation and why would you prefer it to using a single validation set?
# **Solution**
#
# In (repeated) cross-validation the training data is split into $n$ folds; each candidate model is trained on all folds but one and evaluated on the held-out fold, so every fold serves as the validation set exactly once. The "repeated" variant repeats this whole procedure with different random splits.
#
# With a single validation set you risk tuning the model to that one particular split, so its measured performance may not carry over to new data. Averaging over many validation folds gives a more reliable estimate, which is why repeated cross-validation is preferred; see the sketch below.
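# A minimal sketch using the `X`, `y` and `lr_model` defined earlier in this notebook (the numbers of folds and repeats are arbitrary choices):
#
# ```python
# from sklearn.model_selection import RepeatedKFold, cross_val_score
#
# cv = RepeatedKFold(n_splits=5, n_repeats=3, random_state=RANDOM_SEED)
# scores = cross_val_score(lr_model, X, y, cv=cv, scoring="r2")
# print(scores.mean(), scores.std())
# ```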
| notebooks/chapter-1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Javascript library in a notebook: pig
#
# Tries [pig](https://github.com/schlosser/pig.js) in a notebook. Example from [index.html](https://github.com/schlosser/pig.js/blob/master/test/index.html).
from jyquickhelper import RenderJS
css = None
libs = ['http://www.xavierdupre.fr/js/pig/pig.min.js']
script = """
var imageData = [
{"filename": "nb_c3.thumb.png", "aspectRatio": 1},
{"filename": "nb_viz.thumb.png", "aspectRatio": 1},
{"filename": "nb_svg.thumb.png", "aspectRatio": 1},
{"filename": "nb_vis.thumb.png", "aspectRatio": 1},
{"filename": "nb_mermaid.thumb.png", "aspectRatio": 1},
];
var pig = new Pig(imageData, {
urlForSize: function(filename, size) {
return 'http://www.xavierdupre.fr/app/jyquickhelper/helpsphinx/_images/' + filename;
},
containerId: "__ID__",
}).enable();
"""
jr = RenderJS(script, css=css, libs=libs)
jr
print(jr._repr_html_())
# Not very successful.
| _doc/notebooks/nb_pig.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: keras2 (conda, py3)
# language: python
# name: keras2
# ---
# +
import os
import pandas as pd
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem, Draw
from rdkit.Chem.Draw import IPythonConsole
#import cPickle as pickle
import pickle
import matplotlib.pyplot as plt
# %matplotlib inline
#Grow as needed
import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.allow_growth=True
sess = tf.Session(config=config)
from keras.models import load_model
# -
# # Load model
# This model is quite bad, probably due to the limited size of the non-filtered, non-standardized dataset used during training (and my lack of patience during training)
name = "models/enum2enum_val_loss_0.1885"
# +
#model = load_model("%s_full.h5"%name)
# +
#For some versions of Keras it was necessary to Preload the module
#from keras.layers import CuDNNLSTM as LSTM
# -
mol2lat_model = load_model("%s_mol2lat.h5"%name)
#lat2stat_model = load_model("%s_lat2stat.h5"%name)
#sample_model = load_model("%s_sample_model.h5"%name)
# +
import h5py
import numpy as np
import ast
h5f = h5py.File('data/chembl_mols_as_binstrings.h5', 'r')
print(list(h5f.items()))
info = ast.literal_eval(h5f['info'].value.decode("utf-8"))
print(info.keys())
print(len(info['charset']))
print(info['maxlen'])
charset = info['charset']
maxlen = info['maxlen']
mols = h5f['mols'][0:95000]
# -
mols_test = h5f['mols'][95000:]
from molvecgen import HetSmilesGenerator
from molvecgen import SmilesVectorizer
smilesvec1 = SmilesVectorizer(canonical=True, augment=False, maxlength=maxlen, charset=charset, binary=True)
#smilesvec2 = SmilesVectorizer(canonical=False, augment=True, maxlength=maxlen, charset=charset, binary=True, leftpad=False)
test_smi_i = smilesvec1.transform(mols_test)
latent_vectors = mol2lat_model.predict(test_smi_i)#, batch_size=24)
#Variation in the latent space of first 10 molecules
_ = plt.plot(latent_vectors[0:10].T)
plt.plot(latent_vectors.std(axis=1))
lat2stat_model = load_model("%s_lat2stat.h5"%name)
states_init = lat2stat_model.predict(latent_vectors[0:1])
_ = plt.plot(states_init[0].T)
_ = plt.plot(states_init[1].T)
#_ = plt.plot(states_init[2].T)
#_ = plt.plot(states_init[3].T)
sample_model = load_model("%s_sample_model.h5"%name)
[l.name for l in sample_model.layers]
def latent_to_smiles(latent):
#decode states and set Reset the LSTM cells with them
states = lat2stat_model.predict(latent)
sample_model.get_layer("LSTM1_decoder").reset_states(states=[states[0],states[1]])
sample_model.get_layer("LSTM2_decoder").reset_states(states=[states[2],states[3]])
#Prepare the input char
startidx = smilesvec1._char_to_int[smilesvec1.startchar]
samplevec = np.zeros((1,1,smilesvec1.dims[-1]))
samplevec[0,0,startidx] = 1
smiles = ""
#Loop and predict next char
for i in range(1000):
o = sample_model.predict(samplevec)
sampleidx = np.argmax(o)
samplechar = smilesvec1._int_to_char[sampleidx]
if samplechar != smilesvec1.endchar:
smiles = smiles + smilesvec1._int_to_char[sampleidx]
samplevec = np.zeros((1,1,smilesvec1.dims[-1]))
samplevec[0,0,sampleidx] = 1
else:
break
return smiles
i=9 #Select Molecule index
print(Chem.MolToSmiles(Chem.Mol(mols_test[i])))
reconstructed = latent_to_smiles(latent_vectors[i:i+1])
print(reconstructed)
Chem.Mol(mols_test[i])
Chem.MolFromSmiles(reconstructed)
# +
# How many well formed
x_latent = mol2lat_model.predict(test_smi_i[:1000])
wrong = 0
for i in range(100):
smiles = latent_to_smiles(x_latent[i:i+1])
mol = Chem.MolFromSmiles(smiles)
if mol:
pass
else:
print(smiles)
wrong = wrong + 1
print("%0.2F percent wrongly formatted smiles"%(wrong/float(1000)*100))
# +
#Interpolation
#Interpolation test in latent_space
i = 1
j= 10
latent1 = x_latent[j:j+1]
latent0 = x_latent[i:i+1]
mols1 = []
ratios = np.linspace(0,1,25)
for r in ratios:
#print r
rlatent = (1.0-r)*latent0 + r*latent1
smiles = latent_to_smiles(rlatent)
mol = Chem.MolFromSmiles(smiles)
if mol:
mols1.append(mol)
else:
print(smiles)
Draw.MolsToGridImage(mols1, molsPerRow=5)
# -
#Sample around the latent vector
i = 500
latent = x_latent[i:i+1]
scale = 0.7
mols = []
for i in range(20):
latent_r = latent + scale*(np.random.randn(latent.shape[1])) #TODO, try with
smiles = latent_to_smiles(latent_r)
mol = Chem.MolFromSmiles(smiles)
if mol:
mols.append(mol)
else:
print(smiles)
Draw.MolsToGridImage(mols, molsPerRow=5)
# +
#Sampling with a temperature rescaling of the probability output before multinomial sampling
def latent_to_smiles(latent, temp=0.0):
#decode states and set Reset the LSTM cells with them
states = lat2stat_model.predict(latent)
sample_model.get_layer("LSTM1_decoder").reset_states(states=[states[0],states[1]])
sample_model.get_layer("LSTM2_decoder").reset_states(states=[states[2],states[3]])
#Prepare the input char
startidx = smilesvec1._char_to_int[smilesvec1.startchar]
samplevec = np.zeros((1,1,smilesvec1.dims[-1]))
samplevec[0,0,startidx] = 1
smiles = ""
#Loop and predict next char
for i in range(100):
o = sample_model.predict(samplevec)
#Rescale o according to temperature
if temp > 0:
nextCharProbs = np.log(o) / temp
nextCharProbs = np.exp(nextCharProbs)
nextCharProbs = nextCharProbs / nextCharProbs.sum() - 1e-8 # Re-normalize for float64 to make exactly 1.0.
#print nextCharProbs.sum()
sampleidx = np.random.multinomial(1, nextCharProbs.squeeze(), 1).argmax()
else:
sampleidx = np.argmax(o)
samplechar = smilesvec1._int_to_char[sampleidx]
if samplechar != smilesvec1.endchar:
smiles = smiles + smilesvec1._int_to_char[sampleidx]
samplevec = np.zeros((1,1,smilesvec1.dims[-1]))
samplevec[0,0,sampleidx] = 1
else:
break
return smiles
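# -
# The temperature rescaling above is equivalent to sampling the next character from a re-normalized distribution (a note added for clarity; $p_i$ is the model's output probability for character $i$):
#
# $$ p_i^{(T)} = \frac{p_i^{1/T}}{\sum_j p_j^{1/T}} $$
#
# so small $T$ approaches greedy (argmax) decoding, $T = 1$ recovers the model's original distribution, and larger $T$ flattens it and increases diversity.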
# +
# Sample with temperature scaling
t = 0.2
i = 0
lat = x_latent[i:i+1]
mols = []
labels = []
for i in range(15):
smiles = latent_to_smiles(lat, temp=t)
mol = Chem.MolFromSmiles(smiles)
if mol:
mols.append(mol)
labels.append(smiles)
else:
print("Malformed: %s"%smiles)
Draw.MolsToGridImage(mols, molsPerRow=5, legends=labels)
# +
#Testing fraction malformed with temperature sampling
mols = []
labels = []
mol_i = 1
test_mol = mols_test[mol_i]
lat = x_latent[mol_i:mol_i+1]
malformed = 0
t = 0.2
for i in range(100):
smiles = latent_to_smiles(lat, temp=t)
mol = Chem.MolFromSmiles(smiles)
if mol:
mols.append(mol)
labels.append(smiles)
else:
malformed = malformed +1
#print "Malformed: %s"%smiles
print(malformed)
| HeteroEncoder_analyse.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# db = int(input('Database ID (2 for 4 chamber and 17 for short axis): '))
# basedir = input('Base directory (e.g. D:/ML_data/PAH): ')
# scale = int(input('Scale (16, 8, 4, or -1): '))
# mask_id = int(input('Mask ID (1-5): '))
# level = int(input('Preprocessing level (1-4): '))
# -
import os
import h5py
import numpy as np
import pandas as pd
# ### Step 0: Converting .mat file to .npy (ndarray) file
def mat2npy(basedir, db):
fname = 'PAH1DB%s.mat' % db
print('Converting %s to ndarray' % fname)
data_path = os.path.join(basedir, fname)
f = h5py.File(data_path, 'r')
data = f['data'][()].transpose()
out_path = os.path.join(basedir, 'PAH1DB%s.npy' % db)
np.save(out_path, data)
# data_ = torch.from_numpy(data).to_sparse()
# out_path = os.path.join(basedir, 'RegPAH1DB%s.hdf5' % db)
# f = h5py.File(out_path, "w")
# dest = f.create_dataset()
labels = f['labels'][()].reshape(-1)
# max_dist = f['maxDists'][()].reshape(-1)
# df = pd.DataFrame(data={'Label': labels, 'Max dist': max_dist})
df = pd.DataFrame(data={'Label': labels})
csv_path = os.path.join(basedir, 'info_DB%s.csv' % db)
df.to_csv(csv_path)
print('Completed!')
# +
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
interact_manual(mat2npy, db=widgets.Dropdown(
options=[('Four chamber', 2), ('Short axis',17)],
value=2,
description='Database',
disabled=False,
), basedir=input('Base directory (e.g. D:/ML_data/PAH): '))
# -
# ## Step 1: Registration
# +
def load_data(basedir, db):
data_path = os.path.join(basedir, 'PAH1DB%s.npy' % db)
data = np.load(data_path)
return data
def load_landmark(basedir, db):
reg_fpath = os.path.join(basedir, 'regMRI4ChFull.xlsx')
if db == 2:
sheet_name = '4ch'
# sheet_name = 0
col_names = ['ID', 'Group', 'mitral ann X', 'mitral ann Y',
'LVEDV apex X', 'LVEDV apex Y', 'Spinal X', 'Spinal Y']
elif db == 17:
sheet_name = 'SA'
col_names = ['ID', 'Group', 'inf insertion point X', 'insertion point Y',
'sup insertion point X', 'sup insertion point Y', 'RV inf X', 'RV inf Y']
reg_df = pd.read_excel(reg_fpath, sheet_name=sheet_name, usecols=col_names)
return reg_df
# +
import sys
sys.path.append('..')
from kale.prepdata.prep_cmr import regMRI
def proc_reg(basedir, db, sample_id=1007):
print('Performing registration...')
data = load_data(basedir, db)
reg_df = load_landmark(basedir, db)
reg_id = np.where(reg_df['ID'] == sample_id)[0][0]
data_reg, max_dist = regMRI(data, reg_df, reg_id)
out_path = os.path.join(basedir, 'RegPAH1DB%s.npy' % db)
np.save(out_path, data_reg)
info_file = os.path.join(basedir, 'info_DB%s.csv' % db)
if os.path.exists(info_file):
info_df = pd.read_csv(info_file, index_col=0)
else:
info_df = pd.DataFrame(data={'Label': reg_df['Group'].values})
info_df['ID'] = reg_df['ID']
info_df['Max Dist'] = max_dist
info_df.to_csv(info_file, columns=['ID', 'Label', 'Max Dist'], index=False)
print('Registration Completed')
# +
from ipywidgets import interact_manual
import ipywidgets as widgets
interact_manual(proc_reg, db=widgets.Dropdown(
options=[('Four chamber (2)', 2), ('Short axis (17)',17)],
value=2,
description='Database',
disabled=False,),
basedir=input('Base directory (e.g. D:/ML_data/PAH): '),
sample_id=int(input('Target sample ID used for registration (1007): '))
)
# -
# ## Step 2: Rescaling
# +
import sys
sys.path.append('..')
from kale.prepdata.prep_cmr import rescale_cmr
def proc_rescale(basedir, db, scale=-1):
data_path = os.path.join(basedir, 'RegPAH1DB%s.npy' % db)
data = np.load(data_path)
out_dir = os.path.join(basedir, 'DB%s' % db)
print('Rescaling data ...')
if not os.path.exists(out_dir):
os.mkdir(out_dir)
if scale == -1:
for scale_ in [16, 8, 4]:
print('Scale: 1/%s' % scale_)
data_ = rescale_cmr(data, scale=scale_)
out_path = os.path.join(out_dir, 'NoPrs%sDB%s.npy' % (scale_, db))
np.save(out_path, data_)
else:
print('Scale: 1/%s' % scale)
data_ = rescale_cmr(data, scale=scale)
out_path = os.path.join(out_dir, 'NoPrs%sDB%s.npy' % (scale, db))
np.save(out_path, data_)
print('Completed!')
# +
from ipywidgets import interact_manual
import ipywidgets as widgets
interact_manual(proc_rescale, db=widgets.Dropdown(
options=[('Four chamber (2)', 2), ('Short axis (17)',17)],
value=2,
description='Database',
disabled=False,),
basedir=input('Base directory (e.g. D:/ML_data/PAH): '),
scale=widgets.Dropdown(
options=[('16', 16), ('8', 8), ('4', 4), ('-1 (All of above)', -1)],
value=4,
description='Scale',
disabled=False,)
)
# -
# ## Step 3: Preprocessing
# +
import sys
sys.path.append('..')
from kale.prepdata.prep_cmr import cmr_proc
# if scale == -1:
# for scale_ in [16, 8, 4]:
# cmr_proc(basedir, db, scale_, mask_id, level, save_data=True)
# else:
# cmr_proc(basedir, db, scale, mask_id, level, save_data=True)
interact_manual(cmr_proc, db=widgets.Dropdown(
options=[('Four chamber (2)', 2), ('Short axis (17)',17)],
value=2,
description='Database',
disabled=False,),
basedir=input('Base directory (e.g. D:/ML_data/PAH): '),
scale=widgets.Dropdown(
options=[('16', 16), ('8', 8), ('4', 4), ('-1 (All of above)', -1)],
value=4,
description='Scale',
disabled=False,),
mask_id=widgets.Dropdown(
options=[('1', 1), ('2', 2), ('3', 3), ('4', 4), ('5', 5), ('6', 6), ('7', 7), ('8', 8)],
value=5,
description='Mask ID:',
disabled=False,),
level=widgets.Dropdown(
options=[('1', 1), ('2', 2), ('3', 3), ('4', 4)],
value=1,
description='Preprocessing level:',
disabled=False,),
)
# -
# # Classification
# +
import sys
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectFromModel
from sklearn.svm import SVC, LinearSVC
from sklearn.linear_model import LogisticRegression, RidgeClassifier, Lasso
from sklearn.model_selection import StratifiedShuffleSplit, GridSearchCV
default_grid = [
{'select__estimator__C': np.logspace(-2, 2, 5)},
{'clf__C': np.logspace(-3, 2, 6), 'clf__kernel': ['linear']},
{'clf__C': np.logspace(-3, 2, 6), 'clf__gamma': np.logspace(-4, -1, 3),
'clf__kernel': ['rbf']},
]
# clf = Pipeline([
# ('feature_selection', SelectFromModel(LinearSVC(penalty="l1"))),
# ('classification', RandomForestClassifier())
# ])
class _Classifier(BaseEstimator, TransformerMixin):
def __init__(self, clf='SVC', param_grid=default_grid, cv=None, n_split=10, test_size=0.2, n_jobs=1):
if clf == 'SVC':
# _clf = Pipeline([('select', SelectFromModel(LinearSVC(penalty='l1', loss='hinge'))),
_clf = Pipeline([('select', SelectFromModel(estimator=LogisticRegression(penalty='l1', solver='liblinear'))),
('clf', SVC(max_iter=10000, probability=True))])
elif clf == 'LR':
_clf = Pipeline([('select', SelectFromModel(Lasso())),
('clf', LogisticRegression(max_iter=10000))])
elif clf == 'Ridge':
_clf = Pipeline([('select', SelectFromModel(Lasso())),
('clf', RidgeClassifier(max_iter=10000))])
else:
print('Invalid Classifier')
sys.exit()
print(param_grid)
if cv is None:
cv = StratifiedShuffleSplit(n_splits=n_split, test_size=test_size,
train_size=1 - test_size, random_state=144)
self.search = GridSearchCV(_clf, param_grid, n_jobs=n_jobs, cv=cv, iid=False)
def fit(self, X, y):
self.search.fit(X, y)
self.clf = self.search.best_estimator_
self.clf.fit(X, y)
return self
def predict(self, X):
return self.clf.predict(X)
def predict_proba(self, X):
return self.clf.predict_proba(X)
# +
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, roc_auc_score
from tensorly.base import fold, unfold
def label_binarizer(y):
y_ = np.zeros(y.shape)
y_[np.where(y != 0)] = 1
return y_
def evaluate_(X, y, kfold=10, random_state=144, return_auc=True):
skf = StratifiedKFold(n_splits=kfold, shuffle=True, random_state=random_state)  # shuffle so that random_state takes effect
res = {'fold_accs': [], 'fold_aucs': [], 'acc': None,'auc': None}
y_pred = np.zeros(y.shape)
y_dec = np.zeros(y.shape)
for train, test in skf.split(X, y):
clf = _Classifier()
clf.fit(X[train], y[train])
y_pred[test] = clf.predict(X[test])
res['fold_accs'].append(accuracy_score(y[test], y_pred[test]))
if return_auc:
y_dec[test] = clf.predict_proba(X[test])[:, 1]
res['fold_aucs'].append(roc_auc_score(y[test], y_dec[test]))
res['acc'] = accuracy_score(y, y_pred)
if return_auc:
res['auc'] = roc_auc_score(y, y_dec)
return res
# +
import sys
# sys.path.append('...')
from kale.embed.mpca import MPCA
def main_(basedir, db, scale, mask_id, level):
print('Main Experiments for Scale: 1/%s, Mask ID: %s, Processing level: %s' % (scale, mask_id, level))
data_path = '%s/DB%s/PrepData' % (basedir, db)
fname = 'PrS%sM%sL%sDB%s.npy' % (scale, mask_id, level, db)
X = np.load(os.path.join(data_path, fname))
info_df = pd.read_csv(os.path.join(basedir, 'info_DB%s.csv' % db))
y = info_df['Label'].values
y_ = label_binarizer(y)
# Peform MPCA dimension reduction
mpca = MPCA()
mpca.fit(X)
Xmpc = mpca.transform(X)
X_ = unfold(Xmpc, mode=-1).real
# Evaluating
res = evaluate_(X_, y_)
print('Accuracy:', res['acc'], 'AUC:', res['auc'])
# +
from ipywidgets import interact_manual
import ipywidgets as widgets
interact_manual(main_, db=widgets.Dropdown(
options=[('Four chamber (2)', 2), ('Short axis (17)',17)],
value=2,
description='Database',
disabled=False,),
basedir=input('Base directory (e.g. D:/ML_data/PAH): '),
scale=widgets.Dropdown(
options=[('16', 16), ('8', 8), ('4', 4), ('-1 (All of above)', -1)],
value=4,
description='Scale',
disabled=False,),
mask_id=widgets.Dropdown(
options=[('1', 1), ('2', 2), ('3', 3), ('4', 4), ('5', 5), ('6', 6), ('7', 7), ('8', 8)],
value=5,
description='Mask ID:',
disabled=False,),
level=widgets.Dropdown(
options=[('1', 1), ('2', 2), ('3', 3), ('4', 4)],
value=1,
description='Preprocessing level:',
disabled=False,),
)
# -
# ## Landmark Visualisation
# +
import tkinter
from matplotlib.backends.backend_tkagg import (
FigureCanvasTkAgg, NavigationToolbar2Tk)
# Implement the default Matplotlib key bindings.
from matplotlib.backend_bases import key_press_handler
from matplotlib.figure import Figure
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
# -
def sub_img_mark(basedir, db, sub, slice_):
data = load_data(basedir, db)
reg_df = load_landmark(basedir, db)
sub_idx = np.where(reg_df['ID'] == sub)[0][0]
sub_img = data[..., slice_, sub_idx]
land_marks = reg_df.iloc[sub_idx, 2:]
return sub_img, land_marks
# ### Display landmarks
# +
def disp_mark(basedir, db, sub, slice_):
sub_img, land_marks = sub_img_mark(basedir, db, sub, slice_)
marks = land_marks.values.reshape((-1, 2))
mark_name = land_marks.index.values.reshape((-1, 2))
n_marks = marks.shape[0]
root = tkinter.Tk()
root.wm_title("Subject %s Slice %s" % (sub, slice_))
root.image = sub_img
fig = plt.figure(figsize=(8, 5))
ax = fig.add_subplot(111)
im = ax.imshow(root.image)
for i in range(n_marks):
ix = marks[i, 0]
iy = marks[i, 1]
print('%s: %s, %s: %s' % (mark_name[i, 0], ix, mark_name[i, 1], iy))
ax.plot(ix,iy, marker='o', markersize=8, markerfacecolor=(1, 1, 1, 0.1),markeredgewidth=1.5, markeredgecolor='r')
plt.show()
# canvas = FigureCanvasTkAgg(fig, master=root)
# canvas.draw()
# canvas.get_tk_widget().pack(side=tkinter.TOP, fill=tkinter.BOTH, expand=1)
# toolbar = NavigationToolbar2Tk(canvas, root)
# toolbar.update()
# canvas.get_tk_widget().pack(side=tkinter.TOP, fill=tkinter.BOTH, expand=1)
# +
from ipywidgets import interact_manual
import ipywidgets as widgets
interact_manual(disp_mark, db=widgets.Dropdown(
options=[('Four chamber (2)', 2), ('Short axis (17)',17)],
value=2,
description='Database',
disabled=False,),
basedir=input('Base directory (e.g. D:/ML_data/PAH): '),
sub=int(input('Subject ID (e.g. 1005):')),
slice_=int(input('Slice:')),
)
# -
# ### Interactive Marking (Get coords manually)
# +
def onclick(event):
global ix, iy
ix, iy = event.xdata, event.ydata
# print('%s click: button=%d, x=%d, y=%d, xdata=%f, ydata=%f' %
# ('double' if event.dblclick else 'single', event.button,
# event.x, event.y, event.xdata, event.ydata))
print('%s click: button=%d, x=%f, y=%f' %
('double' if event.dblclick else 'single',
event.button, event.xdata, event.ydata))
# ax = fig.add_subplot(111)
ax.plot(ix,iy, marker='o', markersize=8, markerfacecolor=(1, 1, 1, 0.1),markeredgewidth=1.5, markeredgecolor='r')
canvas.draw()
global coords
coords.append((ix, iy))
# if len(coords) == 2:
# fig.canvas.mpl_disconnect(cid)
return coords
def _quit():
root.quit() # stops mainloop
root.destroy() # this is necessary on Windows to prevent
# Fatal Python Error: PyEval_RestoreThread: NULL tstate
def hand_mark(basedir, db, sub, slice_):
sub_img, land_marks = sub_img_mark(basedir, db, sub, slice_)
global root, fig, im, ax, canvas, coords
root = tkinter.Tk()
root.wm_title("Subject %s Slice %s" % (sub, slice_))
# fig = Figure(figsize=(5, 4), dpi=100)
# t = np.arange(0, 3, .01)
# fig.add_subplot(111).plot(t, 2 * np.sin(2 * np.pi * t))
# root.image = plt.imread('index.png')
# root.image = plt.imshow(sub_img, cmap='gray', vmin=0, vmax=255)
root.image = sub_img
fig = plt.figure(figsize=(8, 5))
ax = fig.add_subplot(111)
im = ax.imshow(root.image)
canvas = FigureCanvasTkAgg(fig, master=root) # A tk.DrawingArea.
canvas.draw()
canvas.get_tk_widget().pack(side=tkinter.TOP, fill=tkinter.BOTH, expand=1)
toolbar = NavigationToolbar2Tk(canvas, root)
toolbar.update()
canvas.get_tk_widget().pack(side=tkinter.TOP, fill=tkinter.BOTH, expand=1)
coords = []
cid = fig.canvas.mpl_connect('button_press_event', onclick)
button = tkinter.Button(master=root, text="Quit", command=_quit)
button.pack(side=tkinter.TOP)
tkinter.mainloop()
# +
from ipywidgets import interact_manual
import ipywidgets as widgets
interact_manual(hand_mark, db=widgets.Dropdown(
options=[('Four chamber (2)', 2), ('Short axis (17)',17)],
value=2,
description='Database',
disabled=False,),
basedir=input('Base directory (e.g. D:/ML_data/PAH): '),
sub=int(input('Subject ID (e.g. 1005):')),
slice_=int(input('Slice:')),
)
# -
# ### Update coords and save to file
def update_coords(basedir, db, sub, mark_names, mark_values):
mark_names = mark_names.split(',')
mark_values = mark_values.split(',')
n_marks = len(mark_names)
if n_marks == len(mark_values):
reg_df = load_landmark(basedir, db)
sub_idx = np.where(reg_df['ID'] == sub)[0][0]
for i in range(len(mark_names)):
reg_df.loc[sub_idx, mark_names[i]] = int(mark_values[i])
out_fname = 'new_regDB%s.csv' % db
reg_df.to_csv(os.path.join(basedir, out_fname))
print('Completed, new landmark file %s saved to %s' % (out_fname, basedir))
else:
print('Number of landmark names and values are not consistent!')
sys.exit()
# +
from ipywidgets import interact_manual
import ipywidgets as widgets
interact_manual(update_coords, db=widgets.Dropdown(
options=[('Four chamber (2)', 2), ('Short axis (17)',17)],
value=2,
description='Database',
disabled=False,),
basedir=input('Base directory (e.g. D:/ML_data/PAH): '),
sub=int(input('Subject ID (e.g. 1005):')),
mark_names=input('Landmark Names (separate by comma, e.g. Spinal X,Spinal Y): '),
mark_values=input('New landmark values (separate by comma): ')
)
| examples/cmri_mpca/CMR_PAH.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
from __future__ import print_function
import nilmtk
import matplotlib.pyplot as plt
# %matplotlib inline
# First, load the UKDALE dataset into NILMTK. Here we are loading the HDF5 version of UKDALE which you can download by following [the instructions on the UKDALE website](http://www.doc.ic.ac.uk/~dk3810/data/index.html#download_hdf).
dataset = nilmtk.DataSet('/data/ukdale.h5')
# Next, to speed up processing, we'll set a "window of interest" so NILMTK will only consider one month of data.
dataset.set_window("2014-06-01", "2014-07-01")
# Get the ElecMeter associated with the Fridge in House 1:
BUILDING = 1
elec = dataset.buildings[BUILDING].elec
fridge = elec['fridge']
# Now load the activations:
activations = fridge.get_activations()
print("Number of activations =", len(activations))
activations[1].plot()
plt.show()
| notebooks/extract_activations.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import nibabel as nb
# %pylab inline
from os.path import join
import seaborn as sn
import nighres
import nilearn as nl
from nilearn import plotting
import math
import matplotlib.pyplot as plt
import os
from os.path import join
from glob import glob
import pathos.multiprocessing as multiprocessing
from functools import partial
import subprocess
import pandas as pd
from sklearn.linear_model import LinearRegression
# +
def get_sub_data(in_dir, sub_id):
"""
Loads an individual subject's data for all modalities
- in_dir {str} : directory containing the subject's co-registered parameter maps
- sub_id {str} : subject identifier used in the file names
"""
img1 = nb.load(join(in_dir,'{}_FA_reg.nii.gz'.format(sub_id)))
img2 = nb.load(join(in_dir,'{}_MD_reg.nii.gz'.format(sub_id)))
img3 = nb.load(join(in_dir,'{}_MTsat.nii.gz'.format(sub_id)))
img4 = nb.load(join(in_dir,'{}_PDsat.nii.gz'.format(sub_id)))
img5 = nb.load(join(in_dir,'{}_R1.nii.gz'.format(sub_id)))
img6 = nb.load(join(in_dir,'{}_R2s_OLS.nii.gz'.format(sub_id)))
d1 = img1.get_data()
d2 = img2.get_data()
d3 = img3.get_data()
d4 = img4.get_data()
d5 = img5.get_data()
d6 = img6.get_data()
# d = [d1,d2,d3,d4,d5,d6]
# m = [mk>0 for mk in d]
# mask = np.ones_like(m[0])
# for iii in m:
# mask = mask*iii
d = np.stack((d1,d2,d3,d4,d5,d6),axis=3)
mask = np.prod(d>0,axis=3).astype(bool)
return {'data':d,'mask':mask,'img':img1}
# -
# # Creating White Matter Segmentations
# +
out_dir = '/data/neuralabc/carfra/QuantMetComp/source/masks_created/'
in_dir = '/data/neuralabc/source/MPI_CBS/MPM_DTI/source/'
spm_dir = '/data/neuralabc/source/MPI_CBS/MPM_DTI/processing/segmentations_MTSat_SPM/'
mgdm_dir = '/data/neuralabc/carfra/QuantMetComp/source/masks_created/'
all_dirs = glob(in_dir+'*')
sub_ids = [os.path.basename(x) for x in all_dirs]
# -
def w_gMatterSeg(spm_dir,mgdm_dir,out_dir,sub_id):
out_dir = join(out_dir,sub_id)
spm_dir = join(spm_dir,sub_id)
mgdm_dir = join(mgdm_dir,sub_id,'MGDM')
#wm
mgdm = nb.load(join(mgdm_dir,sub_id+'_mgdm-lbls.nii.gz'))
spm = nb.load(join(spm_dir,sub_id+'_WM.nii.gz'))
mgdmData = mgdm.get_data()
spmData = spm.get_data()
mask1 = (np.logical_or((mgdmData==47),(mgdmData==48))).astype(float)
mask2 = (spmData>0.5).astype(float)
mask = mask1[:,:,:,0]*mask2
wm = nb.Nifti1Image(mask,affine=spm.affine,header=spm.header)
wm.to_filename(join(out_dir,'WM.nii.gz'))
#gm and subcortical
spm = nb.load(join(spm_dir,sub_id+'_GM.nii.gz'))
spmData = spm.get_data()
mask1 = ((mgdmData==26)|(mgdmData==27)|(mgdmData==36)|(mgdmData==37)|(mgdmData==32)|(mgdmData==33)|
(mgdmData==40)|(mgdmData==41)|(mgdmData==38)|(mgdmData==39)).astype(float)
mask2 = ((mgdmData==36)|(mgdmData==37)|(mgdmData==32)|(mgdmData==33)|
(mgdmData==40)|(mgdmData==41)|(mgdmData==38)|(mgdmData==39)).astype(float)
mask3 = (spmData>0.95).astype(float)
mask = mask1[:,:,:,0]*mask3
gm = nb.Nifti1Image(mask,affine=spm.affine,header=spm.header)
gm.to_filename(join(out_dir,'GM.nii.gz'))
mask = mask2[:,:,:,0]*mask3
s_gm = nb.Nifti1Image(mask,affine=spm.affine,header=spm.header)
s_gm.to_filename(join(out_dir,'subcortex.nii.gz'))
subprocess.call(["fslmaths", ##filtering with gaussian kernael to then remove random spaces and dots
join(out_dir,'subcortex.nii.gz'),
"-fmean",
join(out_dir,'subcortex.nii.gz')])
s_gm_data = nb.load(join(out_dir,'subcortex.nii.gz')).get_data()
s_gm_data[s_gm_data>0.6] = 1
s_gm_data[s_gm_data<=0.6] = 0
s_gm = nb.Nifti1Image(s_gm_data,affine=spm.affine,header=spm.header)
s_gm.to_filename(join(out_dir,'subcortex.nii.gz'))
subprocess.call(["fslmaths",
join(out_dir,'subcortex.nii.gz'),
"-fillh",
join(out_dir,'subcortex.nii.gz')])
w_gMatterSeg(spm_dir,mgdm_dir,out_dir,sub_ids[0])
# +
import time
now = time.time()
for iiii in range(20):
pool = multiprocessing.ProcessingPool(nodes=5)
sub_ids_part = sub_ids[5*(iiii):5*(iiii+1)]
extr = partial(w_gMatterSeg,spm_dir,mgdm_dir,out_dir)
pool.map(extr,sub_ids_part)
pool.close()
#Needed to completely destroy the pool so that pathos doesn't reuse
pool.clear()
extr = partial(w_gMatterSeg,spm_dir,mgdm_dir,out_dir)
pool.map(extr,[sub_ids[100]])
pool.close()
#Needed to completely destroy the pool so that pathos doesn't reuse
pool.clear()
then = time.time()
print(then-now)
# -
# # Multiple regression
# +
reg_dir = '/data/neuralabc/carfra/QuantMetComp/processing/MPM/MPM_correlations/GM_vs_WM/'
data_WM = pd.read_csv(join(reg_dir,'WM.csv'), index_col=0)
data_GM = pd.read_csv(join(reg_dir,'GM.csv'), index_col=0)
#reg_dir = '/data/neuralabc/carfra/QuantMetComp/processing/MPM/MPM_correlations/Cortical_vs_subcortical/'
#data_scort = pd.read_csv(join(reg_dir,'subcortical_GM.csv'), index_col=0)
#data_cort = pd.read_csv(join(reg_dir,'cortical_sheath.csv'), index_col=0)
# +
name = ['FA','MD','MTsat','PDsat','R1','$R2^*$']
from scipy.stats import zscore
from scipy import stats
data_WM = data_WM.apply(zscore)
data_GM = data_GM.apply(zscore)
df_WM = data_WM[name[2:6]]
df_GM = data_GM[name[2:6]]
X_WM = df_WM.values.reshape(-1, 4)
X_GM = df_GM.values.reshape(-1, 4)
# +
reg_WM_FA = LinearRegression() # create object
reg_WM_MD = LinearRegression()
reg_GM_FA = LinearRegression()
reg_GM_MD = LinearRegression()
reg_WM_FA.fit(X_WM, data_WM[name[0]].values.reshape(-1, 1)) # perform linear regression
reg_WM_MD.fit(X_WM, data_WM[name[1]].values.reshape(-1, 1))
reg_GM_FA.fit(X_GM, data_GM[name[0]].values.reshape(-1, 1))
reg_GM_MD.fit(X_GM, data_GM[name[1]].values.reshape(-1, 1))
# -
#coefficients
print(reg_WM_FA.coef_)
print(reg_WM_MD.coef_)
print(reg_GM_FA.coef_)
print(reg_GM_MD.coef_)
print(reg_WM_FA.intercept_)
print(reg_WM_MD.intercept_)
print(reg_GM_FA.intercept_)
print(reg_GM_MD.intercept_)
#R squares
print(reg_WM_FA.score(X_WM, data_WM[name[0]].values.reshape(-1, 1)))
print(reg_WM_MD.score(X_WM, data_WM[name[1]].values.reshape(-1, 1)))
print(reg_GM_FA.score(X_GM, data_GM[name[0]].values.reshape(-1, 1)))
print(reg_GM_MD.score(X_GM, data_GM[name[1]].values.reshape(-1, 1)))
data_WM
# +
out_d = '/data/neuralabc/carfra/QuantMetComp/processing/MPM/WM'
in_path = '/data/neuralabc/source/MPI_CBS/MPM_DTI/source/'
mask_d = '/data/neuralabc/carfra/QuantMetComp/source/masks_created/'
#data_dir =
sub_id = sub_ids[0]
out_dir = join(out_d,sub_id)
in_dir = join(in_path,sub_id)
if not os.path.exists(out_dir):
os.makedirs(out_dir)
print('Created your directory: {}'.format(out_dir))
dd = get_sub_data(in_dir,sub_id)
d = dd['data']
mask_file = nb.load(join(mask_d,sub_id,'WM.nii.gz'))
mask = mask_file.get_data().astype(bool)
for iii in np.arange(d.shape[-1]):
data = d[...,iii][mask]
if iii == 0:
df = pd.DataFrame({name[iii]:data})
else:
df[name[iii]] = data
xVars = df[name[2:6]].apply(zscore).values.reshape(-1, 4)
fa_pred = reg_WM_FA.predict(xVars)  # predict FA in WM from the multiple-regression model fitted above
pred = np.zeros_like(mask).astype(float)
pred[mask] = fa_pred[:,0]
file = nb.Nifti1Image(pred,affine=mask_file.affine,header=mask_file.header)
file.to_filename(join(out_dir,'FA_predicted.nii.gz'))
fa_file = nb.load(join(in_dir,sub_id+'_FA_reg.nii.gz'))
fa_d = fa_file.get_data()
real = np.zeros_like(mask).astype(float)
real[mask] = fa_d[mask]
file = nb.Nifti1Image(real,affine=fa_file.affine,header=fa_file.header)
file.to_filename(join(out_dir,'FA_real.nii.gz'))
# -
# # Mapping masked data
#
# +
m_dir = '/data/neuralabc/carfra/QuantMetComp/source/masks_created/'
in_dir = '/data/neuralabc/source/MPI_CBS/MPM_DTI/source/'
o_dir = '/data/neuralabc/carfra/QuantMetComp/processing/MPM/MPM_correlations/metrics/'
all_dirs = glob(in_dir+'*')
sub_ids = [os.path.basename(x) for x in all_dirs]
# -
type_mask = "subcortex"
for sub_id in sub_ids:
mask_dir = join(m_dir,sub_id,type_mask+'.nii.gz')
mask = nb.load(mask_dir).get_data()
data_dirs = glob(join(in_dir,sub_id,'*.nii.gz'))
out_dir = join(o_dir,sub_id,type_mask)
if os.path.exists(join(o_dir,sub_id,'subcortical_GM')):
for ii in glob(join(o_dir,sub_id,'subcortical_GM/*')):
os.remove(ii)
os.rmdir(join(o_dir,sub_id,'subcortical_GM'))
if not os.path.exists(out_dir):
os.makedirs(out_dir)
for data in data_dirs:
data_f = nb.load(data)
data_d = data_f.get_data()
masked_d = data_d*mask
file = nb.Nifti1Image(masked_d,affine=data_f.affine,header=data_f.header)
file.to_filename(join(out_dir,os.path.basename(data)))
| 2_WM_Segmentations_and_correlations.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
# + pycharm={"name": "#%%\n"}
MC = 100
nColumns = 11
factor = np.linspace(0,1,nColumns)
# + pycharm={"name": "#%%\n"}
CN = 48
Folder = 'deltaTau_angle/'
rmse_vec = np.zeros((nColumns,MC))
tau_los_vec = np.zeros((nColumns,MC))
tau_nLos_vec = np.zeros((nColumns,MC))
tau_los_est_vec = np.zeros((nColumns,MC))
theta_los_vec = np.zeros((nColumns,MC))
theta_nLos_vec = np.zeros((nColumns,MC))
# + pycharm={"name": "#%%\n"}
for ii in range(nColumns):
results = []
for jj in range(MC):
iteration = ii
pars = np.array([CN, iteration, jj]).astype('str')
file_results = Folder + pars[0] + '_' + pars[1] + '_' + pars[2] + '.pkl'
results.append(pd.read_pickle(file_results))
data_concat = pd.concat(results)
rmse_vec[ii,:] = data_concat['rmse']
tau_los_vec[ii,:] = data_concat['tau_los']
tau_nLos_vec[ii,:] = data_concat['tau_nlos']
theta_los_vec[ii,:] = data_concat['theta_los']
theta_nLos_vec[ii,:] = data_concat['theta_nlos']
tau_los_est_vec[ii,:] = data_concat['tau_los_est']
# + pycharm={"name": "#%%\n"}
import matplotlib.pyplot as plt
# + pycharm={"name": "#%%\n"}
fig = plt.figure(figsize=(6, 4))
plt.plot(factor,np.mean(rmse_vec,axis=1))
plt.ylim((0,0.5))
plt.ylabel('RMSE(m)')
plt.xlabel(r'$\Delta_\tau / T_C$')
plt.grid()
plt.savefig('RMS.jpeg', dpi=fig.dpi)  # save before plt.show(), which clears the current figure
plt.show()
# + pycharm={"name": "#%%\n"}
rmse_dict = {'factor':factor,
'rmse':np.mean(rmse_vec,axis=1)}
rmse_df = pd.DataFrame(data=rmse_dict)
# + pycharm={"name": "#%%\n"}
rmse_df.to_csv('Latex_data/rmse_angle_latex.csv')
# + pycharm={"name": "#%%\n"}
| simulations/load_paths_deta_tau_angle.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Nbconvert: Export notebooks to other formats
#
# Notebook files (`.ipynb`) store your code, notes and output in an editable, executable form. But what if you want to send a notebook to your supervisor, who doesn't have Jupyter installed? Or publish it on your blog?
#
# **Nbconvert** is a tool to convert notebooks to other file formats, such as:
#
# * HTML web pages
# * PDF documents (generated by Latex)
# * Python scripts (`.py`) which you can run without Jupyter
# + [markdown] slideshow={"slide_type": "slide"}
# Nbconvert is integrated into the notebook editor application. If you're reading this in the notebook application, click *File -> Download as*:
#
# 
# + [markdown] slideshow={"slide_type": "slide"}
# You can also use nbconvert from the command line:
#
# jupyter nbconvert --to html 'Using Nbconvert.ipynb'
#
# Nbconvert is also a Python library, so you can write Python code to convert notebooks. There are [more details in the docs](http://nbconvert.readthedocs.io/en/latest/nbconvert_library.html).
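#
# For instance, a minimal sketch of the library API (assuming the notebook file sits in the current directory):
#
# ```python
# from nbconvert import HTMLExporter
#
# body, resources = HTMLExporter().from_filename("Using Nbconvert.ipynb")
# with open("Using Nbconvert.html", "w") as f:
#     f.write(body)
# ```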
# + [markdown] slideshow={"slide_type": "slide"}
# ### Challenge 1
#
# Make a `.html` file from this notebook, and open it up in your web browser.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Challenge 2
#
# Convert the notebook to a format like Markdown, reStructuredText, or LaTeX. What happens to the plot below, which is an image embedded in the notebook?
# -
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(-5, 5)
plt.plot(x, x**2)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Challenge 3
#
# We've added some metadata to this notebook to divide it up into slides. Can you use nbconvert to make a slideshow?
#
# <div class="alert alert-info">Use `--help` with command line tools to get more information. You could also [read the docs](http://nbconvert.readthedocs.io/en/latest/index.html) or use a search engine.</div>
# -
| intro_python/python_tutorials/jupyter-notebook_intro/Using Nbconvert.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/johnreyes96/artificial-intelligence/blob/master/src/main/python/SupportVectorMachines.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="k6QsJCvRYfTc" outputId="7e6e5411-75d9-4d42-8050-6584319a505a"
# Support vector machines (SVMs)
from sklearn import svm
X = [[0, 0], [1, 1]]
y = [0, 1]
clf = svm.SVC()
clf.fit(X, y)
# + colab={"base_uri": "https://localhost:8080/"} id="be5oDNS6Yhy4" outputId="494c065b-ff9f-4c76-b5d9-b44139df91c0"
clf.support_vectors_
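# After fitting, the model can classify new samples. A minimal usage sketch with an arbitrary test point:
#
# ```python
# clf.predict([[2., 2.]])  # -> array([1])
# ```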
| src/main/python/SupportVectorMachines.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ## Statistical Outlier Removal filter
#
# This chapter covers the Statistical Outlier Removal filter, one of the available noise-removal methods.
#
# For more details, see [Removing outliers using a StatisticalOutlierRemoval filter](http://pointclouds.org/documentation/tutorials/statistical_outlier.php#statistical-outlier-removal).
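#
# For each point the filter computes the mean distance to its `mean_k` nearest neighbors and keeps the point only if that mean distance stays below a global threshold (a note added here for clarity, following the PCL documentation linked above):
#
# $$ d_{max} = \mu + \alpha \cdot \sigma $$
#
# where $\mu$ and $\sigma$ are the mean and standard deviation of these per-point distances over the whole cloud, and $\alpha$ is the `tresh` (`set_std_dev_mul_thresh`) multiplier.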
# %load_ext watermark
# %watermark -d -v -p pcl,numpy
# +
# -*- coding: utf-8 -*-
from __future__ import print_function
import pcl
import numpy as np
import random
import os
os.chdir("/workspace/3D_People_Detection_Tracking")
# -
from include.visualization_helper import *
# %matplotlib inline
# ## Defining do_statistical_outlier_filtering
#
# Inputs
# - pcl_data : point cloud
# - mean_k : number of neighboring points to analyze for each point
# - tresh : standard-deviation multiplier used to decide whether a point is noise
#
# Output
# - filtered point cloud
def do_statistical_outlier_filtering(pcl_data,mean_k,tresh):
'''
:param pcl_data: point could data subscriber
:param mean_k: number of neighboring points to analyze for any given point
:param tresh: Any point with a mean distance larger than global will be considered outlier
:return: Statistical outlier filtered point cloud data
eg) cloud = do_statistical_outlier_filtering(cloud,10,0.001)
: https://github.com/fouliex/RoboticPerception
'''
outlier_filter = pcl_data.make_statistical_outlier_filter()
outlier_filter.set_mean_k(mean_k)
outlier_filter.set_std_dev_mul_thresh(tresh)
return outlier_filter.filter()
# ## Generating a Random Point Cloud
# +
cloud = pcl.PointCloud()
points = np.zeros((5, 3), dtype=np.float32)
RAND_MAX = 1024.0
for i in range(0, 5):
points[i][0] = 1024 * random.random () / RAND_MAX
points[i][1] = 1024 * random.random () / RAND_MAX
points[i][2] = 1024 * random.random () / RAND_MAX
cloud.from_array(points)
# +
print("Number of Points : {}".format(cloud.size))
for i in range(0, cloud.size):
print ('x: ' + str(cloud[i][0]) + ', y : ' + str(cloud[i][1]) + ', z : ' + str(cloud[i][2]))
if (cloud.size!=0):
visualization2D_xyz(cloud.to_array())
# -
# ## Running do_statistical_outlier_filtering
mean_k = 10
tresh = 0.001
cloud = do_statistical_outlier_filtering(cloud,mean_k,tresh)
# +
print("Number of Points : {}".format(cloud.size))
for i in range(0, cloud.size):
print ('x: ' + str(cloud[i][0]) + ', y : ' + str(cloud[i][1]) + ', z : ' + str(cloud[i][2]))
if (cloud.size!=0):
visualization2D_xyz(cloud.to_array())
| docs/500-noise-filter/510-Statistical_filter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: UT
# language: python
# name: ut
# ---
# # Drought Prediction in the Mediterranean
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.decomposition import PCA
from sklearn.preprocessing import KBinsDiscretizer, Binarizer
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import train_test_split
#from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import confusion_matrix, f1_score, precision_score, plot_confusion_matrix
from matplotlib import pyplot as plt
from imblearn.over_sampling import RandomOverSampler
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE
from collections import Counter
# %matplotlib inline
import numpy as np
import pandas as pd
import pickle
# +
# Train/Test Split - Just run once
X_tas = np.load('Data/tas_train.npy')
X_psl = np.load('Data/psl_train.npy')
X_hf = np.load('Data/heatflux_train.npy')
y=np.load('Data/nao_index_train.npy')
X = np.concatenate((X_tas, X_psl, X_hf),axis=1)
y=y.reshape(-1,1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
np.save('Data/train/X_train_wHF.npy', X_train)
np.save('Data/test/X_test_wHF.npy', X_test)
np.save('Data/train/y_train_wHF.npy', y_train)
np.save('Data/test/y_test_wHF.npy', y_test)
# -
# # Exploratory Data Analysis
#Load training data
X_train = np.load('Data/train/X_train_wHF.npy')
y_train = np.load('Data/train/y_train_wHF.npy')
print(X_train.shape)
print(y_train.shape)
# Data are standardized by column
X_df = pd.DataFrame(X_train)
print("Range of means: ",X_df.mean().max() - X_df.mean().min())
print("Range of stdevs: ",X_df.std().max() - X_df.std().min())
# No missing data (synthetic dataset, so none are expected)
X_df.isna().sum().sum()
# +
# PCA - Virtually all variance is explained in the first 200 PCs of press and the first 500 PCs of temp
num_components = [50,100,150,200,250,300,350,400,500]
explained_variance_temp = []
explained_variance_press = []
explained_variance_flux = []
for i in num_components:
pca_temp = PCA(n_components=i)
pca_press = PCA(n_components=i)
pca_flux = PCA(n_components=i)
X_tas_pca = pca_temp.fit_transform(X_train[:,0:2322])
X_psl_pca = pca_press.fit_transform(X_train[:,2322:4645])
X_hf_pca = pca_flux.fit_transform(X_train[:,4645:])
explained_variance_temp.append(pca_temp.explained_variance_ratio_.sum())
explained_variance_press.append(pca_press.explained_variance_ratio_.sum())
explained_variance_flux.append(pca_flux.explained_variance_ratio_.sum())
plt.plot(num_components,explained_variance_temp,'go',label="Temp")
plt.plot(num_components,explained_variance_press,"r^",label="Press")
plt.plot(num_components,explained_variance_flux,"b+",label="Heat Flux")
plt.title("PCA Explained Variance")
plt.xlabel("Num Components")
plt.ylabel("% Explained Variance")
plt.legend(loc="lower right")
plt.show()
# -
np.histogram(y_train, bins=20);
plt.title("NAO Index Histogram")
plt.xlabel("Index")
plt.ylabel("Frequency")
plt.hist(y_train,bins=20)
plt.show()
# #### Investigate Correlation between target variable and each coordinate
data = pd.DataFrame(X_train)
data.corrwith(pd.Series(y_train.reshape(720,))).hist(bins=15);
# # Preprocessing
# +
# Transform Training Data with PCA (~95% cutoff)
pca_temp = PCA(n_components=125)
pca_press = PCA(n_components=50)
pca_flux = PCA(n_components=350)
# After feedback from <NAME>, reduced the number of PCs.
# pca_temp = PCA(n_components=15)
# pca_press = PCA(n_components=15)
X_tas_pca = pca_temp.fit_transform(X_train[:,0:2322])
X_psl_pca = pca_press.fit_transform(X_train[:,2322:4645])
X_hf_pca = pca_flux.fit_transform(X_train[:,4645:])
X_train_pca = np.concatenate((X_tas_pca,X_psl_pca,X_hf_pca),axis=1)
# -
# Calculate reconstruction error
X_tas_projected = pca_temp.inverse_transform(X_tas_pca)
X_psl_projected = pca_press.inverse_transform(X_psl_pca)
X_hf_projected = pca_flux.inverse_transform(X_hf_pca)
loss_temp = ((X_train[:,0:2322]-X_tas_projected)**2).mean()
loss_press = ((X_train[:,2322:4645]-X_psl_projected)**2).mean()
loss_hf = ((X_train[:,4645:]-X_hf_projected)**2).mean()
print('MSE Temp: '+str(loss_temp))
print('MSE Press: '+str(loss_press))
print('MSE Heat Flux: '+str(loss_hf))
# +
y_train_bin = np.sign(y_train).reshape(-1,)
# kbin = KBinsDiscretizer(n_bins=3, encode='ordinal', strategy='kmeans')
# kbin = KBinsDiscretizer(n_bins=3, encode='ordinal', strategy='quantile')
# y_train_cat= kbin.fit_transform(y_train.reshape(-1,1)).reshape(-1,)
# print(pd.Series(y_train_cat).value_counts())
# print(kbin.bin_edges_)
# -
# # Random Forest Classifier
# ## Binary Classification (positive or negative NAOI)
rf = RandomForestClassifier(random_state=1337)
# HPO without PCA
## Random Forest ##
parameters = {'max_depth':(10, 100), 'min_samples_split':[5, 15], 'criterion':['entropy','gini']}
clf = GridSearchCV(rf, parameters,scoring='f1')
clf.fit(X_train,y_train_bin)
print(clf.best_score_)
print(clf.best_params_)
# Best F1 Score: 0.6484744900788914
#
# {'criterion': 'entropy', 'max_depth': 100, 'min_samples_split': 15}
results = pd.DataFrame(clf.cv_results_)
results
# HPO with PCA
## Random Forest ##
parameters = {'max_depth':(10, 100), 'min_samples_split':[5, 15], 'criterion':['entropy','gini']}
clf = GridSearchCV(rf, parameters,scoring='f1')
clf.fit(X_train_pca,y_train_bin)
print(clf.best_score_)
print(clf.best_params_)
# Best F1 Score: 0.6783414519943268
#
# {'criterion': 'entropy', 'max_depth': 10, 'min_samples_split': 15}
# Performance is equal, if not better, when using the PCA transform
results = pd.DataFrame(clf.cv_results_)
results
# Persist RF wPCA Model
with open('Data/RF_Bin_wPCA_wHF', 'wb') as f:
pickle.dump(clf, f)
# ## Testing
# Load Test Data
X_test = np.load('Data/test/X_test_wHF.npy')
y_test = np.load('Data/test/y_test_wHF.npy')
#transform test data
X_tas_pca_test = pca_temp.transform(X_test[:,0:2322])
X_psl_pca_test = pca_press.transform(X_test[:,2322:4645])
X_hf_pca_test = pca_flux.transform(X_test[:,4645:])
X_test_pca = np.concatenate((X_tas_pca_test,X_psl_pca_test,X_hf_pca_test),axis=1)
y_test_bin = np.sign(y_test)
plot_confusion_matrix(clf, X_test_pca, y_test_bin)
y_pred = clf.predict(X_test_pca)
print('F1 score: '+str(f1_score(y_test_bin, y_pred, average='binary')))
np.unique(y_test_bin, return_counts=True)
# # Try XGBoost
import xgboost as xgb
y_train_bin_xg = Binarizer().fit_transform(y_train)
y_test_bin_xg = Binarizer().fit_transform(y_test)
dtrain = xgb.DMatrix(X_train_pca, label=y_train_bin_xg)
dtest = xgb.DMatrix(X_test_pca, label=y_test_bin_xg)
param = {'max_depth':100, 'eta':1, 'objective': 'binary:logistic'}
param['eval_metric'] = 'auc'
evallist = [(dtest,'eval'),(dtrain,'train')]
num_round = 20
bst = xgb.train(param, dtrain, num_round, evallist)
y_pred = bst.predict(xgb.DMatrix(X_test_pca))
y_pred_bin = y_pred>=0.5
print('F1 score: '+str(f1_score(y_test_bin_xg, y_pred_bin, average='binary')))
| withHeatFlux.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## MNIST
#
# Download MNIST from Azure blob storage and train a small fully-connected classifier with PyTorch Lightning.
# ## Get Data
#
# The raw IDX files are read directly from the public `azuremlexamples` blob container via `adlfs`.
# +
from adlfs import AzureBlobFileSystem
container_name = "datasets"
storage_options = {"account_name": "azuremlexamples"}
# -
fs = AzureBlobFileSystem(**storage_options)
fs
files = fs.ls(f"{container_name}/mnist")
files
# ## Create a LightningDataModule
#
# This is tricky! Not!
# +
import gzip
import numpy as np
import pytorch_lightning as pl
from adlfs import AzureBlobFileSystem
from torch.utils.data import DataLoader
from sklearn.preprocessing import OneHotEncoder
class AzureMLMNISTDataModule(pl.LightningDataModule):
def __init__(self, batch_size: int = 10):
super().__init__()
self.batch_size = batch_size
def setup(self, stage=None):
data_dir = "datasets/mnist"
storage_options = {"account_name": "azuremlexamples"}
fs = AzureBlobFileSystem(**storage_options)
files = fs.ls(data_dir)
train_len = 60000
test_len = 10000
for f in files:
if "train-images" in f:
self.X_train = self._read_images(
gzip.open(fs.open(f)), train_len
)
elif "train-labels" in f:
self.y_train = self._read_labels(
gzip.open(fs.open(f)), train_len
)
elif "images" in f:
self.X_test = self._read_images(
gzip.open(fs.open(f)), test_len
)
elif "labels" in f:
self.y_test = self._read_labels(
gzip.open(fs.open(f)), test_len
)
self.ohe = OneHotEncoder().fit(self.y_train.reshape(-1, 1))
self.mnist_train = list(
zip(
self.X_train,
self.ohe.transform(self.y_train.reshape(-1, 1)).toarray(),
)
)
self.mnist_test = list(
zip(
self.X_test,
self.ohe.transform(self.y_test.reshape(-1, 1)).toarray(),
)
)
def _read_images(self, f, images):
image_size = 28
f.read(16) # magic
buf = f.read(image_size * image_size * images)
data = np.frombuffer(buf, dtype=np.uint8).astype(np.float32)
data = data.reshape(images, image_size, image_size, 1)
return data
def _read_labels(self, f, labels):
f.read(8) # magic
buf = f.read(1 * labels)
labels = np.frombuffer(buf, dtype=np.uint8).astype(np.int64)
return labels
def train_dataloader(self):
return DataLoader(self.mnist_train, batch_size=self.batch_size)
def test_dataloader(self):
return DataLoader(self.mnist_test, batch_size=self.batch_size)
# -
mnist = AzureMLMNISTDataModule()
mnist.setup()
# +
import matplotlib.pyplot as plt
for batch in mnist.mnist_test:
x, y = batch
plt.imshow(x.squeeze())
print(f"Label: {y}")
break
# -
# ## Fun Time!
#
# Define a simple fully connected network and train it on the prepared data.
# +
import torch
import pytorch_lightning as pl
from torch import nn
from torch.nn import functional as F
class System1(pl.LightningModule):
def __init__(self, batch_size):
        # store the batch size and build a simple fully connected network
super().__init__()
self.batch_size = batch_size
self.net = nn.Sequential(
nn.Linear(28 * 28, 128),
nn.ReLU(),
nn.Linear(128, 256),
nn.ReLU(),
nn.Linear(256, 10),
            nn.Softmax(dim=1),
)
def forward(self, x):
x = self.net(x)
return x
def training_step(self, batch, batch_idx):
x, y = batch
x = x.view(self.batch_size, -1)
y = y.view(self.batch_size, -1)
y_hat = self.forward(x)
loss = F.binary_cross_entropy(y_hat, y.float())
self.log("train_loss", loss)
        # accuracy: fraction of samples whose predicted class matches the one-hot label
        acc = (y_hat.argmax(dim=1) == y.argmax(dim=1)).float().mean()
self.log("train_acc", acc)
return {"loss": loss, "acc": acc}
def test_step(self, batch, batch_idx):
x, y = batch
x = x.view(self.batch_size, -1)
y = y.view(self.batch_size, -1)
y_hat = self.forward(x)
loss = F.binary_cross_entropy(y_hat, y.float())
self.log("test_loss", loss)
        acc = (y_hat.argmax(dim=1) == y.argmax(dim=1)).float().mean()
self.log("test_acc", acc)
return {"loss": loss, "acc": acc}
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
return optimizer
# -
system1 = System1(mnist.batch_size)
system1
# %%time
trainer = pl.Trainer(max_epochs=100)
trainer.fit(system1, mnist.train_dataloader())
# %%time
trainer.test(system1, mnist.test_dataloader())
| dev/mnist/remote.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# [Source](https://github.com/georgeliu1998/tf_and_colab/blob/master/tf_and_colab.ipynb)
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/georgeliu1998/tf_and_colab/blob/master/tf_and_colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] colab_type="text" id="bEf39ec2oxnk"
# ## Set up the Environment
# + colab={} colab_type="code" id="8ca5cfMGBv5S"
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
# + colab={"base_uri": "https://localhost:8080/", "height": 187} colab_type="code" id="xZ5mZWagHNgF" outputId="b7b73435-5e5c-4853-be15-2c3329f2af92"
# Check TensorFlow versions
# !pip show tensorflow
# + [markdown] colab_type="text" id="l5KlNi7Yo47w"
# ## Colab Essentials
# + [markdown] colab_type="text" id="mLa4liIS11kb"
# #### Opening up a New Notebook
#
# If you are using Colab for the very first time, you can visit [here](https://colab.research.google.com/). Once you have created a notebook, it will be saved in your Google Drive. You can access it from your Google Drive page by either double-clicking the file name or right-clicking and choosing Open with Colab.
#
# + [markdown] colab_type="text" id="Cq4M4kphfWsh"
# #### Shortcuts
#
# - Run cell: "Shift + Enter"
# - Delete cell: "Ctrl + M, then D"
# - Undo: "Ctrl + Shift + Z"
# - Convert to code cell: "Ctrl + M, then Y"
# - Convert to markdown cell: "Ctrl + M, then M"
# - Save notebook: "Ctrl + S"
# - Open up the shortcut screen: "Ctrl + M, then H"
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="mAqaMlh0fpcd" outputId="9d754336-f4fc-4ca7-a7a4-93bb673ed03a"
# A random cell to be deleted
print("Colab -> is cool")
# + [markdown] colab_type="text" id="vRBQ2gp2tBPJ"
# #### Connecting with GitHub
#
# "File" --> "Save a copy in GitHub"
# + [markdown] colab_type="text" id="R01sFrg5DupU"
# #### Enable GPU support
#
# "Runtime - Change runtime type - Hardware accelerator"
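# A minimal check (added here; not part of the original notebook) that the GPU runtime is actually visible
# to TensorFlow 1.x after changing the hardware accelerator:
device_name = tf.test.gpu_device_name()
print(device_name if device_name else "No GPU found")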
# + [markdown] colab_type="text" id="sXReAo4YpL86"
# ## Several Quick Examples
# + [markdown] colab_type="text" id="anlrl0tjqjRN"
# #### Constant and Variable
# + colab={} colab_type="code" id="DbcIXRyv8cze"
# Define two TensorFlow constants
a = tf.constant(1, name='a_var')
b = tf.constant(2, name='b_bar')
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="VVn8yWcfaRdS" outputId="fa07235b-334e-4592-b338-2ebbea820b98"
# See what the constant a is
a
# + colab={} colab_type="code" id="oHRDFr3AlR7c"
# Define a variable c
c = tf.Variable(a + b)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="D0Y86dwwaTyK" outputId="c0fed1dc-8f17-4e70-ec99-147719101f9f"
# Check out c
c
# + [markdown] colab_type="text" id="NB4C5ijDqx4M"
# #### Session
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="yCSC-2dZlT6g" outputId="bcd35e15-ed26-428a-8f22-7389bd1718ac"
# Initialize all variables and run the computational graph
init = tf.global_variables_initializer()
with tf.Session() as session:
session.run(init)
print(session.run(c))
# + [markdown] colab_type="text" id="GakUyY5vlzsY"
# #### Placeholder
#
# Now let's show the use of placeholder. We'll first use a parabola equation as below:
#
# $y = a x^2+bx+c$
#
# Here, imagine that x, instead of being just one number, is a list of numbers (a vector). To calculate the corresponding y value for each x value, we can use a placeholder.
# + colab={} colab_type="code" id="rYzKUwolbj7U"
# Initialize the coefficients as constants
a = tf.constant(1, dtype=tf.float32)
b = tf.constant(-20, dtype=tf.float32)
c = tf.constant(-100, dtype=tf.float32)
# + colab={} colab_type="code" id="w_Rx6po9bjzU"
# Initialize x as a placeholder since we need to feed the data for it
x = tf.placeholder(dtype=tf.float32)
# + colab={} colab_type="code" id="JPBcHbY0bjnV"
# Set up the computational graph
y = a * (x ** 2) + b * x + c
# + colab={} colab_type="code" id="Dl_kL56aQ2ZK"
# Provide the feed in data for x
x_feed = np.linspace(-10, 30, num=10)
# + colab={} colab_type="code" id="_usMBj99Cg0x"
# Start and run a session
with tf.Session() as sess:
results = sess.run(y, feed_dict={x: x_feed})
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="7FHFb7KkTikf" outputId="b191a1c0-63f1-474a-b991-9a74040a9be6"
print(results)
# + [markdown] colab_type="text" id="cGjt7YwTpTYZ"
# ## A Mini Project
# + colab={"base_uri": "https://localhost:8080/", "height": 170} colab_type="code" id="4Grz7gwKtpI4" outputId="ba351836-1952-4c35-ca73-d3aa308d3cea"
# A simple example taken from TensorFlow Guide: https://www.tensorflow.org/guide/low_level_intro
# Define the placeholders
x = tf.placeholder(dtype=tf.float32, shape=(None, 1))
y_true = tf.placeholder(dtype=tf.float32, shape=(None, 1))
# Create the model
linear_model = tf.layers.Dense(units=1,
bias_initializer=tf.constant_initializer(1))
y_pred = linear_model(x)
# Define the loss function
loss = tf.losses.mean_squared_error(labels=y_true,
predictions=y_pred)
# Define the optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
#Initialize all variables (weights and biases in the defined layer above)
init = tf.global_variables_initializer()
# Provide the training examples
x_values = np.array([[1], [2], [3], [4]])
y_values = np.array([[0], [-1], [-2], [-3]])
with tf.Session() as sess:
sess.run(init)
# Do 1000 rounds of training
for i in range(1000):
_, loss_value = sess.run((train, loss), feed_dict={x: x_values, y_true: y_values})
print(loss_value, end='\r')
weights = sess.run(linear_model.weights)
bias = sess.run(linear_model.bias)
preds = sess.run(y_pred,
feed_dict={x: x_values})
print("The weight is: ", weights)
print('\r')
print("The bias is: ", bias)
print('\r')
print("The predictions are: \n", preds)
# + colab={} colab_type="code" id="7ilYm8Wu5dv9"
# Get the weight
w = weights[0].tolist()[0][0]
# + colab={} colab_type="code" id="aD0BmeBQ59rE"
# Get the bias
b = weights[1].tolist()[0]
# + colab={"base_uri": "https://localhost:8080/", "height": 85} colab_type="code" id="vD25uBEw6G4q" outputId="ffe112b8-9b6e-494a-eb46-ff4e86d4adf0"
# Make predictions based on the weight and bias
x_values * w + b
| Lectures notebooks/(Lectures notebooks) netology Machine learning/14. Introduction to neural networks/tf_and_colab.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="OUMmqbXU9-nh" colab_type="text"
# Given salary values from a sample of graduates: 100, 80, 75, 77, 89, 33, 45, 25, 65, 17, 30, 24, 57, 55, 70, 75, 65, 84, 90, 150. Compute (preferably without using statistical library functions such as std, var, mean) the arithmetic mean, the standard deviation, and the biased and unbiased estimates of the variance for this sample.
# + id="noJbzt4W97hD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="5d3f3ef9-2424-48c8-ae31-85070e3d6da9"
import math
def srednee(arr):
summa = sum(arr)
length = len(arr)
return summa/length
def disp_nesm(arr):
mean = srednee(arr)
chislitel = 0
for i in arr:
chislitel += pow((i-mean), 2)
znamenatel = len(arr)-1
return chislitel/znamenatel
def disp_sm(arr):
mean = srednee(arr)
chislitel = 0
for i in arr:
chislitel += pow((i-mean), 2)
znamenatel = len(arr)
return chislitel/znamenatel
def srednee_otklonenie(arr):
return math.sqrt(disp_nesm(arr))
zp = [100, 80, 75, 77, 89, 33, 45, 25, 65, 17, 30, 24, 57, 55, 70, 75, 65, 84, 90, 150]
print(f'Arithmetic mean {srednee(zp)}, standard deviation {srednee_otklonenie(zp)}, biased variance estimate {disp_sm(zp)}, unbiased variance estimate {disp_nesm(zp)}')
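# As a cross-check (not part of the original homework), the hand-written estimators can be compared with
# numpy's built-ins, where ddof=0 gives the biased variance and ddof=1 the unbiased one:
import numpy as np
assert np.isclose(srednee(zp), np.mean(zp))
assert np.isclose(disp_sm(zp), np.var(zp, ddof=0))
assert np.isclose(disp_nesm(zp), np.var(zp, ddof=1))
assert np.isclose(srednee_otklonenie(zp), np.std(zp, ddof=1))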
# + [markdown] id="W8tzv5ohCZmW" colab_type="text"
# The first box contains 8 balls, 5 of which are white. The second box contains 12 balls, 5 of which are white. Two balls are drawn at random from the first box and 4 from the second. What is the probability that exactly 3 of the drawn balls are white?
# + id="Jaa7Oc9WDV5k" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="a60d0b20-5aa1-4430-adec-d36a730a02a5"
from math import factorial
def C(n, k):  # number of combinations (binomial coefficient)
return int(factorial(n) / (factorial(k) * factorial(n - k)))
#two white balls from the first box and one white from the second
P1 = (C(5,2)/C(8,2))*(C(5,1)*C(7,3)/C(12,4))
#one white ball from the first box and two white from the second
P2 = (C(5,1)*C(3,1)/C(8,2))*(C(5,2)*C(7,2)/C(12,4))
#two black balls from the first box and three white from the second
P3 = (C(3,2)/C(8,2))*(C(5,3)*C(7,1)/C(12,4))
P = (P1+P2+P3)*100
print(f'Probability {P}%')
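# A small Monte Carlo simulation (added as a cross-check of the combinatorial answer above): draw 2 balls
# from the first box and 4 from the second many times, and count how often exactly 3 white balls appear.
import random
n_sim = 100000
box1 = ['w'] * 5 + ['b'] * 3
box2 = ['w'] * 5 + ['b'] * 7
hits = sum(1 for _ in range(n_sim)
           if (random.sample(box1, 2) + random.sample(box2, 4)).count('w') == 3)
print(f'Simulated probability: {100 * hits / n_sim:.2f}%')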
# + [markdown] id="U9WMyUcCOUG-" colab_type="text"
# At a biathlon competition, one of three athletes shoots and hits the target. The probability of hitting the target is 0.9 for the first athlete, 0.8 for the second, and 0.6 for the third. Find the probability that the shot was fired by: a) the first athlete, b) the second athlete, c) the third athlete.
# + id="E1x-JlH8Ridf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="db4a22b6-938c-4d7b-874f-218d9819191c"
PA = 1/3 * 0.9 + 1/3 * 0.8 + 1/3 * 0.6  # by the law of total probability
P_B123 = 1/3
P1 = P_B123 * 0.9 / PA
P2 = P_B123 * 0.8 / PA
P3 = P_B123 * 0.6 / PA
print(f"Probability that it was the first athlete {P1}, the second {P2}, the third {P3}")
# + [markdown] id="wBJmaf3ESFMD" colab_type="text"
# Equal numbers of students were admitted to faculties A and B of a university, while faculty C admitted as many students as A and B combined. The probability that a student of faculty A passes the first exam session is 0.8; for a student of faculty B this probability is 0.7, and for a student of faculty C it is 0.9. A student has passed the first session. What is the probability that this student studies: a) at faculty A, b) at faculty B, c) at faculty C?
# + id="vJuXFCfPVrlL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="14c24ac8-a427-473c-f9c0-a7c6a9242199"
x = 1  # placeholder so Python does not complain
ver_A = 0.8
ver_B = 0.7
ver_C = 0.9
A_students = x
B_students = x
C_students = 2*x
total_students = A_students+B_students+C_students
PA = A_students/total_students
PB = B_students/total_students
PC = C_students/total_students
P_sdal = ver_A * PA + ver_B * PB + ver_C* PC
P_studiesA = PA * ver_A / P_sdal
P_studiesB = PB * ver_B / P_sdal
P_studiesC = PC * ver_C / P_sdal
print(f'Studies at faculty A {P_studiesA}, faculty B {P_studiesB}, faculty C {P_studiesC}')
# + [markdown] id="7ukM0MDUXWWV" colab_type="text"
# A device consists of three parts. The probability that the first part fails during the first month is 0.1; for the second part it is 0.2, and for the third 0.25. What is the probability that during the first month: a) all parts fail, b) exactly two parts fail, c) at least one part fails, d) from one to two parts fail?
# + id="s5V6JUwZkBDM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="624af88a-94bf-4b55-c548-ab2a85e07842"
p1=0.1
p2=0.2
p3=0.25
q1=1-p1 # complementary probability
q2=1-p2
q3=1-p3
#all parts fail
p_all = p1*p2*p3
#exactly two parts fail
p_only_two = p1*p2*q3 + q1*p2*p3 + p1*q2*p3
#at least one part fails
p_one_or_more = 1 - q1*q2*q3
#from one to two parts fail
p_one_or_two = p1*q2*q3 + q1*p2*q3 + q1*q2*p3 + p_only_two
print(f'All parts {p_all}, exactly two {p_only_two}, at least one part {p_one_or_more}, from one to two parts {p_one_or_two}')
| lesson3/lesson_3_DZ.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Extract Medals data using Pandas
# +
import requests
import pandas as pd
url = 'https://olympics.com/tokyo-2020/paralympic-games/en/results/all-sports/medal-standings.htm'
html = requests.get(url).content
medals_list = pd.read_html(html)
len(medals_list)
# -
#0th entry of medals_list contains the overall medal table
medals = medals_list[0]
medals.head()
# +
#Name columns properly
medals.rename(columns={'Unnamed: 2' : 'Gold', 'Unnamed: 3' : 'Silver',
'Unnamed: 4': 'Bronze', 'RankbyTotal': 'Rank by Total'},
inplace=True)
#Drop the 'NPCCode' column
medals.drop(columns=['NPCCode'], inplace=True)
#Take a look at the modified table
medals.head()
# -
#Export data to a csv file
medals.to_csv('Medals.csv', index=False)
# # Extract Athletes data using Selenium
# +
import time
import re
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome()
url = 'https://olympics.com/tokyo-2020/paralympic-games/en/results/all-sports/medal-standings.htm'
driver.get(url)
time.sleep(2)
#Cookies
not_accept_xpath = '//*[@id="onetrust-pc-btn-handler"]'
not_accept = driver.find_element_by_xpath(not_accept_xpath)
not_accept.click()
time.sleep(2)
cookies_xpath = '//*[@id="onetrust-pc-sdk"]/div[3]/div[1]/button[1]'
cookies = driver.find_element_by_xpath(cookies_xpath)
cookies.click()
time.sleep(3)
#Athletes tab
athletes_button_xp = '/html/body/section[2]/div/header/div[2]/div[1]/div/div[2]/nav/ul[1]/li[6]/a'
athletes_button = driver.find_element_by_xpath(athletes_button_xp)
athletes_button.click()
time.sleep(3)
#Extract html of the athletes table
athletes_table_xp = '//*[@id="entries-table"]'
athletes_table = driver.find_element_by_xpath(athletes_table_xp)
athletes_html_full = athletes_table.get_attribute('outerHTML')
#Remove undesirable text patterns in the table
remove_pattern_1 = r';[^>]+;'
remove_pattern_2 = r'<span class="d-md-none">[^>]+</span>'
remove_pattern = r'|'.join((remove_pattern_1, remove_pattern_2))
athletes_html_clean = re.sub(remove_pattern, '', athletes_html_full)
#Record 1st page data
athletes_p1_list = pd.read_html(athletes_html_clean)
athletes_p1 = athletes_p1_list[0]
athletes_p1.index = athletes_p1.index + 1 + 20*0
athletes_list = [athletes_p1]
time.sleep(2)
#Record pages 2-227
for i in range(2, 228):
j = i
if i > 5:
j = 5
if i == 226:
j = 6
if i == 227:
j = 7
next_page_xpath = f'//*[@id="entries-table_paginate"]/ul/li[{j+1}]/a'
next_page = driver.find_element_by_xpath(next_page_xpath)
next_page.click()
athletes_html_full = athletes_table.get_attribute('outerHTML')
athletes_html_clean = re.sub(remove_pattern, '', athletes_html_full)
athletes_nextp_list = pd.read_html(athletes_html_clean)
athletes_nextp = athletes_nextp_list[0]
athletes_nextp.index = athletes_nextp.index + 1 + 20*(i-1)
athletes_list.append(athletes_nextp)
time.sleep(2)
athletes_final = pd.concat(athletes_list)
# -
#Check out the resulting table
athletes_final.head()
#Export data to a csv file
athletes_final.to_csv('Athletes.csv', index=False)
# # Extract Gender by Discipline data using pandas
# +
import requests
import pandas as pd
url = 'https://olympics.com/tokyo-2020/paralympic-games/en/results/all-sports/entries-by-discipline.htm'
html = requests.get(url).content
gender_list = pd.read_html(html)
len(gender_list)
# -
#0th entry of gender_list contains the gender-by-discipline table
gender = gender_list[0]
gender.tail()
# +
#Remove redundant multiindexing in column titles
gender.columns = ['Discipline', 'F', 'M', 'Total']
#Remove the last row
gender = gender[:-1]
#Display data
gender.style.set_properties(**{'text-align': 'right'})
# -
#Export data to a csv file
gender.to_csv('GenderByDiscipline.csv', index=False)
# # Extract Coaches data using Selenium
# +
import re
import time
import pandas as pd
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Chrome()
url = 'https://olympics.com/tokyo-2020/paralympic-games/en/results/all-sports/coaches.htm'
driver.get(url)
time.sleep(2)
#Cookies
not_accept_xpath = '//*[@id="onetrust-pc-btn-handler"]'
not_accept = driver.find_element_by_xpath(not_accept_xpath)
not_accept.click()
time.sleep(2)
cookies_xpath = '//*[@id="onetrust-pc-sdk"]/div[3]/div[1]/button[1]'
cookies = driver.find_element_by_xpath(cookies_xpath)
cookies.click()
time.sleep(3)
#Extract html of the coaches table
coaches_table_xp = '//*[@id="mainContainer"]/div/div[1]/div[1]/div[3]'
coaches_table = driver.find_element_by_xpath(coaches_table_xp)
coaches_html_full = coaches_table.get_attribute('outerHTML')
#Remove undesirable text patterns in the table
remove_pattern_1 = r';[^>]+;'
remove_pattern_2 = r'<span class="d-md-none">[^>]+</span>'
remove_pattern = r'|'.join((remove_pattern_1, remove_pattern_2))
coaches_html_clean = re.sub(remove_pattern, '', coaches_html_full)
#Record 1st page data
coaches_p1_list = pd.read_html(coaches_html_clean)
coaches_p1 = coaches_p1_list[0]
coaches_p1.index = coaches_p1.index + 1 + 20*0
coaches_list = [coaches_p1]
time.sleep(2)
#Record pages 2-5
for i in range(2, 6):
next_page_xpath = f'//*[@id="entries-table_paginate"]/ul/li[{i+1}]/a'
next_page = driver.find_element_by_xpath(next_page_xpath)
next_page.click()
coaches_html_full = coaches_table.get_attribute('outerHTML')
coaches_html_clean = re.sub(remove_pattern, '', coaches_html_full)
coaches_nextp_list = pd.read_html(coaches_html_clean)
coaches_nextp = coaches_nextp_list[0]
coaches_nextp.index = coaches_nextp.index + 1 + 20*(i-1)
coaches_list.append(coaches_nextp)
time.sleep(2)
coaches_final = pd.concat(coaches_list)
coaches_final.rename(columns={'Event': 'Gender'}, inplace=True)
# -
coaches_final
#Export data to a csv file
coaches_final.to_csv('Coaches.csv', index=False)
# # Extract Medalists data using Selenium
# +
import re
import time
import requests
import pandas as pd
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Chrome()
url = 'https://olympics.com/tokyo-2020/paralympic-games/en/results/all-sports/multi-medalists.htm'
driver.get(url)
time.sleep(2)
#Cookies
not_accept_xpath = '//*[@id="onetrust-pc-btn-handler"]'
not_accept = driver.find_element_by_xpath(not_accept_xpath)
not_accept.click()
time.sleep(2)
cookies_xpath = '//*[@id="onetrust-pc-sdk"]/div[3]/div[1]/button[1]'
cookies = driver.find_element_by_xpath(cookies_xpath)
cookies.click()
time.sleep(3)
#Extract html of the multi-medallists table
mmedallists_table_xp = '//*[@id="multi_medallist"]/div[2]'
mmedallists_table = driver.find_element_by_xpath(mmedallists_table_xp)
mmedallists_html_full = mmedallists_table.get_attribute('innerHTML')
#Record 1st page data
mmedallists_p1_list = pd.read_html(mmedallists_html_full)
mmedallists_p1 = mmedallists_p1_list[0]
mmedallists_p1.index = mmedallists_p1.index + 1 + 20*0
mmedallists_list = [mmedallists_p1]
time.sleep(2)
# +
import re
import time
import requests
import pandas as pd
url = 'https://olympics.com/tokyo-2020/paralympic-games/en/results/all-sports/multi-medalists.htm'
mmedallists_html_full = requests.get(url).content
mmedallists = pd.read_html(mmedallists_html_full)
mmedallists[0]
# -
# +
#Clean up the data frame
remove_pattern_1 = r'Sport Class:[^>]+>[^>]+>'
remove_pattern_2 = r'<span class="d-md-none">[^>]+</span>'
remove_pattern_3 = r'<abbr class="noc"\s[^<]+</abbr>'
remove_pattern = r'|'.join((remove_pattern_1, remove_pattern_2, remove_pattern_3))
mmedallists_html_clean_1 = re.sub(remove_pattern, '', mmedallists_html_full.decode("utf-8"))
replace_pattern_gold = r'<img class="medal-icon"\s[^\s]+\salt="Gold Medal">'
replace_pattern_silver = r'<img class="medal-icon"\s[^\s]+\salt="Silver Medal">'
replace_pattern_bronze = r'<img class="medal-icon"\s[^\s]+\salt="Bronze Medal">'
mmedallists_html_clean_2 = re.sub(replace_pattern_gold, 'Gold', mmedallists_html_clean_1)
mmedallists_html_clean_3 = re.sub(replace_pattern_silver, 'Silver', mmedallists_html_clean_2)
mmedallists_html_clean = re.sub(replace_pattern_bronze, 'Bronze', mmedallists_html_clean_3)
mmedallists_clean = pd.read_html(mmedallists_html_clean)
mmedallists = mmedallists_clean[0]
mmedallists.head(10)
# -
#Group identical rows
idet_cols = ['Rank', 'Name', 'Sport', 'Total']
mmedallists['Event'] = mmedallists.groupby(idet_cols)['Event'].transform(lambda x: '; '.join(x))
mmedallists['Medal'] = mmedallists.groupby(idet_cols)['Medal'].transform(lambda x: '; '.join(x))
mmedallists.drop_duplicates(inplace=True, ignore_index=True)
mmedallists
#Count Gold, Silver, and Bronze
mmedallists = mmedallists.assign(Gold = mmedallists['Medal'].str.count('Gold'))
mmedallists = mmedallists.assign(Silver = mmedallists['Medal'].str.count('Silver'))
mmedallists = mmedallists.assign(Bronze = mmedallists['Medal'].str.count('Bronze'))
mmedallists
#Remove redundant columns
mmedallists.drop(columns=['Event', 'Medal'], inplace=True)
mmedallists = mmedallists.reindex(columns=['Rank', 'Name', 'Sport', 'Gold', 'Silver', 'Bronze', 'Total'])
mmedallists
#Export data to a csv file
mmedallists.to_csv('MultiMedallists.csv', index=False)
# +
#Script to look for matches
switch_pattern = re.compile(r'<abbr class="noc" title=("[^"]+")>[^<]</abbr>')
mmedallists_html_clean = re.sub(remove_pattern, '', mmedallists_html_full.decode("utf-8"))
matches = re.finditer(remove_pattern, mmedallists_html_full.decode("utf-8"))  # remove_pattern is a plain string, so use re.finditer
for match in matches:
print(match)
# +
mmedallists_clean = pd.read_html(mmedallists_html_clean)
mmedallists = mmedallists_clean[0]
mmedallists.head(10)
# -
switch_pattern
mmedallists_html_full = requests.get(url).content
mmedallists = pd.read_html(mmedallists_html_full)
mmedallists[0]
switch_pattern = re.compile(r'<abbr class="noc" title=("[^"]+")>([^<]+)</abbr>')
mmedallists_html_clean = switch_pattern.sub(r'<abbr class="noc" title=\2>\1</abbr>', mmedallists_html_full.decode("utf-8"))
# +
mmedallists_clean = pd.read_html(mmedallists_html_clean)
mmedallists = mmedallists_clean[0]
mmedallists.head(10)
# -
mmedallists = mmedallists.assign(Team = mmedallists['Name'].str.extract('"([^"]+)"'))
mmedallists
| Scraping/DataExtraction/.ipynb_checkpoints/ExtractDataScript-Copy1-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %pylab inline
import re
import math
import string
import pandas as pd
import csv
from collections import Counter
from __future__ import division
TEXT = open('data/big.txt').read()
def tokens(text):
return re.findall('[a-z]+', text.lower())
WORDS = tokens(TEXT)
COUNTS_BIG = Counter(WORDS)
# with open('count_1w.csv', mode='r') as counts_1:
# reader = csv.reader(counts_1)
# for rows in reader:
# mydict = {rows[0]:int(rows[1]) for rows in reader}
# COUNTS_SPELL = Counter(mydict)
COUNTS = COUNTS_BIG
print(COUNTS.most_common(10))
spell = open('data/spell-errors.txt','r')
line = spell.readline()
spell_errors = []
while line:
spell_errors.append(re.findall('[a-z]+', line.lower()))
line = spell.readline()
spell.close()
def correct(word):
candidates = (known(edits0(word)) or
known(edits1(word)) or
known(edits2(word)) or
[word])
return max(candidates, key=COUNTS.get)
# +
def known(words):
    # Return the subset of words that are actually in the dictionary
return {w for w in words if w in COUNTS}
def edits0(word):
return {word}
def edits2(word):
return {e2 for e1 in edits1(word) for e2 in edits1(e1)}
# +
def edits1(word):
pairs = splits(word)
deletes = [a+b[1:] for a, b in pairs if b]
transposes = [a+b[1]+b[0]+b[2:] for a, b in pairs if len(b) > 1]
replaces = [a+c+b[1:] for a, b in pairs for c in alphabet if b]
inserts = [a+c+b for a, b in pairs for c in alphabet]
return set(deletes + transposes + replaces + inserts)
def splits(word):
return [(word[:i], word[i:])
for i in range(len(word)+1)]
alphabet = 'abcdefghijklmnopqrstuvwxyz'
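# A quick usage example (added for illustration; the exact result depends on the word frequencies in big.txt):
# a misspelling such as 'speling' should typically be corrected to 'spelling', the most frequent known word
# within edit distance 1 of the input.
print(correct('speling'))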
# +
def correct_text(text):
return re.sub('[a-zA-Z]+', correct_match, text)
def correct_match(match):
word = match.group()
return case_of(word)(correct(word.lower()))
def case_of(text):
return (str.upper if text.isupper() else
str.lower if text.islower() else
str.title if text.istitle() else
str)
# +
def correct_spell_err(text):
return re.sub('[a-zA-Z]+', correct_matcher, text)
def correct_matcher(match):
word = match.group()
return case_of1(word)(prime_check(word.lower()))
def case_of1(text):
return (str.upper if text.isupper() else
str.lower if text.islower() else
str.title if text.istitle() else
str)
# -
def prime_check(word):
for i in range(len(spell_errors)):
for j in range(len(spell_errors[i])):
if(spell_errors[i][j] == word):
return spell_errors[i][0]
return word
test = pd.read_csv("test.csv")
x_test = []
for index, row in test.iterrows():
check = correct_spell_err(row['WRONG'])
if check == row['WRONG']:
check = correct_text(row['WRONG'])
x_test.append(check)
correct_spell = pd.DataFrame()
correct_spell['CORRECT'] = x_test
writer = pd.ExcelWriter('data/test_darshil.xlsx')
correct_spell.to_excel(writer,'sheet1')
writer.save()
| Home Work SpellChecker.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Case 3. Downloading and extracting the dataset
# <NAME>
# 22.2.2018
# Cognitive Systems for Health Technology Applications
# Helsinki Metropolia University of Applied Sciences
# Download a local copy of the tar-file
from urllib.request import urlretrieve
url = r"http://disi.unitn.it/moschitti/corpora/ohsumed-first-20000-docs.tar.gz"
dst = 'ohsumed-first-20000-docs.tar.gz'
urlretrieve(url, dst)
# Extract the tarfile. Creates a folder: ohsumed-first-20000-docs
import tarfile
tar = tarfile.open("ohsumed-first-20000-docs.tar.gz")
tar.extractall()
tar.close()
| Case 3. Downloading and extracting the dataset.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Datasets
#
# ## Machine learning datasets / Example 1: The digits dataset
#
#
# http://scikit-learn.org/stable/auto_examples/datasets/plot_digits_last_image.html
#
# The purpose of this example is to introduce how to work with the built-in machine learning example datasets; it is especially suitable for beginners and for teaching.
#
# ## (1) Import libraries and the built-in handwritten digits dataset
# +
#This line is specific to the IPython notebook interface; it can be removed in other environments
# %matplotlib inline
from sklearn import datasets
import matplotlib.pyplot as plt
#Load the digits dataset
digits = datasets.load_digits()
#Plot one of the images (here the last one in the dataset)
plt.figure(1, figsize=(3, 3))
plt.imshow(digits.images[-1], cmap=plt.cm.gray_r, interpolation='nearest')
plt.show()
# -
# ## (2) Introduction to the dataset
# `digits = datasets.load_digits()` stores a dict-like object in `digits`; we can use the code below to inspect its contents
for key,value in digits.items() :
try:
print (key,value.shape)
except:
print (key)
# | Key | Description |
# | -- | -- |
# | ('images', (1797L, 8L, 8L))| 1797 images in total, each of size 8x8 |
# | ('data', (1797L, 64L)) | the 8x8 matrix flattened into a one-dimensional vector of 64 elements |
# | ('target_names', (10L,)) | the mapping of the 10 classes [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] |
# | DESCR | description of the dataset |
# | ('target', (1797L,))| which digit each of the 1797 images represents |
#
# Next, we use the commands below to inspect the data file; the actual digit corresponding to each image is stored in the `digits.target` variable
images_and_labels = list(zip(digits.images, digits.target))
for index, (image, label) in enumerate(images_and_labels[:4]):
plt.subplot(2, 4, index + 1)
plt.axis('off')
plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
plt.title('Training: %i' % label)
# 
#Next, display the description of this machine learning dataset
print(digits['DESCR'])
# This description file explains that the dataset was created in 1998 by `<NAME>, <NAME>, Department of Computer Engineering,
# Bogazici University, Istanbul Turkey`. The handwriting comes from a total of 43 people; the digits were originally captured as 32x32 bitmap images and later processed into 8x8 images, with grayscale values recorded as integers in the range 0-16.
# ## (3) Related application examples
# Among the scikit-learn application examples, the following ones make use of this handwritten digits dataset. This dataset is especially well suited for machine learning beginners to understand the principles of classification and their more advanced applications
#
# * [Classification](Classification/Classification.md)
# * [Ex 1: Recognizing hand-written digits](Classification/ex1_Recognizing_hand-written_digits.md)
# * [Feature Selection](Feature_Selection/intro.md)
# * [Ex 2: Recursive Feature Elimination](Feature_Selection/ex2_Recursive_feature_elimination.md)
# * [Ex 3: Recursive Feature Elimination with Cross-Validation](Feature_Selection/ex3_rfe_crossvalidation__md.md)
| Datasets/ipython_notebook/ex1_the_digits_dataset.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import argparse
from os.path import dirname
import torch
import torchvision
import os
import numpy as np
import tqdm
import matplotlib.pyplot as plt
from utils.models import PreprocessLayer
from utils.models import Classifier
from torch.utils.tensorboard import SummaryWriter
from utils.loader import Loader
from utils.loss import cross_entropy_loss_and_accuracy
from utils.dataset import NCaltech101
from torch.utils.data.dataloader import default_collate
# -
torch.manual_seed(777)
np.random.seed(777)
# +
validation_dataset="/ws/data/N-Caltech101/validation/"
training_dataset="/ws/data/N-Caltech101/training/"
log_dir="/ws/external/log_1.4_b4/temp"
device="cuda:0"
num_workers=4
pin_memory=True
batch_size=4
num_epochs=2
save_every_n_epochs=2
checkpoint = "/ws/external/log_1.4_b4/model_best.pth" # model_best.pth checkpoint_13625_0.5990.pth
assert os.path.isdir(dirname(log_dir)), f"Log directory root {dirname(log_dir)} not found."
assert os.path.isdir(validation_dataset), f"Validation dataset directory {validation_dataset} not found."
assert os.path.isdir(training_dataset), f"Training dataset directory {training_dataset} not found."
print(f"----------------------------\n"
f"Starting training with \n"
f"num_epochs: {num_epochs}\n"
f"batch_size: {batch_size}\n"
f"device: {device}\n"
f"log_dir: {log_dir}\n"
f"training_dataset: {training_dataset}\n"
f"validation_dataset: {validation_dataset}\n"
f"----------------------------")
# +
def percentile(t, q):
B, C, H, W = t.shape
k = 1 + round(.01 * float(q) * (C * H * W - 1))
result = t.view(B, -1).kthvalue(k).values
return result[:,None,None,None]
def create_image(representation):
B, C, H, W = representation.shape
representation = representation.view(B, 3, C // 3, H, W).sum(2)
# do robust min max norm
representation = representation.detach().cpu()
robust_max_vals = percentile(representation, 99)
robust_min_vals = percentile(representation, 1)
representation = (representation - robust_min_vals)/(robust_max_vals - robust_min_vals)
representation = torch.clamp(255*representation, 0, 255).byte()
representation = torchvision.utils.make_grid(representation)
return representation
# +
class Loader:
def __init__(self, dataset, batch_size=2, num_workers=2, pin_memory=True, device="cuda:0"):
self.device = device
split_indices = list(range(len(dataset)))
sampler = torch.utils.data.sampler.SubsetRandomSampler(split_indices)
self.loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, sampler=sampler,
num_workers=num_workers, pin_memory=pin_memory,
collate_fn=collate_events)
def __iter__(self):
for data in self.loader:
data = [d.to(self.device) for d in data]
yield data
def __len__(self):
return len(self.loader)
def collate_events(data):
labels = []
events = []
for i, d in enumerate(data):
labels.append(d[1])
ev = np.concatenate([d[0], i*np.ones((len(d[0]),1), dtype=np.float32)],1)
events.append(ev)
events = torch.from_numpy(np.concatenate(events,0))
labels = default_collate(labels)
return events, labels
# +
# datasets, add augmentation to training set
training_dataset = NCaltech101(training_dataset, augmentation=True)
validation_dataset = NCaltech101(validation_dataset)
# construct loader, handles data streaming to gpu
training_loader = Loader(training_dataset, batch_size=batch_size, num_workers=num_workers, pin_memory=True, device="cuda:0")
validation_loader = Loader(validation_dataset, batch_size=batch_size, num_workers=num_workers, pin_memory=True, device="cuda:0")
# +
# model, and put to device
preprocess = PreprocessLayer()
model = Classifier(pretrained=False)
ckpt = torch.load(checkpoint)
model.load_state_dict(ckpt["state_dict"])
model = model.to(device)
# # optimizer and lr scheduler
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# lr_scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, 0.5)
writer = SummaryWriter(log_dir)
iteration = 0
min_validation_loss = 1000
# -
_, (events, labels) = next(enumerate(validation_loader))
np.shape(events)
labels
# optimizer.zero_grad()
t = preprocess(events)
pred_labels, representation = model(events, t)
loss, accuracy = cross_entropy_loss_and_accuracy(pred_labels, labels)
# loss.backward()
# optimizer.step()
pred_labels.argmax(dim=1)
np.shape(pred_labels)
labels
representation_vizualization = create_image(representation)
# writer.add_image("training/representation", representation_vizualization, iteration)
np.shape(representation)
def create_image2(representation):
B, C, H, W = representation.shape
representation = representation.view(B, 3, C // 3, H, W).sum(2)
# do robust min max norm
representation = representation.detach().cpu()
robust_max_vals = percentile(representation, 99)
robust_min_vals = percentile(representation, 1)
representation = (representation - robust_min_vals)/(robust_max_vals - robust_min_vals)
representation = torch.clamp(255*representation, 0, 255).byte()
return representation
representation_vizualization2 = create_image2(representation)
np.shape(representation_vizualization2)
# +
def get_axis(axarr, H, W, i, j):
H, W = H - 1, W - 1
if not (H or W):
ax = axarr
elif not (H and W):
ax = axarr[max(i, j)]
else:
ax = axarr[i][j]
return ax
def show_image_row(xlist, ylist=None, fontsize=12, size=(2.5, 2.5), tlist=None, filename=None):
H, W = len(xlist), len(xlist[0])
fig, axarr = plt.subplots(H, W, figsize=(size[0] * W, size[1] * H))
for w in range(W):
for h in range(H):
ax = get_axis(axarr, H, W, h, w)
ax.imshow(xlist[h][w].permute(1, 2, 0))
ax.xaxis.set_ticks([])
ax.yaxis.set_ticks([])
ax.xaxis.set_ticklabels([])
ax.yaxis.set_ticklabels([])
if ylist and w == 0:
ax.set_ylabel(ylist[h], fontsize=fontsize)
if tlist:
ax.set_title(tlist[h][w], fontsize=fontsize)
if filename is not None:
plt.savefig(filename, bbox_inches='tight')
plt.show()
# -
show_image_row([representation_vizualization2.cpu()])
np.shape(representation_vizualization)
# +
import torch
import torch.nn.functional as F
src = torch.arange(25, dtype=torch.float).reshape(1, 1, 5, 5).requires_grad_() # 1 x 1 x 5 x 5 with 0 ... 25
indices = torch.tensor([[-1, -1], [0, 0]], dtype=torch.float).reshape(1, 1, -1, 2) # 1 x 1 x 2 x 2
output = F.grid_sample(src, indices)
print(output) # tensor([[[[ 0., 12.]]]])
# -
src
indices
output
| visualize_voxel.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.5 32-bit ('venv')
# name: python385jvsc74a57bd08d96d0c19a8d9e97d86a45657989fc2bea3cd5f320f1262eb2458e6407e3afc8
# ---
# + tags=[]
import pandas as pd
from numpy.core.numeric import NaN
from bsbetl import g
from bsbetl.ov_calcs import ov_columns
csv_file = '\\BsbEtl\\OUT\\By_Share\\A0Hl8N.ETR\\A0Hl8N.ETR.CSV'
df_trades = pd.read_csv(csv_file, index_col='date_time',
parse_dates=True, infer_datetime_format=True)
#print(df_trades.head(2))
df_early = df_trades.between_time('00:00:00','08:59:59')
df_early_daily = df_early.resample('D', label='left', origin='start_day').agg({'price': 'mean', 'volume': 'sum'}).pad()
df_late = df_trades.between_time('17:36:00','23:59:59')
df_late_daily = df_late.resample('D', label='left', origin='start_day').agg({'price': 'mean', 'volume': 'sum'}).pad()
df_trades = df_trades.between_time('09:00:00', '17:35')
df_nz = df_trades[df_trades['price'] > 0]
#print(df_nz['price'].head(2))
#print(df_nz.iat[0,0])
#print(df_nz.iat[0,0])
row = df_nz.iloc[0]
print(row['volume'])
# and append the consolidated early / late trades to
# the opening / closing minutes of each day
# for idx,row in df_early_daily.iterrows():
# row.name = idx + pd.offsets.Hour(9)
# #print(row)
# df_trades = df_trades.append(row,ignore_index=False)
# #print(df_trades.tail(5))
# for idx,row in df_late_daily.iterrows():
# row.name = idx + pd.offsets.Minute(17*60+35)
#print(row.name)
#df_trades = df_trades.append(row,ignore_index=False)
#print(df_trades.tail(2))
# -
| bsbetl/notebooks/scratchpad4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="7TIu55xv_yax"
import pandas as pd
url='hamilton-air-quality.csv'
data = pd.read_csv(url,sep=",")
# + id="GVWdojQ9A2y4"
# to explicitly convert the date column to type DATETIME
data['Date'] = pd.to_datetime(data['date'])
data = data.set_index('Date')
# + colab={"base_uri": "https://localhost:8080/", "height": 235} id="rBOf_GvHBE-m" outputId="5cc6de18-bd09-4ce0-bbb6-4ac02282ce60"
data.head()
# + id="-I-QypWABbSI"
import numpy as np
import sklearn.metrics as metrics
def regression_results(y_true, y_pred):
# Regression metrics
explained_variance=metrics.explained_variance_score(y_true, y_pred)
mean_absolute_error=metrics.mean_absolute_error(y_true, y_pred)
mse=metrics.mean_squared_error(y_true, y_pred)
mean_squared_log_error=metrics.mean_squared_log_error(y_true, y_pred)
median_absolute_error=metrics.median_absolute_error(y_true, y_pred)
r2=metrics.r2_score(y_true, y_pred)
print('explained_variance: ', round(explained_variance,4))
print('mean_squared_log_error: ', round(mean_squared_log_error,4))
print('r2: ', round(r2,4))
print('MAE: ', round(mean_absolute_error,4))
print('MSE: ', round(mse,4))
print('RMSE: ', round(np.sqrt(mse),4))
# + id="YxiMgujqCS8R"
x = data[' pm25']
# + colab={"base_uri": "https://localhost:8080/"} id="3V5Y2-hTBpt_" outputId="19aa9cb1-5c9d-4e18-f35f-1cb4549c4a06"
# creating new dataframe from pm2.5 column
data_pm25 = data[[' pm25']]
# inserting new column with yesterday's pm2.5 values
data_pm25.loc[:,'Yesterday'] = data_pm25.loc[:,' pm25'].shift()
# inserting another column with difference between yesterday and day before yesterday's pm2.5 values.
data_pm25.loc[:,'Yesterday_Diff'] = data_pm25.loc[:,'Yesterday'].diff()
# dropping NAs
data_pm25 = data_pm25.dropna()
# + colab={"base_uri": "https://localhost:8080/", "height": 235} id="6KRn2f-0CmEY" outputId="a61ca55e-b9ea-4bc0-c4a2-86520dcf4d5d"
data_pm25.tail()
# + colab={"base_uri": "https://localhost:8080/", "height": 235} id="T2in-q56Cy1f" outputId="8bd5e875-f62a-4fbb-f6b2-8b6f5de577a4"
data_pm25.head()
# + id="IdoEYV8sC7bx"
X_train = data_pm25[:'2014'].drop([' pm25'], axis = 1)
y_train = data_pm25.loc[:'2014', ' pm25']
X_test = data_pm25['2019'].drop([' pm25'], axis = 1)
y_test = data_pm25.loc['2019', ' pm25']
# + colab={"base_uri": "https://localhost:8080/", "height": 235} id="aC6WrH3PEGfh" outputId="40478a68-0299-4c86-ec2c-012e0f23b730"
X_train.head()
# + id="Vv32eI1KEqIf"
from sklearn.model_selection import TimeSeriesSplit, cross_val_score
import matplotlib.pyplot as plt
# %matplotlib inline
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="8yixoFYBEvwY" outputId="c0d04017-1a65-4374-b4a7-f2859725a597"
# Spot Check Algorithms
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.svm import SVR, LinearSVR
from sklearn.neighbors import KNeighborsRegressor
models = []
models.append(('LR', LinearRegression()))
models.append(('NN', MLPRegressor(solver = 'lbfgs'))) #neural network
models.append(('KNN', KNeighborsRegressor()))
models.append(('RF', RandomForestRegressor(n_estimators = 10))) # Ensemble method - collection of many decision trees
models.append(('SVR', SVR(gamma='auto'))) # kernel = linear
# Evaluate each model in turn
results = []
names = []
for name, model in models:
# TimeSeries Cross validation
tscv = TimeSeriesSplit(n_splits=10)
cv_results = cross_val_score(model, X_train, y_train, cv=tscv, scoring='r2')
results.append(cv_results)
names.append(name)
print('%s: %f (%f)' % (name, cv_results.mean(), cv_results.std()))
# Compare Algorithms
plt.boxplot(results, labels=names)
plt.title('Algorithm Comparison')
plt.show()
# + id="lqg-Z5aTHWZv"
from sklearn.metrics import make_scorer
import numpy as np
def rmse(actual, predict):
predict = np.array(predict)
actual = np.array(actual)
distance = predict - actual
square_distance = distance ** 2
mean_square_distance = square_distance.mean()
score = np.sqrt(mean_square_distance)
return score
rmse_score = make_scorer(rmse, greater_is_better = False)
# + id="wwXK4JLMHCWS"
from sklearn.model_selection import GridSearchCV
model = RandomForestRegressor()
param_search = {
'n_estimators': [20, 50, 100],
'max_features': ['auto', 'sqrt', 'log2'],
'max_depth' : [i for i in range(5,15)]
}
tscv = TimeSeriesSplit(n_splits=10)
gsearch = GridSearchCV(estimator=model, cv=tscv, param_grid=param_search, scoring = rmse_score)
gsearch.fit(X_train, y_train)
best_score = gsearch.best_score_
best_model = gsearch.best_estimator_
# + colab={"base_uri": "https://localhost:8080/"} id="0-gUgpVXIA1f" outputId="4603ba47-db32-4d63-b71c-05111bbe6ad0"
y_true = y_test.values
y_pred = best_model.predict(X_test)
regression_results(y_true, y_pred)
| Backend/mackhacksairquality.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 4. Classifying the P300
#
# The first tutorial covered visualizing the P300 potential through an ERP plot. This tutorial covers the classification of the P300 potential. The EEG recording used here was made from a subject who was presented with a screen containing 6 icons. These icons were highlighted one by one. For each trial, each icon was highlighted a total of 10 times. The subject selected one of the icons and mentally counted the number of times the chosen icon was highlighted (which was of course always 10), a task designed to keep him focused on this icon. Every time the chosen icon, which I will from now on refer to as the target, was highlighted, a P300 potential occurred in the EEG signal. By determining which of the 6 icons corresponds to the largest P300, we can determine which of the icons was the target. This paradigm is a simple version of the famous P300 speller [1].
#
# [1] <NAME>., & <NAME>. (1988). Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials. *Electroencephalography and clinical neurophysiology*, 70(6), 510–523, http://www.ncbi.nlm.nih.gov/pubmed/2461285
# %pylab inline
# The data is stored on the virtual server.
# Loading it should look very familiar by now:
# +
import scipy.io
m = scipy.io.loadmat('data/tutorial4-01.mat')
EEG = m['EEG']
channel_names = [s.strip() for s in m['channel_names']]
event_onsets = m['event_onsets']
event_codes = m['event_codes']
targets = m['targets'][0] - 1 # -1 because the original list was 1-6, but numpy indexing is 0-5
sample_rate = m['sample_rate'][0][0]
ntrials = len(targets)
classes = unique(targets)
nclasses = len(classes)
nrepetitions = event_onsets.shape[1] // nclasses
nchannels = len(channel_names)
print('Duration of recording is', EEG.shape[1] / float(sample_rate), 'seconds.')
print('Number of EEG channels:', nchannels)
print()
print('Number of trials:', ntrials)
print('Target icon for each trial:', targets)
print('Number of icons on the screen:', nclasses)
print('Number of times each icon was highlighted:', nrepetitions)
print('Shape of event matrix:', event_onsets.shape, 'ntrials x (nclasses * nrepetitions)')
# -
# Cutting the data into trials. This time, it becomes a 5 dimensional array. Take a look at the resulting dimensions reading the following description:
#
# There are 12 trials. During each of these trials, data was collected for each of the 6 icons on the screen. Each icon was highlighted 10 times. Each moment at which an icon was highlighted marks the onset of an epoch. For each epoch, the time interval from 0.1 s *before* the onset until 1 s *after* the onset is extracted (1126 samples). The recording contains 32 channels.
# +
window = [int(-0.1*sample_rate), int(1.0*sample_rate)]
nsamples = window[1] - window[0]
trials = np.zeros((nchannels, nsamples, nrepetitions, nclasses, ntrials))
for trial in range(ntrials):
for cl in classes:
onsets = event_onsets[trial, event_codes[trial,:] == (cl + 1)]
for repetition, onset in enumerate(onsets):
trials[:, :, repetition, cl, trial] = EEG[:, window[0]+onset:window[1]+onset]
print('shape of trial matrix:', trials.shape)
# -
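# For example (added for clarity), a single epoch (all channels and samples for repetition 0 of icon 2 in trial 5) can be sliced out of this 5-dimensional array as follows:
single_epoch = trials[:, :, 0, 2, 5]
print('shape of a single epoch:', single_epoch.shape)  # (nchannels, nsamples)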
# During the first tutorial, the EEG signal was already filtered in advance. This data is not, so we do it here. The code below applies a bandpass filter with a passband between 0.5 - 30 Hz. Also, each epoch is baselined. The baseline in this case is the mean EEG voltage from 0.1 s before the onset of the epoch until the onset, which we regard as 'resting EEG'. This baseline is subtracted from the rest of the epoch, so the 'resting EEG' voltage is 0. Any changes relative to the resting EEG (such as the P300) are now relative to 0.
# +
import scipy.signal
# Design and apply the bandpass filter
a, b = scipy.signal.iirfilter(3, [0.5/(sample_rate/2.0), 30/(sample_rate/2.0)])
trials_filt = scipy.signal.filtfilt(a, b, trials, axis=1)
# Calculate the baseline amplitude on the first 0.1 seconds (this corresponds to the time interval -0.1 - 0)
baseline = mean(trials_filt[:, 0:int(0.1*sample_rate), ...], axis=1)
trials_filt = trials_filt - tile(baseline[:, np.newaxis, :, :], (1, nsamples, 1, 1, 1))
# -
# Since we'll be using machine learning, split the data into a train and a test set 50-50, like we did in the previous tutorial:
# +
train_split = 0.5
ntrain_trials = int(train_split * ntrials)
ntest_trials = ntrials - ntrain_trials
train = trials_filt[..., :ntrain_trials]
train_targets = targets[:ntrain_trials]
test = trials_filt[..., ntrain_trials:]
test_targets = targets[ntrain_trials:]
print('channels x samples x repetitions x classes x trials')
print('Training data:', train.shape)
print('Test data: ', test.shape)
# -
# The training data can be simplified a little bit. We don't care any longer which epoch belongs to which icon on the screen. We only care about epochs where the target was highlighted versus epochs where a nontarget was highlighted.
# +
target_trials = []
nontarget_trials = []
for trial in range(ntrain_trials):
for cl in range(nclasses):
if cl == train_targets[trial]:
target_trials.append( train[..., cl, trial] )
else:
nontarget_trials.append( train[..., cl, trial] )
# The shape of the data is now
# trials x channels x samples x repetitions
target_trials = array(target_trials)
nontarget_trials = array(nontarget_trials)
# Rearranging the axes a bit to
# channels x samples x repetitions x trials
target_trials = target_trials.transpose([1,2,3,0])
nontarget_trials = nontarget_trials.transpose([1,2,3,0])
print('channels x samples x repetitions x trials')
print(target_trials.shape)
print(nontarget_trials.shape)
# -
# Before attempting classification, it is wise to first visualize the data. We do this in the same manner as during tutorial 1 with an ERP plot. So we bring back the `plot_eeg` function with some small improvements:
# +
from matplotlib.collections import LineCollection
def plot_eeg(EEG, vspace=100, color='k'):
'''
Plot the EEG data, stacking the channels horizontally on top of each other.
Arguments:
EEG - Array (channels x samples) containing the EEG data
vspace - Amount of vertical space to put between the channels (default 100)
color - Color to draw the EEG in (default black)
'''
nchannels, nsamples = EEG.shape
bases = vspace * arange(nchannels)
EEG = EEG.T + bases
# Calculate a timeline in seconds, knowing that the extracted time interval was -0.1 - 1.0 seconds
time = arange(nsamples) / float(sample_rate)
time -= 0.1
# Plot EEG versus time as a line collection. This is a small improvement from the version in tutorial 1
# and is useful for creating a figure legend later on. By default in a legend, every line gets one entry.
# But in this EEG plot, multiple lines share the same entry, so we use a line collection.
traces = LineCollection([list(zip(time, EEG[:, channel])) for channel in range(nchannels)], colors=color)
gca().add_collection(traces)
# Set the y limits of the plot to leave some spacing at the top and bottom
ylim(-vspace, nchannels * vspace)
# Set the x limits
xlim(-0.1, 1.0)
# Add gridlines to the plot
grid(True)
# Label the axes
xlabel('Time (s)')
ylabel('Channels')
# The y-ticks are set to the locations of the electrodes. The international 10-20 system defines
# default names for them.
gca().yaxis.set_ticks(bases)
gca().yaxis.set_ticklabels(channel_names)
# Put a nice title on top of the plot
title('EEG data')
# -
# Using the `plot_eeg` function to plot the ERPs of both classes (targets versus nontargets):
# +
# First average over trials, then over repetitions
target_erp = mean(mean(target_trials, axis=3), axis=2)
nontarget_erp = mean(mean(nontarget_trials, axis=3), axis=2)
figure(figsize=(5,16))
plot_eeg(target_erp, color='b', vspace=10)
plot_eeg(nontarget_erp, color='r', vspace=10)
legend(['targets', 'non-targets'])
# -
# The familiar shape of the P300 is clearly visible on almost every channel.
#
# Now for the classification. Classifying the P300 is relatively simple. We start by extracting some relevant features from the data, which we will feed into the machine learning algorithm. The feature extraction will proceed as follows:
#
# 1. For each trial, average across the repetitions, creating one ERP for each of the 6 classes.
# 1. Select 7 channels which show a strong P300 in the training data (done manually here)
# 1. For each channel, extract the average voltage for 20 time windows.
#
# Now, each trial has $7 \times 20 = 140$ features.
#
# The procedure is implemented in the `extract_features` function below:
def extract_features(epoch):
'''
Extract features form an epoch for classification.
arguments:
epoch - An array (channels x samples x repetitions) containing the epoch to extract features from.
returns:
A flat array containing the features.
'''
# Collect the features into this list
features = []
# First average over repetitions
epoch = mean(epoch, axis=2)
# Extract channels of interest
channels_of_interest = ['Fz', 'C3', 'Cz', 'C4', 'Pz', 'P3', 'P4']
#channels_of_interest = channel_names
epoch = epoch[[channel_names.index(ch) for ch in channels_of_interest], :]
    # Finally, take the average value for 20 time windows
nwindows = 20
window_length = int(epoch.shape[1] / float(nwindows))
for channel in range(len(channels_of_interest)):
for window in range(nwindows):
feature = mean(epoch[channel, window*window_length:(window+1)*window_length])
features.append(feature)
return array(features)
# Applying the `extract_features` function to create the final training data:
# +
target_features = vstack([extract_features(target_trials[...,i]) for i in range(target_trials.shape[-1])])
nontarget_features = vstack([extract_features(nontarget_trials[...,i]) for i in range(nontarget_trials.shape[-1])])
print('observations x features')
print(target_features.shape)
print(nontarget_features.shape)
# -
# As a classifier, we bring back the LDA used in the previous tutorial:
# +
def train_lda(class1, class2):
'''
Trains the LDA algorithm.
arguments:
class1 - An array (observations x features) for class 1
class2 - An array (observations x features) for class 2
returns:
The projection matrix W
The offset b
'''
nclasses = 2
nclass1 = class1.shape[0]
nclass2 = class2.shape[0]
# Class priors: in this case, there are an unequal number of training
# examples for each class. There are 5 times as many nontarget trials
# as target trials.
prior1 = nclass1 / float(nclass1 + nclass2)
prior2 = nclass2 / float(nclass1 + nclass2)
mean1 = np.mean(class1, axis=0)
mean2 = np.mean(class2, axis=0)
class1_centered = class1 - mean1
class2_centered = class2 - mean2
# Calculate the covariance between the features
cov1 = class1_centered.T.dot(class1_centered) / (nclass1 - nclasses)
cov2 = class2_centered.T.dot(class2_centered) / (nclass2 - nclasses)
W = (mean2 - mean1).dot(np.linalg.pinv(prior1*cov1 + prior2*cov2))
b = (prior1*mean1 + prior2*mean2).dot(W)
return (W, b)
def apply_lda(test, W, b):
'''
Applies a previously trained LDA to new data.
arguments:
test - An array (observations x features) containing the data
W - The project matrix W as calculated by train_lda()
b - The offsets b as calculated by train_lda()
returns:
A list containing the classification result for each trial
'''
return test.dot(W) - b
# -
# The code below applies the LDA classifier to determine for each trial, which of the 6 icons corresponds to the largest P300 potential:
def classify(trials, W, b):
'''
Apply the LDA classifier to the test trials.
arguments:
trials - An array (channels x samples x repetitions x classes x trials) containing the test trials.
W - The weights W as returned by train_lda()
b - The offsets b as returned by train_lda()
returns:
A list containing the predicted target icon for each trial.
'''
nclasses = trials.shape[3]
ntrials = trials.shape[4]
predicted_targets = []
for trial in range(ntrials):
# Feature extraction
        features = vstack([extract_features(trials[:, :, :, cl, trial]) for cl in range(nclasses)])
# Classification
p = apply_lda(features, W, b)
# Determine icon with the highest P300
predicted_targets.append( argmin(p) )
return array(predicted_targets)
# Training the classifier on the training data, applying it on the test data:
# +
W, b = train_lda(target_features, nontarget_features)
predicted_targets = classify(test, W, b)
print('Predicted targets:', predicted_targets)
print('Real targets: ', test_targets)
print('Accuracy: %.2f' % (len(flatnonzero(predicted_targets == test_targets)) / float(ntest_trials)))
# -
# You see that with the first 6 trials as training data, we were able to correctly determine the target icon in the 6 remaining trials, using relatively simple techniques.
| eeg-bci/4. Classifying the P300.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# Compile Keras Models
# =====================
# **Author**: `<NAME> <https://Huyuwei.github.io/>`_
#
# This article is an introductory tutorial to deploy keras models with NNVM.
#
# To begin with, keras should be installed.
# Tensorflow is also required since it's used as the default backend of keras.
#
# A quick solution is to install via pip
#
# .. code-block:: bash
#
# pip install -U keras --user
# pip install -U tensorflow --user
#
# or please refer to official site
# https://keras.io/#installation
#
#
# +
import nnvm
import tvm
import keras
import numpy as np
def download(url, path, overwrite=False):
import os
if os.path.isfile(path) and not overwrite:
print('File {} exists, skip.'.format(path))
return
print('Downloading from url {} to {}'.format(url, path))
try:
import urllib.request
urllib.request.urlretrieve(url, path)
except:
import urllib
urllib.urlretrieve(url, path)
# -
# Load pretrained keras model
# ----------------------------
# We load a pretrained resnet-50 classification model provided by keras.
#
#
weights_url = ''.join(['https://github.com/fchollet/deep-learning-models/releases/',
'download/v0.2/resnet50_weights_tf_dim_ordering_tf_kernels.h5'])
weights_file = 'resnet50_weights.h5'
download(weights_url, weights_file)
keras_resnet50 = keras.applications.resnet50.ResNet50(include_top=True, weights=None,
input_shape=(224, 224, 3), classes=1000)
keras_resnet50.load_weights(weights_file)
# Load a test image
# ------------------
# A single cat dominates the examples!
#
#
from PIL import Image
from matplotlib import pyplot as plt
from keras.applications.resnet50 import preprocess_input
img_url = 'https://github.com/dmlc/mxnet.js/blob/master/data/cat.png?raw=true'
download(img_url, 'cat.png')
img = Image.open('cat.png').resize((224, 224))
plt.imshow(img)
plt.show()
# Preprocess the input and transpose from NHWC to NCHW layout
data = np.array(img)[np.newaxis, :].astype('float32')
data = preprocess_input(data).transpose([0, 3, 1, 2])
print('input_1', data.shape)
# Compile the model on NNVM
# --------------------------
# We should be familiar with the process now.
#
#
# convert the keras model(NHWC layout) to NNVM format(NCHW layout).
sym, params = nnvm.frontend.from_keras(keras_resnet50)
# compile the model
target = 'cuda'
shape_dict = {'input_1': data.shape}
with nnvm.compiler.build_config(opt_level=3):
graph, lib, params = nnvm.compiler.build(sym, target, shape_dict, params=params)
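# Note (not part of the original tutorial): if no CUDA-capable GPU is available,
# the same model can typically be compiled for CPU instead by switching the
# target, e.g.
#
#   target = 'llvm'
#   with nnvm.compiler.build_config(opt_level=3):
#       graph, lib, params = nnvm.compiler.build(sym, target, shape_dict, params=params)
#
# and then executed with ctx = tvm.cpu(0) in the section below.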
# Execute on TVM
# ---------------
# The process is no different from other examples.
#
#
from tvm.contrib import graph_runtime
ctx = tvm.gpu(0)
m = graph_runtime.create(graph, lib, ctx)
# set inputs
m.set_input('input_1', tvm.nd.array(data.astype('float32')))
m.set_input(**params)
# execute
m.run()
# get outputs
tvm_out = m.get_output(0)
top1_tvm = np.argmax(tvm_out.asnumpy()[0])
# Look up synset name
# -------------------
# Look up prediction top 1 index in 1000 class synset.
#
#
synset_url = ''.join(['https://gist.githubusercontent.com/zhreshold/',
'4d0b62f3d01426887599d4f7ede23ee5/raw/',
'596b27d23537e5a1b5751d2b0481ef172f58b539/',
'imagenet1000_clsid_to_human.txt'])
synset_name = 'synset.txt'
download(synset_url, synset_name)
with open(synset_name) as f:
synset = eval(f.read())
print('NNVM top-1 id: {}, class name: {}'.format(top1_tvm, synset[top1_tvm]))
# confirm correctness with keras output
keras_out = keras_resnet50.predict(data.transpose([0, 2, 3, 1]))
top1_keras = np.argmax(keras_out)
print('Keras top-1 id: {}, class name: {}'.format(top1_keras, synset[top1_keras]))
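# Optional cross-check (not part of the original tutorial): keras also ships an
# ImageNet label decoder, which should agree with the synset lookup above.
from keras.applications.resnet50 import decode_predictions
print('Keras decode_predictions:', decode_predictions(keras_out, top=1)[0])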
| exp/tvm_jupyter/nnvm/from_keras.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Create profile with multiple, blended Gaussians and added noise
# Store in format required for GaussPy
import numpy as np
import pickle
import os
def gaussian(amp, fwhm, mean):
return lambda x: amp * np.exp(-4. * np.log(2) * (x-mean)**2 / fwhm**2)
# Specify filename of output data
FILENAME = 'multiple_gaussians.pickle'
# Number of Gaussian functions per spectrum
NCOMPS = 3
# Component properties
AMPS = [3,2,1]
FWHMS = [20,50,40] # channels
MEANS = [0,250,300] # channels
# Data properties
RMS = 0.05
NCHANNELS = 512
# Initialize
data = {}
chan = np.arange(NCHANNELS)
errors = np.ones(NCHANNELS) * RMS
spectrum = np.random.randn(NCHANNELS) * RMS
# Create spectrum
for a, w, m in zip(AMPS, FWHMS, MEANS):
spectrum += gaussian(a, w, m)(chan)
# Enter results into AGD dataset
data['data_list'] = data.get('data_list', []) + [spectrum]
data['x_values'] = data.get('x_values', []) + [chan]
data['errors'] = data.get('errors', []) + [errors]
print(data['data_list'])
len(spectrum)
if os.path.exists(FILENAME):
    os.remove(FILENAME)
with open(FILENAME, 'wb') as handle:
pickle.dump(data, handle, protocol=pickle.HIGHEST_PROTOCOL)
import matplotlib.pyplot as plt
plt.plot(data['data_list'][0])
# +
import pandas as pd
import os
import pickle
df = pd.read_csv('test.csv', sep=',')
#print(myFile)
df.T.values[1]
df.T.values[0]
data = {}
import numpy
#start = 570
#end = -1
import numpy
def shrink(data, rows):
return data.reshape(rows, data.shape[0]//rows,).sum(axis=1)
newy=shrink(df.T.values[1],1000)
newx=df.T.values[0][range(0,len(df.T.values[0]),10)]
start = 0
newy=numpy.append(newy,[0,0,0,0])
end = len(newy)
print(newy[end-10:end]+[1e-5])
print(len(newx))
data['data_list'] = data.get('data_list', []) + [numpy.log10(newy[start+4:end]+1e-5)+5+1e-5]
#data['data_list'] = data.get('data_list', []) + [newy[start:end]+1e-5]
data['x_values'] = data.get('x_values', [])+ [50-newx[start:end]]
data['errors'] = data.get('errors', [])+[newy[start+4:end] * 0+1e-5]
print(len(data['data_list'][0]))
print(len(data['x_values'][0]))
print(len(data['errors'][0]))
FILENAME = 'multiple_gaussians.pickle'
if os.path.exists(FILENAME):
    os.remove(FILENAME)
with open(FILENAME, 'wb') as handle:
pickle.dump(data, handle, protocol=pickle.HIGHEST_PROTOCOL)
#pickle.dump(data, open(FILENAME, 'wb'))
import matplotlib.pyplot as plt
plt.plot(data['x_values'][0],data['data_list'][0])
# +
import pandas as pd
import os
import pickle
#df = pd.read_csv('bench90.csv', sep=',')
df = pd.read_csv('benchhigh.csv', sep=',')
#print(myFile)
df.values[1]
df.values[0]
data = {}
import numpy
#start = 570
#end = -1
import numpy
def shrink(data, rows):
return data.reshape(rows, data.shape[0]//rows,).sum(axis=1)
newy=shrink(df.values[1],1000)
newx=df.values[0][range(0,len(df.values[0]),1)]
start = 0
newy=numpy.append(newy,[0,0,0,0])
end = len(newy)
print(newy[end-10:end]+[1e-5])
print(len(newx))
data['data_list'] = data.get('data_list', []) + [numpy.log10(newy[start+4:end]+1e-5)+5+1e-5]
#data['data_list'] = data.get('data_list', []) + [newy[start:end]+1e-5]
data['x_values'] = data.get('x_values', [])+ [50-newx[start:end]]
data['errors'] = data.get('errors', [])+[newy[start+4:end] * 0+2e-5]
print(len(data['data_list'][0]))
print(len(data['x_values'][0]))
print(len(data['errors'][0]))
FILENAME = 'multiple_gaussians.pickle'
if os.path.exists(FILENAME):
    os.remove(FILENAME)
with open(FILENAME, 'wb') as handle:
pickle.dump(data, handle, protocol=pickle.HIGHEST_PROTOCOL)
#pickle.dump(data, open(FILENAME, 'wb'))
import matplotlib.pyplot as plt
plt.plot(data['x_values'][0],data['data_list'][0])
# +
import pandas as pd
import os
import pickle
df = pd.read_csv('bench90.csv', sep=',')
#df = pd.read_csv('benchhigh.csv', sep=',')
#print(myFile)
df.values[1]
df.values[0]
data = {}
import numpy
#start = 570
#end = -1
import numpy
def shrink(data, rows):
return data.reshape(rows, data.shape[0]//rows,).sum(axis=1)
newy=shrink(df.values[1],1000)
newx=df.values[0][range(0,len(df.values[0]),1)]
start = 0
newy=numpy.append(newy,[0,0,0,0])
end = len(newy)
print(newy[end-10:end]+[1e-5])
print(len(newx))
data['data_list'] = data.get('data_list', []) + [numpy.log10(newy[start+4:end]+1e-5)+5+1e-5]
#data['data_list'] = data.get('data_list', []) + [newy[start:end]+1e-5]
data['x_values'] = data.get('x_values', [])+ [50-newx[start:end]]
data['errors'] = data.get('errors', [])+[newy[start+4:end] * 0+2e-5]
print(len(data['data_list'][0]))
print(len(data['x_values'][0]))
print(len(data['errors'][0]))
FILENAME = 'multiple_gaussians.pickle'
if os.path.exists(FILENAME):
    os.remove(FILENAME)
with open(FILENAME, 'wb') as handle:
pickle.dump(data, handle, protocol=pickle.HIGHEST_PROTOCOL)
#pickle.dump(data, open(FILENAME, 'wb'))
import matplotlib.pyplot as plt
plt.plot(data['x_values'][0],data['data_list'][0])
# -
# +
# Decompose multiple Gaussian dataset using AGD
import pickle
import gausspy.gp as gp
import os
# Specify necessary parameters
alpha1 = 0.9
snr_thresh= 0.2
FILENAME_DATA = 'multiple_gaussians.pickle'
FILENAME_DATA_DECOMP = 'multiple_gaussians_decomposed.pickle'
# Load GaussPy
g = gp.GaussianDecomposer()
# Setting AGD parameters
g.set('phase', 'one')
g.set('SNR_thresh', [snr_thresh, snr_thresh])
g.set('alpha1', alpha1)
g.set('verbose',True)
# Run GaussPy
data_decomp = g.batch_decomposition(FILENAME_DATA)
if os.path.exists(FILENAME_DATA_DECOMP):
    os.remove(FILENAME_DATA_DECOMP)
# Save decomposition information
with open(FILENAME_DATA_DECOMP, 'wb') as handle:
pickle.dump(data_decomp, handle, protocol=pickle.HIGHEST_PROTOCOL)
#pickle.dump(data_decomp, open(FILENAME_DATA_DECOMP, 'wb'))
data_decomp
# +
# Plot GaussPy results
import numpy as np
import matplotlib.pyplot as plt
import pickle
import numpy
import math
from scipy.special import factorial
from gausspy.AGD_decomposer import gaussian
from gausspy.AGD_decomposer import gaussian2
#def gaussian(peak, FWHM, mean):
#return lambda x: numpy.where((x) < (mean),0,peak * np.exp(- ((x)-mean) / (FWHM)))
#return lambda x: np.where((x) < (mean),0,peak + np.exp(1/(math.tan((x)-mean)) * (FWHM)))
# return lambda x: np.where((x) < (mean),0,peak/((x-mean)**(1/FWHM) ))
#return lambda x: np.where((x)<mean, 0,peak / factorial((x-mean) / FWHM))
#return lambda x: np.where((-(x-mean) / FWHM + (peak))<=0,0,np.where(((x) <= (mean)),0,-(x-mean) / FWHM + (peak)))
def unravel(list):
return np.array([i for array in list for i in array])
FILENAME_DATA = 'multiple_gaussians.pickle'
FILENAME_DATA_DECOMP = 'multiple_gaussians_decomposed.pickle'
with open(FILENAME_DATA,'rb') as file_object:
data = pickle.load(file_object)
spectrum = data['data_list'][0]
chan = data['x_values'][0]
errors = data['errors'][0]
with open(FILENAME_DATA_DECOMP,'rb') as file_object:
data_decomp = pickle.load(file_object)
#data_decomp = pickle.load(open(FILENAME_DATA_DECOMP))
means_fit = unravel(data_decomp['means_fit'])
amps_fit = unravel(data_decomp['amplitudes_fit'])
fwhms_fit = unravel(data_decomp['fwhms_fit'])
fig = plt.figure()
ax = fig.add_subplot(111)
model = np.zeros(len(chan))
for j in range(len(means_fit)):
component = gaussian(amps_fit[j], fwhms_fit[j], means_fit[j])(chan)
model += component
ax.plot(chan, component, color='blue', lw=1.5)
#if means_fit[j] == max(means_fit):
# component = gaussian2(amps_fit[j], fwhms_fit[j], means_fit[j])(chan)
# model += component
# ax.plot(chan, component, color='purple', lw=1.5)
#else:
# component = gaussian(amps_fit[j], fwhms_fit[j], means_fit[j])(chan)
# model += component
ax.plot(chan, spectrum, label='Data', color='black', linewidth=1.5)
ax.plot(chan, model, label = 'Sum of exps', color='red', linewidth=1.5)
#ax.plot(chan, errors, label = 'Errors', color='green', linestyle='dashed', linewidth=2.)
ax.set_xlabel('Energy Loss')
ax.set_ylabel('Amplitude')
#ax.set_xlim(0,len(chan))
ax.set_ylim(0,np.max(spectrum)+1)
ax.legend(loc=1)
chan
plt.show()
fig = plt.figure()
ax = fig.add_subplot(111)
start=570
end=len(spectrum)
model = np.zeros(len(chan[start:end]))
for j in range(len(means_fit)):
component = gaussian(amps_fit[j], fwhms_fit[j], means_fit[j])(chan[start:end])
model += component
ax.plot(chan[start:end], component, color='blue', lw=1.5)
ax.plot(chan[start:end], spectrum[start:end], label='Data', color='black', linewidth=1.)
ax.plot(chan[start:end], model, label = 'Sum of exps', color='red', linewidth=1.)
#ax.plot(chan, errors, label = 'Errors', color='green', linestyle='dashed', linewidth=2.)
ax.set_xlabel('Energy Loss')
ax.set_ylabel('Amplitude')
#ax.set_xlim(0,len(chan))
ax.set_ylim(0,np.max(model)+1)
ax.legend(loc=1)
chan
plt.show()
print(spectrum[-8:])
print(chan[-8:])
# +
# Decompose multiple Gaussian dataset using AGD
import pickle
import gausspy.gp as gp
import os
# Specify necessary parameters
alpha1 = 0.000001
snr_thresh = 1.5
FILENAME_DATA = 'multiple_gaussians.pickle'
FILENAME_DATA_DECOMP = 'multiple_gaussians_decomposed.pickle'
# Load GaussPy
g = gp.GaussianDecomposer()
# Setting AGD parameters
g.set('phase', 'one')
g.set('SNR_thresh', [snr_thresh, snr_thresh])
g.set('alpha1', alpha1)
# Run GaussPy
data_decomp = g.batch_decomposition(FILENAME_DATA)
if os.path.exists(FILENAME_DATA_DECOMP):
    os.remove(FILENAME_DATA_DECOMP)
# Save decomposition information
with open(FILENAME_DATA_DECOMP, 'wb') as handle:
pickle.dump(data_decomp, handle, protocol=pickle.HIGHEST_PROTOCOL)
#pickle.dump(data_decomp, open(FILENAME_DATA_DECOMP, 'wb'))
data_decomp
# +
# Plot GaussPy results
import numpy as np
import matplotlib.pyplot as plt
import pickle
import numpy
import math
from scipy.special import factorial
from gausspy.AGD_decomposer import gaussian
#def gaussian(peak, FWHM, mean):
#return lambda x: numpy.where((x) < (mean),0,peak * np.exp(- ((x)-mean) / (FWHM)))
#return lambda x: np.where((x) < (mean),0,peak + np.exp(1/(math.tan((x)-mean)) * (FWHM)))
# return lambda x: np.where((x) < (mean),0,peak/((x-mean)**(1/FWHM) ))
#return lambda x: np.where((x)<mean, 0,peak / factorial((x-mean) / FWHM))
#return lambda x: np.where((-(x-mean) / FWHM + (peak))<=0,0,np.where(((x) <= (mean)),0,-(x-mean) / FWHM + (peak)))
def unravel(list):
return np.array([i for array in list for i in array])
FILENAME_DATA = 'multiple_gaussians.pickle'
FILENAME_DATA_DECOMP = 'multiple_gaussians_decomposed.pickle'
with open(FILENAME_DATA,'rb') as file_object:
data = pickle.load(file_object)
spectrum = data['data_list'][0]
chan = data['x_values'][0]
errors = data['errors'][0]
with open(FILENAME_DATA_DECOMP,'rb') as file_object:
data_decomp = pickle.load(file_object)
#data_decomp = pickle.load(open(FILENAME_DATA_DECOMP))
means_fit = unravel(data_decomp['means_fit'])
amps_fit = unravel(data_decomp['amplitudes_fit'])
fwhms_fit = unravel(data_decomp['fwhms_fit'])
fig = plt.figure()
ax = fig.add_subplot(111)
model = np.zeros(len(chan))
for j in range(len(means_fit)):
component = gaussian(amps_fit[j], fwhms_fit[j], means_fit[j])(chan)
model += component
ax.plot(chan, component, color='purple', lw=1.5)
ax.plot(chan, spectrum, label='Data', color='black', linewidth=1.5)
ax.plot(chan, model, label = 'Sum of exps', color='red', linewidth=1.5)
#ax.plot(chan, errors, label = 'Errors', color='green', linestyle='dashed', linewidth=2.)
ax.set_xlabel('Energy Loss')
ax.set_ylabel('Amplitude')
#ax.set_xlim(0,len(chan))
ax.set_ylim(0,np.max(spectrum)+1)
ax.legend(loc=1)
chan
plt.show()
fig = plt.figure()
ax = fig.add_subplot(111)
start=570
end=len(spectrum)
model = np.zeros(len(chan[start:end]))
for j in range(len(means_fit)):
component = gaussian(amps_fit[j], fwhms_fit[j], means_fit[j])(chan[start:end])
model += component
ax.plot(chan[start:end], component, color='purple', lw=1.5)
ax.plot(chan[start:end], spectrum[start:end], label='Data', color='black', linewidth=1.5)
ax.plot(chan[start:end], model, label = 'Sum of exps', color='red', linewidth=1.5)
#ax.plot(chan, errors, label = 'Errors', color='green', linestyle='dashed', linewidth=2.)
ax.set_xlabel('Energy Loss')
ax.set_ylabel('Amplitude')
#ax.set_xlim(0,len(chan))
ax.set_ylim(0,np.max(model)+1)
ax.legend(loc=1)
chan
plt.show()
# +
# Decompose multiple Gaussian dataset using AGD
import pickle
import gausspy.gp as gp
import os
# Specify necessary parameters
alpha1 = 0.001
snr_thresh = 3
FILENAME_DATA = 'multiple_gaussians.pickle'
FILENAME_DATA_DECOMP = 'multiple_gaussians_decomposed.pickle'
# Load GaussPy
g = gp.GaussianDecomposer()
# Setting AGD parameters
g.set('phase', 'one')
g.set('SNR_thresh', [snr_thresh, snr_thresh])
g.set('alpha1', alpha1)
# Run GaussPy
data_decomp = g.batch_decomposition(FILENAME_DATA)
if os.path.exists(FILENAME_DATA_DECOMP):
    os.remove(FILENAME_DATA_DECOMP)
# Save decomposition information
with open(FILENAME_DATA_DECOMP, 'wb') as handle:
pickle.dump(data_decomp, handle, protocol=pickle.HIGHEST_PROTOCOL)
#pickle.dump(data_decomp, open(FILENAME_DATA_DECOMP, 'wb'))
data_decomp
# +
# Plot GaussPy results
import numpy as np
import matplotlib.pyplot as plt
import pickle
import numpy
import math
from scipy.special import factorial
from gausspy.AGD_decomposer import gaussian
#def gaussian(peak, FWHM, mean):
#return lambda x: numpy.where((x) < (mean),0,peak * np.exp(- ((x)-mean) / (FWHM)))
#return lambda x: np.where((x) < (mean),0,peak + np.exp(1/(math.tan((x)-mean)) * (FWHM)))
# return lambda x: np.where((x) < (mean),0,peak/((x-mean)**(1/FWHM) ))
#return lambda x: np.where((x)<mean, 0,peak / factorial((x-mean) / FWHM))
#return lambda x: np.where((-(x-mean) / FWHM + (peak))<=0,0,np.where(((x) <= (mean)),0,-(x-mean) / FWHM + (peak)))
def unravel(list):
return np.array([i for array in list for i in array])
FILENAME_DATA = 'multiple_gaussians.pickle'
FILENAME_DATA_DECOMP = 'multiple_gaussians_decomposed.pickle'
with open(FILENAME_DATA,'rb') as file_object:
data = pickle.load(file_object)
spectrum = data['data_list'][0]
chan = data['x_values'][0]
errors = data['errors'][0]
with open(FILENAME_DATA_DECOMP,'rb') as file_object:
data_decomp = pickle.load(file_object)
#data_decomp = pickle.load(open(FILENAME_DATA_DECOMP))
means_fit = unravel(data_decomp['means_fit'])
amps_fit = unravel(data_decomp['amplitudes_fit'])
fwhms_fit = unravel(data_decomp['fwhms_fit'])
fig = plt.figure()
ax = fig.add_subplot(111)
model = np.zeros(len(chan))
for j in range(len(means_fit)):
component = gaussian(amps_fit[j], fwhms_fit[j], means_fit[j])(chan)
model += component
ax.plot(chan, component, color='purple', lw=1.5)
ax.plot(chan, spectrum, label='Data', color='black', linewidth=1.5)
ax.plot(chan, model, label = 'Sum of exps', color='red', linewidth=1.5)
#ax.plot(chan, errors, label = 'Errors', color='green', linestyle='dashed', linewidth=2.)
ax.set_xlabel('Energy Loss')
ax.set_ylabel('Amplitude')
#ax.set_xlim(0,len(chan))
ax.set_ylim(0,np.max(spectrum)+1)
ax.legend(loc=1)
chan
plt.show()
fig = plt.figure()
ax = fig.add_subplot(111)
start=570
end=len(spectrum)
model = np.zeros(len(chan[start:end]))
for j in range(len(means_fit)):
component = gaussian(amps_fit[j], fwhms_fit[j], means_fit[j])(chan[start:end])
model += component
ax.plot(chan[start:end], component, color='purple', lw=1.5)
ax.plot(chan[start:end], spectrum[start:end], label='Data', color='black', linewidth=1.5)
ax.plot(chan[start:end], model, label = 'Sum of exps', color='red', linewidth=1.5)
#ax.plot(chan, errors, label = 'Errors', color='green', linestyle='dashed', linewidth=2.)
ax.set_xlabel('Energy Loss')
ax.set_ylabel('Amplitude')
#ax.set_xlim(0,len(chan))
ax.set_ylim(0,np.max(model)+1)
ax.legend(loc=1)
chan
plt.show()
# +
from gausspy import tvdiff
import matplotlib.pyplot as plt
import numpy as np
spectrum = data['data_list'][0]
chan = data['x_values'][0]
errors = data['errors'][0]
dv=np.abs(chan[1]-chan[0])
print(dv)
alpha = 1
u = tvdiff.TVdiff(spectrum,dx=dv,alph=alpha)
u2 = tvdiff.TVdiff(u,dx=dv,alph=alpha)
u3 = tvdiff.TVdiff(u2,dx=dv,alph=alpha)
u4 = tvdiff.TVdiff(u3,dx=dv,alph=alpha)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(chan, spectrum, label='Data', color='black', linewidth=1.5)
ax.plot(chan, u, label='u', color='red', linewidth=1.5)
ax.legend(loc=1)
start=570
end=len(chan)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(chan[start:end], spectrum[start:end], label='Data', color='black', linewidth=1.5)
ax.plot(chan[start:end], u[start:end], label='u', color='red', linewidth=1.5)
ax.legend(loc=1)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(chan, spectrum, label='Data', color='black', linewidth=1.5)
ax.plot(chan, u2/20,label='u2', color='red', linewidth=1.5)
ax.legend(loc=1)
start=570
end=len(chan)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(chan[start:end], spectrum[start:end], label='Data', color='black', linewidth=1.5)
ax.plot(chan[start:end], u2[start:end]/20, label='u2', color='red', linewidth=1.5)
ax.legend(loc=1)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(chan, spectrum, label='Data', color='black', linewidth=1.5)
ax.plot(chan, u3/100, label='u3', color='red', linewidth=1.5)
ax.legend(loc=1)
start=500
end=len(chan)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(chan[start:end], spectrum[start:end], label='Data', color='black', linewidth=1.5)
#ax.plot(chan[start+1:end], u3[start+1:end], label='Data', color='red', linewidth=1.5)
ax.plot(chan[start:end], u3[start:end]/100, label='u3', color='blue', linewidth=1.5)
ax.legend(loc=1)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(chan, spectrum, label='Data', color='black', linewidth=1.5)
ax.plot(chan, u4/1000, label='u4', color='red', linewidth=1.5)
ax.legend(loc=1)
start=500
end=len(chan)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(chan[start:end], spectrum[start:end], label='Data', color='black', linewidth=1.5)
#ax.plot(chan[start+1:end], u3[start+1:end], label='Data', color='red', linewidth=1.5)
ax.plot(chan[start:end], u4[start:end]/1000, label='u4', color='blue', linewidth=1.5)
ax.legend(loc=1)
fig = plt.figure()
ax = fig.add_subplot(111)
mask4 = np.array((u2.copy()[1:] < -2), dtype="int") # Negative second derivative
mask1 = np.array((np.diff(np.sign(u3))>0), dtype="int") # Positive zero-crossing of the third derivative
print(len(mask4))
print(mask4[start:end-1])
print(mask1)
print(mask4*mask1)
ax.plot(chan, spectrum, label='Data', color='black', linewidth=1.5)
ax.plot(chan, mask4, label='Data', color='red', linewidth=1.5)
ax.legend(loc=1)
start=500
end=len(chan)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(chan[start:end], spectrum[start:end], label='Data', color='black', linewidth=1.5)
#ax.plot(chan[start:end], mask4[start:end], label='Data', color='red', linewidth=1.5)
#ax.plot(chan[start+1:end], np.diff(np.sign(u3[start:end]))<0, label='Data', color='blue', linewidth=1.5)
ax.plot(chan[start+1:end], mask4[start+1:end]+(np.diff(np.sign(u3[start:end]))>0), label='Data', color='green', linewidth=1.5)
ax.legend(loc=1)
fig = plt.figure()
ax = fig.add_subplot(111)
#ax.plot(chan[start:end], spectrum[start:end], label='Data', color='black', linewidth=1.5)
ax.plot(chan[start+1:end], mask4[start+1:end], label='Data', color='red', linewidth=1.5)
ax.plot(chan[start+1:end], np.diff(np.sign(u4[start:end]))<0, label='Data', color='blue', linewidth=1.5)
#ax.plot(chan[start+1:end], mask4[start+1:end]*(np.diff(np.sign(u3[start:end]))<0), label='Data', color='green', linewidth=1.5)
ax.legend(loc=1)
start=500
end=len(chan)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(chan[start:end], spectrum[start:end], label='Data', color='pink', linewidth=1.5)
ax.plot(chan[start:end], u[start:end]/5, label='u', color='red', linewidth=1.)
ax.plot(chan[start:end], u2[start:end]/20, label='u2', color='green', linewidth=1.)
ax.plot(chan[start:end], u3[start:end]/100, label='u3', color='blue', linewidth=1.)
ax.plot(chan[start:end], u4[start:end]/1000, label='u4', color='black', linewidth=1.)
ax.legend(loc=1)
# +
import numpy as np
import matplotlib.pyplot as plt
import pickle
import numpy
import math
from scipy.special import factorial
def gaussian(peak, FWHM, mean):
"""Return a Gaussian function
"""
#return lambda x: -peak * (x-mean-1)**21
#return lambda x: -peak* ((x/FWHM) -mean-1)**21
return lambda x: np.where((-peak*(x/FWHM-mean-1)**21)<=0,0,np.where(((x) < (mean)),0,-peak*(x/FWHM-mean-1)**21))
xnums = np.linspace(1,6,1000)
ynums= gaussian(3,0.5,5)(xnums)
plt.plot(xnums,ynums)
max(ynums)
#print(5*1)
gaussian(5,5,1)(np.array([1]))
| testing/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Saugatkafley/Federated-Learning/blob/main/federated_learning_Maths_Handwritten.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="tL1cV9R2Ymmo"
# ## Mounting Kaggle Dataset Maths Handwritten.
# First you have to upload the `kaggle.json` file, which can be obtained from `kaggle.com` -> `Account Settings`.
# Scroll down a bit and click `Create New API Token`.
# Then upload the downloaded file to the working directory.
# + id="89TIcP1CYfLX"
# !pip install kaggle
# + [markdown] id="rU8sVujueF-y"
# Making a directory --> `~/.kaggle`
#
# Copying `kaggle.json` to the directory
#
# Changing the permission to `600`.
# + id="Vw8EJD5GZltR"
# ! mkdir ~/.kaggle
# ! cp kaggle.json ~/.kaggle/
# ! chmod 600 ~/.kaggle/kaggle.json
# + id="QolI7FIgAWTg"
# # ! kaggle datasets download -d sagyamthapa/handwritten-math-symbols
# ! kaggle datasets download -d xainano/handwrittenmathsymbols
# # !unzip mnistasjpg.zip
# ! unzip handwrittenmathsymbols.zip
# ! sudo apt install unrar
# ! unrar x data.rar /content/data/
# + colab={"base_uri": "https://localhost:8080/"} id="8xSJL0mpC30I" outputId="ca3eb5fd-e4df-41b8-e60c-112a53559a09"
# ! pip install split-folders
import splitfolders
input_folder = '/content/data/extracted_images/'
splitfolders.ratio(
input_folder,
output = "dataset",
seed =42,
ratio = (.2,.8),
)
# + [markdown] id="ekGOiwyNO8mF"
# ## Importing Libraries
# + colab={"base_uri": "https://localhost:8080/"} id="pXb8S1S2YajE" outputId="22c15f33-7527-491b-a7e5-d709ed16aab5"
import numpy as np
import pandas as pd
import random
import cv2
import os
from tqdm import tqdm
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
from sklearn.metrics import accuracy_score
import tensorflow as tf
from tensorflow import expand_dims
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Input, Lambda
from tensorflow.keras.layers import MaxPooling2D
from tensorflow.keras.layers import Activation
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD
from tensorflow.keras import backend as K
# !pip install imutils
from imutils import paths
# + colab={"base_uri": "https://localhost:8080/", "height": 281} id="r950km2A-l0w" outputId="6c37e1f7-66d0-47f6-a466-519ade52e565"
# import cv2
# import matplotlib.pyplot as plt
# # img = cv2.imread("5.png" )
# img = cv2.imread("/content/dataset/train/sum/exp100626.jpg")
# # img_copy = img.copy()
# # img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# plt.imshow(img, cmap = 'gray')
# plt.title("Image--Sum")
# plt.show()
# + [markdown] id="qx8f_JkQEDu6"
# ## Training Dataset Preprocessing.
# + id="r9aOeT09Yajf"
def load(paths, verbose=-1):
    '''expects images for each class in a separate dir,
    e.g. all digits of class 0 in the directory named 0 '''
data = list()
labels = list()
# loop over the input images
for (i, imgpath) in enumerate(paths):
# load the image and extract the class labels
image = cv2.imread(imgpath )
# print(image.shape)
label = imgpath.split(os.path.sep)[-2]
# scale the image to [0, 1] and add to list
data.append(image/255)
labels.append(label)
return data, labels
def create_clients(image_list, label_list, num_clients=10, initial='clients'):
''' return: a dictionary with keys clients' names and value as
data shards - tuple of images and label lists.
args:
image_list: a list of numpy arrays of training images
            label_list: a list of binarized labels for each image
            num_clients: number of federated members (clients)
            initial: the client name prefix, e.g. clients_1
'''
#create a list of client names
client_names = ['{}_{}'.format(initial, i+1) for i in range(num_clients)]
#randomize the data
data = list(zip(image_list, label_list))
random.shuffle(data) # <- IID
#shard data and place at each client
size = len(data)//num_clients
shards = [data[i:i + size] for i in range(0, size*num_clients, size)]
#number of clients must equal number of shards
assert(len(shards) == len(client_names))
return {client_names[i] : shards[i] for i in range(len(client_names))}
def batch_data(data_shard, bs=32):
    '''Takes in a client's data shard and creates a tf.data.Dataset object from it
        args:
            data_shard: a (data, label) sequence constituting a client's data shard
            bs: batch size
        return:
            tf.data.Dataset object'''
    # separate shard into data and label lists
data, label = zip(*data_shard)
# dataset = tf.data.Dataset.from_tensor_slices(list(data), list(label))
dataset = tf.data.Dataset.from_tensor_slices((list(data), list(label)))
return dataset.shuffle(len(label)).batch(bs)
def weight_scalling_factor(clients_trn_data, client_name):
client_names = list(clients_trn_data.keys())
#get the bs
bs = list(clients_trn_data[client_name])[0][0].shape[0]
#first calculate the total training data points across clients
global_count = sum([tf.data.experimental.cardinality(clients_trn_data[client_name]).numpy() for client_name in client_names])*bs
# get the total number of data points held by a client
local_count = tf.data.experimental.cardinality(clients_trn_data[client_name]).numpy()*bs
return local_count/global_count
def scale_model_weights(weight, scalar):
'''function for scaling a models weights'''
weight_final = []
steps = len(weight)
for i in range(steps):
weight_final.append(scalar * weight[i])
return weight_final
def sum_scaled_weights(scaled_weight_list):
    '''Return the sum of the listed scaled weights. This is equivalent to the scaled average of the weights.'''
avg_grad = list()
    # get the average grad across all client gradients
for grad_list_tuple in zip(*scaled_weight_list):
layer_mean = tf.math.reduce_sum(grad_list_tuple, axis=0)
avg_grad.append(layer_mean)
return avg_grad
def test_model(X_test, Y_test, model, comm_round):
cce = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
logits = model.predict(X_test)
loss = cce(Y_test, logits)
acc = accuracy_score(tf.argmax(logits, axis=1), tf.argmax(Y_test, axis=1))
print('comm_round: {} | global_acc: {:.3%} | global_loss: {}'.format(comm_round, acc, loss))
return acc, loss
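# Toy sanity check (not part of the original notebook; the weights below are
# hypothetical): scaling each client's weights by its data fraction and summing
# them is simply a weighted average of the clients' weights.
_toy_client_a = [np.array([1.0, 2.0])]   # single-"layer" weights for client A
_toy_client_b = [np.array([3.0, 6.0])]   # single-"layer" weights for client B
_scaled = [scale_model_weights(_toy_client_a, 0.25),
           scale_model_weights(_toy_client_b, 0.75)]
print(sum_scaled_weights(_scaled))       # expected: a single tensor close to [2.5, 5.0]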
# + [markdown] id="wINO-rQoPP9d"
# ## Building CNN model
# The model takes an `input-shape` of `[45,45,3]`.
#
# Each `Conv2D` layer has `filters` = `32`, `kernel_size` = `3` and `'relu'` activation.
#
# Each `MaxPool2D` layer uses `strides` = `2` and `pool_size` = `2`.
#
# After the 2 `Conv2D` layers, the feature maps are passed through `Flatten`
# and then fully connected `Dense` layers.
#
# The output is a `Dense` layer of `82` units with `softmax` activation.
# + id="V8sqXLqRL5Vj"
def create_keras_model(shape , classes):
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(filters =32 , kernel_size =3 , activation='relu' , input_shape = [shape[0], shape[1] , shape[2] ]))
# model.add(Lambda(lambda x: expand_dims(x, axis=-1)))
# model.add(tf.keras.layers.Conv2D(filters =32 , kernel_size =3 , activation='relu' , input_shape =(shape,)))
model.add(tf.keras.layers.MaxPool2D( strides = 2,pool_size=2))
model.add(tf.keras.layers.Conv2D(filters = 32, kernel_size = 3 , activation = 'relu' ))
model.add(tf.keras.layers.MaxPool2D(strides = 2, pool_size =2))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(units = 128 , activation = 'relu'))
model.add(tf.keras.layers.Dropout(0.25))
model.add(tf.keras.layers.Dense(units = 128 , activation = 'relu'))
model.add(tf.keras.layers.Dropout(0.25))
model.add(tf.keras.layers.Dense(units =classes ,activation ='softmax'))
return model
# + [markdown] id="RRSFOsQMQ6uc"
# ## Getting images from the directories and giving them their respective labels.
#
# `Labels` are then one-hot (binary) encoded with `LabelBinarizer()`
# + colab={"base_uri": "https://localhost:8080/"} id="6jybBZygYajq" outputId="c79709de-b4b9-4cef-f6df-a2ff795443ab"
# declare the path to your image data folder
img_path = './dataset/train/'
image_paths = sorted(list(paths.list_images(img_path)))
#apply our function
image_list, label_list = load(image_paths)
print(len(image_list))
#binarize the labels
lb = LabelBinarizer()
label_list = lb.fit_transform(label_list)
print(len(label_list))
# + [markdown] id="q7USb1bwRURl"
# Splitting dataset into `X_train, X_test, y_train, y_test`
# + id="X70EUPjLYajr"
#split data into training and test set
X_train, X_test, y_train, y_test = train_test_split(image_list,
label_list,
test_size=0.1,
random_state=42)
# + [markdown] id="pwvm90BaYajv"
# ### IID
# + colab={"base_uri": "https://localhost:8080/"} id="JQh87cnCYajw" outputId="1175b446-1a9c-4042-f91b-30ed88c80464"
len(X_train), len(X_test), len(y_train), len(y_test)
# + [markdown] id="ENsBsxe1RgLg"
# ## Creating Clients and batching them with datasets.
# + id="z5iyVxTsYajx"
#create clients
num_clients = 10
clients = create_clients(X_train, y_train, num_clients=num_clients, initial='client') #10 clients
# + id="2A0HsxVqYajz"
# process and batch the training data for each client
clients_batched = dict()
for (client_name, data) in clients.items():
clients_batched[client_name] = batch_data(data)
#process and batch the test set
test_batched = tf.data.Dataset.from_tensor_slices((X_test, y_test)).batch(len(y_test))
# + [markdown] id="cd17yPL1Rpwo"
# ## Global Training Parameters
# + id="H_rN_WivYajz"
comms_round = 1
loss='categorical_crossentropy'
metrics = ['accuracy']
optimizer = 'adam'
# + id="iH4Jd-WXYaj0"
#initialize global model
build_shape = [45,45,3] # Train image Dimensions
classes = 82
global_model = create_keras_model(build_shape , classes)
global_acc_list = []
global_loss_list = []
# + colab={"base_uri": "https://localhost:8080/", "height": 762} id="wJkOPO3RYaj0" outputId="fdf35cee-25a3-47ca-fd7a-18bdfa143aa0"
#commence global training loop
for comm_round in range(comms_round):
# get the global model's weights - will serve as the initial weights for all local models
global_weights = global_model.get_weights()
    #initial list to collect local model weights after scaling
scaled_local_weight_list = list()
#randomize client data - using keys
all_client_names = list(clients_batched.keys())
client_names = random.sample(all_client_names, k=10)
# print(client_names, len(client_names))
random.shuffle(client_names)
#loop through each client and create new local model
for client in client_names:
local_model = create_keras_model(build_shape, classes)
local_model.compile(loss=loss,
optimizer=optimizer,
metrics=metrics)
#set local model weight to the weight of the global model
local_model.set_weights(global_weights)
#fit local model with client's data
local_model.fit(clients_batched[client], epochs=1)
#scale the model weights and add to list
scaling_factor = 0.1 # weight_scalling_factor(clients_batched, client)
scaled_weights = scale_model_weights(local_model.get_weights(), scaling_factor)
scaled_local_weight_list.append(scaled_weights)
#clear session to free memory after each communication round
K.clear_session()
#to get the average over all the local model, we simply take the sum of the scaled weights
average_weights = sum_scaled_weights(scaled_local_weight_list)
#update global model
global_model.set_weights(average_weights)
#test global model and print out metrics after each communications round
for(X_test, Y_test) in test_batched:
global_acc, global_loss = test_model(X_test, Y_test, global_model, comm_round)
global_acc_list.append(global_acc)
global_loss_list.append(global_loss)
# + [markdown] id="ratgY_rxSNZV"
# ## Making a single prediction with Global Model
# + colab={"base_uri": "https://localhost:8080/"} id="4GLiBxkec1xK" outputId="38e0f032-5fa9-45b2-8a82-f638c21130c9"
import numpy as np
from keras.preprocessing import image
test_image = image.load_img('5.png' , target_size = (45,45))
#change to PIL format
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image , axis = 0) #extra dimension to batch
result = global_model.predict(test_image)
result = lb.inverse_transform(result)
result1 = "".join(str(e) for e in result)
print(result1)
# + [markdown] id="6aX-BRvVSZAt"
# ## Saving and Loading Models
# + colab={"base_uri": "https://localhost:8080/"} id="8wTMK1pHCOOX" outputId="bb3287ac-4d4a-498e-e56f-d60d8e36e018"
from google.colab import drive
drive.mount('/content/drive')
# + colab={"base_uri": "https://localhost:8080/"} id="F_A3PYRLqlvX" outputId="5c471ae2-4287-419a-e3d0-2b875da3fed6"
global_model.save("/content/drive/MyDrive/global_model/")
# + colab={"base_uri": "https://localhost:8080/"} id="bSya1SOqCN0k" outputId="7c951ec2-6acb-412c-933e-ec35a8125378"
import tensorflow as tf
global_model = tf.keras.models.load_model("/content/drive/MyDrive/global_model/")
# + [markdown] id="dSbM9mXBl3gc"
# ## Using OpenCV to detect and classify multiple digits using bounding boxes.
#
# + colab={"base_uri": "https://localhost:8080/"} id="er9S8sJL0mzs" outputId="8cbea865-5291-418d-cef6-c5bd70431a53"
# ! pip install wget
# + id="LcB91p4wyu7M"
import wget
file_name = wget.download ("https://i.imgur.com/hRmXkdQ.png")
# # ! wget https://i.imgur.com/PwWiTk8.png
# file_url = "https://i.imgur.com/PwWiTk8.png"
# file_name = file_url.split("/")[-1]
# + colab={"base_uri": "https://localhost:8080/", "height": 240} id="vXWm_b8yl7Qf" outputId="59e02a7c-7416-40ed-9b4d-0f1133387c28"
from tensorflow.keras.preprocessing import image
import cv2
import matplotlib.pyplot as plt
# img = cv2.imread("5.png" )
img = cv2.imread(file_name)
img_copy = img.copy()
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
plt.imshow(img, cmap = 'gray')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 252} id="uu_xktICwpEI" outputId="77902fe5-bd9c-4d8c-f47d-7418a54336db"
# img = image.img_to_array(img, dtype='uint8')
(thresh, img_bin) = cv2.threshold(img_gray, 20, 255 , cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
plt.imshow(img_bin,cmap='gray')
plt.title('Threshold: {}'.format(thresh))
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 258} id="7hlugLzv05Ym" outputId="78c5babc-cc45-465c-bbee-01794a5606f0"
cv2.floodFill(img_bin, None, (0, 0), 0)
plt.imshow(img_bin,)
# + id="ALvABQ9c1LrH"
# Get each bounding box
# Find the big contours/blobs on the filtered image:
# Bug use img_gray not binary image
contours, hierarchy = cv2.findContours(img_bin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# + id="HDebl5T9pXjb"
sorted_ctrs = sorted(contours, key=lambda ctr: cv2.boundingRect(ctr)[0])
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="ZXitGlj11Yod" outputId="2b27e4d4-2d7a-4825-d83d-810632ef80de"
from google.colab.patches import cv2_imshow
from keras.preprocessing import image
import numpy as np
predictions = []
# Look for the outer bounding boxes (no children):
current_crop = [] # hold the cropped images
for _, c in enumerate(sorted_ctrs):
# Get the bounding rectangle of the current contour:
boundRect = cv2.boundingRect(c)
# Get the bounding rectangle data:
rectX = boundRect[0]
rectY = boundRect[1]
rectWidth = boundRect[2]
rectHeight = boundRect[3]
# Estimate the bounding rect area:
rectArea = rectWidth * rectHeight
# print(rectArea)
# Set a min area threshold
minArea =40
# Filter blobs by area:
if rectArea > minArea:
# Draw bounding box:
color = (255, 0, 255)
cv2.rectangle(img_copy, (int(rectX), int(rectY)),
(int(rectX + rectWidth), int(rectY + rectHeight)), color, 1)
# Crop bounding box:
currentCrop = img[rectY-3:rectY+rectHeight+3,rectX-3:rectX+rectWidth+3]
current_crop.append(currentCrop)
# Resize image to (45,45)
test_image = cv2.resize(currentCrop, (45,45), interpolation = cv2.INTER_AREA)
#change to PIL format
# test_image = cropped_image
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image , axis = 0) #extra dimension to batch
result = global_model.predict(test_image)
result = lb.inverse_transform(result)
result1 = "".join(str(e) for e in result)
predictions.append(result1)
print(predictions)
cv2.putText(img_copy ,result1, (int(rectX), int(rectY)-5 ),cv2.FONT_HERSHEY_SIMPLEX,1,(0,0,255),1)
cv2_imshow(currentCrop)
# img_copy = cv2.resize(img_copy, (100,100), interpolation = cv2.INTER_AREA)
# cv2_imshow(img_copy)
plt.imshow(img_copy, cmap = 'gray')
plt.show()
cv2.waitKey(0)
pred = "".join(str(e) for e in predictions)
print(eval(pred))
# + [markdown] id="i5KQJ-PNf31w"
# ## Model Evaluation Metrics
#
# + colab={"base_uri": "https://localhost:8080/"} id="cGHDx_4p9WBy" outputId="6dc578fe-f0b8-43bd-f45d-1abe58fd69b4"
y_pred = global_model.predict(X_test)
# y_pred = np.argmax(y_pred,axis=1)
print(y_pred)
y_pred = lb.inverse_transform(y_pred)
print(y_pred)
# + colab={"base_uri": "https://localhost:8080/"} id="9zgldhxd9xY9" outputId="601e8edf-7a85-4b52-e589-2e0ef5ed275d"
y_test1 = y_test.copy()
print(y_test1)
# y_test1 = lb.inverse_transform(y_test1)
y_test1 = lb.inverse_transform(y_test1)
print(y_test1)
# + colab={"base_uri": "https://localhost:8080/"} id="1iFRxqsyf7rR" outputId="1d6aa44d-3ed7-4c81-a23e-c2d3bd243a2d"
# Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test1,y_pred)
# + id="H9ryRTSs9hmE"
import warnings
warnings.filterwarnings('always')
from sklearn.metrics import precision_recall_fscore_support
# + colab={"base_uri": "https://localhost:8080/"} id="duuU9k9_Xhw_" outputId="c7a42ec7-5d6c-4cb0-9209-7561ae8be61b"
precision , recall , fscore , support = precision_recall_fscore_support(y_test1,y_pred , average='weighted')
print("Precision: {:.3f}\n Recall: {:.3f}\n fscore: {:.3f}\n Support: {}\n".format(precision , recall , fscore , support))
# + id="yab4zs_IfLEA"
labels = ['!','(',')','+',',','-','0','1','2','3','4','5','6','7','8','9','=','A','C',
'Delta','G','H','M','N','R','S','T','X','[',']','alpha','ascii_124','b',
'beta','cos','d','div','e','exists','f','forall','forward_slash','gamma','geq','gt',
'i','in','infty','int','j','k','l','lambda','ldots','leq','lim','log','lt','mu',
'neq','o','p','phi','pi','pm',
# 'prime',
'q','rightarrow','sigma','sin','sqrt','sum','tan','theta','times','u','v','w','y','z','{','}']
# + id="CKgLa3GGZsRl"
# import seaborn as sns
# import pandas as pd
# import matplotlib.pyplot as plt
# df_cm = pd.DataFrame(cm, index = [i for i in labels],
# columns = [i for i in labels])
# plt.figure(figsize = (30,30))
# sns.heatmap(df_cm, annot=True , cmap = "Blues")
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="nFwzTBMZihMg" outputId="8e48178b-fbc5-4afc-befb-0d7895183c49"
import matplotlib.pyplot as plt
plt.figure(figsize = (20,20))
plt.imshow(cm, cmap=plt.cm.Blues_r)
plt.xlabel("Predicted labels")
plt.ylabel("True labels")
plt.xticks([], [])
plt.yticks([], [])
plt.title('Confusion matrix ')
plt.colorbar()
plt.show()
# + id="aUkQw_xGCPHi"
# import numpy as np
# import matplotlib.pyplot as plt
# from matplotlib.animation import FuncAnimation
# from IPython.display import HTML
# x_data = []
# y_data = []
# fig, ax = plt.subplots()
# ax.set_xlim(0,20)
# ax.set_ylim(0, 0.95)
# line, = ax.plot(0, 0)
# arr = list(range(0, len(global_acc_list)))
# def animation_frame(i):
# x_data = arr[:i]
# y_data = global_acc_list[:i]
# line.set_xdata(x_data)
# line.set_ydata(y_data)
# return line,
# animation = FuncAnimation(fig, func=animation_frame, frames = 100, interval=40 , blit= True)
# HTML(animation.to_html5_video())
# + colab={"base_uri": "https://localhost:8080/", "height": 298} id="yZN-NQ9xYaj1" outputId="9c81ff7d-ee55-4c11-8ba2-a8f17efe01ca"
# IID
import matplotlib.pyplot as plt
plt.figure(figsize=(15,4))
plt.subplot(121)
plt.title("IID | total comm rounds: {}".format(len(global_acc_list)))
plt.plot(list(range(0,len(global_loss_list))), global_loss_list , label = "global_loss")
plt.legend()
plt.subplot(122)
plt.title("IID | total comm rounds: {}".format(len(global_acc_list)))
plt.plot(list(range(0,len(global_acc_list))), global_acc_list , label = 'global_accuracy' , color = 'orange')
plt.legend()
print('IID | total comm rounds', len(global_acc_list))
# + id="UF10q5RjYaj2"
iid_df = pd.DataFrame(list(zip(global_acc_list, global_loss_list)), columns =['global_acc_list', 'global_loss_list'])
iid_df.to_csv('IID.csv',index=False)
# + [markdown] id="EDxBPCz0Yaj3"
# ### Non-IID
# + id="2ULRv0iKlWRd"
def batch_data(data_shard, bs=32):
    '''Takes in a client's data shard and creates a tf.data.Dataset object from it
        args:
            data_shard: a (data, label) sequence constituting a client's data shard
            bs: batch size
        return:
            tf.data.Dataset object'''
    # separate shard into data and label lists
data, label = zip(*data_shard)
# dataset = tf.data.Dataset.from_tensor_slices(list(data), list(label))
dataset = tf.data.Dataset.from_tensor_slices((list(data), list(label)))
return dataset.shuffle(len(label)).batch(bs)
# + id="JfvnvItPYaj4"
def create_clients(image_list, label_list, num_clients=100, initial='clients'):
''' return: a dictionary with keys clients' names and value as
data shards - tuple of images and label lists.
args:
image_list: a list of numpy arrays of training images
            label_list: a list of binarized labels for each image
            num_clients: number of federated members (clients)
            initial: the client name prefix, e.g. clients_1
'''
#create a list of client names
client_names = ['{}_{}'.format(initial, i+1) for i in range(num_clients)]
#randomize the data
# data = list(zip(image_list, label_list))
# random.shuffle(data) # <- IID
# sort data for non-iid
max_y = np.argmax(label_list, axis=1)
sorted_zip = sorted(zip(max_y, label_list, image_list), key=lambda x: x[0])
data = [(x,y) for _,y,x in sorted_zip]
#shard data and place at each client
size = len(data)//num_clients
shards = [data[i:i + size] for i in range(0, size*num_clients, size)]
#number of clients must equal number of shards
# assert(len(shards) == len(client_names))
return {client_names[i] : shards[i] for i in range(len(client_names))}
# + colab={"base_uri": "https://localhost:8080/"} id="bhSwtmMqYaj6" outputId="73369599-0ccb-4e7d-badf-c7fbdd5742f7"
len(X_train), len(X_test), len(y_train), len(y_test)
# + id="2WRvvt1uYaj6"
#create clients
clients = create_clients(X_train, y_train, num_clients=10, initial='client')
# + id="lS4Hr2cSYaj7"
#process and batch the training data for each client
clients_batched = dict()
for (client_name, data) in clients.items():
clients_batched[client_name] = batch_data(data)
#process and batch the test set
test_batched = tf.data.Dataset.from_tensor_slices((X_test, y_test)).batch(len(y_test))
# + id="HuL63pOIYaj7"
comms_round = 20
loss='categorical_crossentropy'
metrics = ['accuracy']
optimizer = 'adam'
# + id="dyz5qkwUYaj7"
#initialize global model
build_shape = [45,45,3] # Shape of Image
classes = 82
global_model = create_keras_model(build_shape , classes)
global_acc_list = []
global_loss_list = []
# + colab={"base_uri": "https://localhost:8080/"} id="T5MNwlaCYaj8" outputId="83dad4fa-2b8a-452c-aa87-5fcbfc8ed79d"
#commence global training loop
for comm_round in range(comms_round):
# get the global model's weights - will serve as the initial weights for all local models
global_weights = global_model.get_weights()
#initial list to collect local model weights after scalling
scaled_local_weight_list = list()
#randomize client data - using keys
# all_client_names = list(clients_batched.keys())
client_names = list(clients_batched.keys())
print(client_names)
# random.shuffle(client_names)
#loop through each client and create new local model
for client in client_names:
local_model = create_keras_model(build_shape, classes)
local_model.compile(loss=loss,
optimizer=optimizer,
metrics=metrics)
#set local model weight to the weight of the global model
local_model.set_weights(global_weights)
#fit local model with client's data
local_model.fit(clients_batched[client], epochs=1, workers = 8 )
#scale the model weights and add to list
scaling_factor = 0.1 # weight_scalling_factor(clients_batched, client)
scaled_weights = scale_model_weights(local_model.get_weights(), scaling_factor)
scaled_local_weight_list.append(scaled_weights)
# print(scaled_local_weight_list)
#clear session to free memory after each communication round
K.clear_session()
#to get the average over all the local model, we simply take the sum of the scaled weights
average_weights = sum_scaled_weights(scaled_local_weight_list)
#update global model
global_model.set_weights(average_weights)
#test global model and print out metrics after each communications round
for(X_test, Y_test) in test_batched:
global_acc, global_loss = test_model(X_test, Y_test, global_model, comm_round)
global_acc_list.append(global_acc)
global_loss_list.append(global_loss)
# + colab={"base_uri": "https://localhost:8080/", "height": 298} id="E0kP3CxAYaj-" outputId="705b53f0-eb72-49a6-e7ee-53f76c52b581"
# NON-IID
import matplotlib.pyplot as plt
plt.figure(figsize=(15,4))
plt.subplot(121)
plt.title("NON-IID | total comm rounds: {}".format(len(global_acc_list)))
plt.plot(list(range(0,len(global_loss_list))), global_loss_list , label = "global_loss")
plt.legend()
plt.subplot(122)
plt.title("NON-IID | total comm rounds: {}".format(len(global_acc_list)))
plt.plot(list(range(0,len(global_acc_list))), global_acc_list , label = 'global_accuracy' , color = 'orange')
plt.legend()
print('NON-IID | total comm rounds', len(global_acc_list))
# + [markdown] id="kYK4aoe3BgSl"
# ## Training Solely on a CNN for 20 Epochs
# + id="YPT8KoHSvq5p"
cnn_dataset = tf.data.Dataset.from_tensor_slices((X_train, y_train)).batch(32)
# + id="hSQPXYIOwLX6"
build_shape = [45,45,3] # Train image Dimensions
classes = 82
loss='categorical_crossentropy'
metrics = ['accuracy']
optimizer = 'adam'
# + id="PQGRIxnEv99K"
cnn_model = create_keras_model( build_shape , classes)
cnn_model.compile(
loss=loss,
optimizer=optimizer,
metrics=metrics
)
# + colab={"background_save": true, "base_uri": "https://localhost:8080/"} id="jD5VL82OxIdq" outputId="13d65aa0-3452-4a2d-faa4-d16c18f0ee30"
cnn_model.fit(cnn_dataset , epochs = 20)
# + id="ubbOlqiIxc4j"
# for(X_test, Y_test) in test_batched:
# acc, loss = test_model(X_test, Y_test, cnn_model, 1)
# + id="bu--HmHm66Mr"
acc_list = cnn_model.history.history['accuracy']
loss_list = cnn_model.history.history['loss']
# + colab={"base_uri": "https://localhost:8080/", "height": 298} id="EqkLw3US-Ng4" outputId="a15c294b-fa8c-4885-df81-37727ca472b4"
# Single CNN
import matplotlib.pyplot as plt
plt.figure(figsize=(15,4))
plt.subplot(121)
plt.title("CNN Training Losses for 20 epochs")
plt.plot(list(range(0,len(loss_list))), loss_list , label = "CNN_loss")
plt.legend()
plt.subplot(122)
plt.title("CNN Training Accuracy for 20 epochs")
plt.plot(list(range(0,len(acc_list))), acc_list , label = 'CNN_accuracy' , color = 'orange')
plt.legend()
print('CNN Training for 20 epochs')
# + id="65gGYkoW7ECj"
y_pred = cnn_model.predict(X_test)
# y_pred = np.argmax(y_pred,axis=1)
print(y_pred)
y_pred = lb.inverse_transform(y_pred)
print(y_pred)
| federated_learning_Maths_Handwritten.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
participants = ["ALLAOUI", "CIRG_UP_KUNLE", "OMEGA", "jomar", "rrg", "shisunzhang"]
problems = ["a280_n279", "a280_n1395", "a280_n2790",
"fnl4461_n4460", "fnl4461_n22300", "fnl4461_n44600",
"pla33810_n33809", "pla33810_n169045", "pla33810_n338090"
]
| submissions/.ipynb_checkpoints/evaluation-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Setup
import torch
from torch import tensor
import ipywidgets as widgets
import matplotlib.pyplot as plt
# %matplotlib inline
# ## Task
# Suppose we have a dataset with just a single feature `x` and continuous outcome variable `y`.
#
# In general we're going to be faced with a dataset with an unknown and probably nonlinear relationship. But for now let's use a simple dataset with a known linear relationship:
# +
true_weights = 4.0
true_bias = -1.0
# Make the randomness consistent
torch.manual_seed(0)
# Use random x values
x = torch.rand(100)
# Generate random noise, same shape as *x*, that has some outliers.
#noise = torch.randn_like(x)
noise = torch.distributions.studentT.StudentT(2.0).sample(x.shape)
#print(f"Noise mean: {noise.mean()}, noise variance {noise.var()}")
# Generate true y values
y_true = true_weights * x + true_bias + noise
# Make a scatterplot. The semicolon at the end says to ignore the return value.
plt.scatter(x, y_true); plt.ylim(-11, 6);
# -
bias = 0.0
@widgets.interact(slope=(-5.0, 5.0))
def plot_linreg(slope):
y_pred = slope * x + bias
plt.scatter(x, y_true); plt.plot(x, y_pred, 'r');
resid = y_true - y_pred
mse = resid.pow(2).mean()
mae = resid.abs().mean()
print(f"MSE: {mse}, MAE: {mae}")
# - Slope that minimizes MSE: 1.4
# - Slope that minimizes MAE: 2.4
# - Description of the difference: MAE is less sensitive to outliers, so the MAE-minimizing slope is pulled less by the extreme noise values
# ### Gradient
#
# Make a function that computes the MSE.
def linreg_mse(slope):
y_pred = slope * x + bias
resid = y_true - y_pred
return resid.pow(2).mean()
linreg_mse(0.0)
linreg_mse(2.4)
linreg_mse(1.4)
# Plot the MSE as a function of slope.
# +
slopes = torch.linspace(-1, 2, steps=100)
mse_s = []
for slope in slopes:
mse_s.append(linreg_mse(slope))
@widgets.interact(slope=(-1.0, 2.0))
def plot_tangents(slope):
plt.plot(slopes, mse_s, label="MSE"); plt.xlabel("slope"); plt.ylabel("MSE");
# finite difference method for numerically approximating a gradient
eps = .01
gradient = (linreg_mse(slope + eps) - linreg_mse(slope)) / eps
plt.plot(slopes, (slopes - slope) * gradient + linreg_mse(slope), label=f"Tangent line at {slope:.3f}");
print(gradient)
plt.ylim(4, 6.5)
plt.legend();
# -
# finite difference method for numerically approximating a gradient
slope = 0.0
eps = 1e-3
gradient = (linreg_mse(slope + eps) - linreg_mse(slope)) / eps
gradient.item()
# ### Gradient Descent
losses = []
slope = 0.0
N_ITER = 100
for i in range(N_ITER):
loss = linreg_mse(slope)
gradient = (linreg_mse(slope + eps) - linreg_mse(slope)) / eps
slope -= .1 * gradient
#print(f"slope = {slope:.3f}, Loss = {loss:.3f}, gradient = {gradient}")
losses.append(loss)
plt.plot(losses)
slope
# ## PyTorch autograd
eps = 1e-3
gradients = []
grads_finite_diff = []
for slope in torch.linspace(1.3, 1.5, steps=100):
slope = torch.tensor(slope.item(), requires_grad=True)
loss = linreg_mse(slope)
loss.backward()
gradients.append(slope.grad.item())
grads_finite_diff.append(
(linreg_mse(slope.item() + eps)
-
linreg_mse(slope.item()))
/ (eps))
plt.plot(slopes, gradients)
plt.plot(slopes, grads_finite_diff)
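# A small extension (not in the original walkthrough): the same gradient-descent
# loop as above, but with autograd supplying the gradient instead of the
# finite-difference approximation.
slope = torch.tensor(0.0, requires_grad=True)
for i in range(N_ITER):
    loss = linreg_mse(slope)
    loss.backward()                  # fills slope.grad with d(loss)/d(slope)
    with torch.no_grad():
        slope -= 0.1 * slope.grad    # same step size as the loop above
    slope.grad.zero_()               # clear the accumulated gradient
slope.item()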
| content/units/04models/lab/lab04-walkthrough.ipynb |
# ---
# title: "DataFrames counting"
# date: 2020-04-12T14:41:32+02:00
# author: "<NAME>"
# type: technical_note
# draft: false
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Most common summary statistics:
# - `.mean()`
# - `.std()`
# - `.var()`
# - `.median()`
# - `.mode()`
# - `.min()`
# - `.max()`
# - `.sum()`
# - `.quantile()`
# ### Generate some data
# +
import pandas as pd
import numpy as np
from pandas import Timestamp
mycols = ['store', 'type', 'department', 'date', 'weekly_sales', 'is_holiday']
content = np.array([[1, 'A', 1, Timestamp('2010-02-05 00:00:00'), 24924.5, False],[1, 'B', 1, Timestamp('2010-03-05 00:00:00'), 21827.9, True],[1, 'A', 1, Timestamp('2010-04-02 00:00:00'), 57258.43, False],[1, 'C', 1, Timestamp('2010-05-07 00:00:00'), 17413.94, False],[1, 'C', 1, Timestamp('2010-06-04 00:00:00'), 17558.09, False],[1, 'C', 1, Timestamp('2010-07-02 00:00:00'), 16333.14, False],[1, 'C', 1, Timestamp('2010-08-06 00:00:00'), 17508.41, False],[1, 'B', 1, Timestamp('2010-09-03 00:00:00'), 16241.78, False],[1, 'A', 1, Timestamp('2010-10-01 00:00:00'), 20094.19, True],[1, 'B', 1, Timestamp('2010-11-05 00:00:00'), 34238.88, False],[1, 'C', 1, Timestamp('2010-12-03 00:00:00'), 22517.56, False],[1, 'A', 1, Timestamp('2011-01-07 00:00:00'), 15984.24, False],[1, 'C', 2, Timestamp('2010-02-05 00:00:00'), 50605.27, True],[1, 'A', 2, Timestamp('2010-03-05 00:00:00'), 48397.98, False],[1, 'A', 2, Timestamp('2010-04-02 00:00:00'), 47450.5, False],[1, 'A', 2, Timestamp('2010-05-07 00:00:00'), 47903.01, False],[1, 'A', 2, Timestamp('2010-06-04 00:00:00'), 48754.47, False],[1, 'A', 2, Timestamp('2010-07-02 00:00:00'), 47077.72, False],[1, 'A', 2, Timestamp('2010-08-06 00:00:00'), 50031.73, False],[1, 'A', 2, Timestamp('2010-09-03 00:00:00'), 49015.05, False],[1, 'A', 2, Timestamp('2010-10-01 00:00:00'), 45829.02, False],[1, 'A', 2, Timestamp('2010-11-05 00:00:00'), 46381.43, True],[1, 'A', 2, Timestamp('2010-12-03 00:00:00'), 44405.02, False],[1, 'A', 2, Timestamp('2011-01-07 00:00:00'), 43202.29, False],[1, 'A', 3, Timestamp('2010-02-05 00:00:00'), 13740.12, False],[1, 'A', 3, Timestamp('2010-03-05 00:00:00'), 12275.58, False],[1, 'A', 3, Timestamp('2010-04-02 00:00:00'), 11157.08, False],[1, 'A', 3, Timestamp('2010-05-07 00:00:00'), 9372.8, True],[1, 'A', 3, Timestamp('2010-06-04 00:00:00'), 8001.41, False],[1, 'C', 3, Timestamp('2010-07-02 00:00:00'), 7857.88, True],[1, 'A', 3, Timestamp('2010-08-06 00:00:00'), 26719.02, False],[1, 'A', 3, Timestamp('2010-09-03 00:00:00'), 19081.8, False],[1, 'B', 3, Timestamp('2010-10-01 00:00:00'), 9775.17, False],[1, 'A', 3, Timestamp('2010-11-05 00:00:00'), 9825.22, False],[1, 'A', 3, Timestamp('2010-12-03 00:00:00'), 10856.85, False],[1, 'A', 3, Timestamp('2011-01-07 00:00:00'), 15808.15, False],[1, 'A', 4, Timestamp('2010-02-05 00:00:00'), 39954.04, False],[1, 'A', 4, Timestamp('2010-03-05 00:00:00'), 38086.19, False],[1, 'A', 4, Timestamp('2010-04-02 00:00:00'), 37809.49, False],[1, 'A', 4, Timestamp('2010-05-07 00:00:00'), 37168.34, False],[1, 'A', 4, Timestamp('2010-06-04 00:00:00'), 40548.19, False],[1, 'B', 4, Timestamp('2010-07-02 00:00:00'), 39773.71, False],[1, 'A', 4, Timestamp('2010-08-06 00:00:00'), 40973.88, False],[1, 'A', 4, Timestamp('2010-09-03 00:00:00'), 38321.88, False],[1, 'A', 4, Timestamp('2010-10-01 00:00:00'), 34912.45, False],[1, 'A', 4, Timestamp('2010-11-05 00:00:00'), 37980.55, False],[1, 'A', 4, Timestamp('2010-12-03 00:00:00'), 37110.55, False],[1, 'A', 4, Timestamp('2011-01-07 00:00:00'), 37947.8, False],[1, 'A', 5, Timestamp('2010-02-05 00:00:00'), 32229.38, False],[1, 'A', 5, Timestamp('2010-03-05 00:00:00'), 23082.14, False],[1, 'B', 5, Timestamp('2010-04-02 00:00:00'), 29967.92, False],[1, 'A', 5, Timestamp('2010-05-07 00:00:00'), 19260.44, False],[1, 'A', 5, Timestamp('2010-06-04 00:00:00'), 22932.26, False],[1, 'A', 5, Timestamp('2010-07-02 00:00:00'), 18887.71, False],[1, 'A', 5, Timestamp('2010-08-06 00:00:00'), 16926.17, False],[1, 'A', 5, Timestamp('2010-09-03 00:00:00'), 15390.52, False],[1, 'A', 5, 
Timestamp('2010-10-01 00:00:00'), 23381.38, False],[1, 'A', 5, Timestamp('2010-11-05 00:00:00'), 23903.81, False],[1, 'A', 5, Timestamp('2010-12-03 00:00:00'), 36472.02, False],[1, 'A', 5, Timestamp('2011-01-07 00:00:00'), 22699.69, False],[1, 'A', 6, Timestamp('2010-02-05 00:00:00'), 5749.03, False],[1, 'A', 6, Timestamp('2010-03-05 00:00:00'), 4221.25, False],[1, 'A', 6, Timestamp('2010-04-02 00:00:00'), 4132.61, False],[1, 'A', 6, Timestamp('2010-05-07 00:00:00'), 7477.7, False],[1, 'A', 6, Timestamp('2010-06-04 00:00:00'), 5484.9, False],[1, 'A', 6, Timestamp('2010-07-02 00:00:00'), 4541.91, False],[1, 'A', 6, Timestamp('2010-08-06 00:00:00'), 4700.38, False],[1, 'A', 6, Timestamp('2010-09-03 00:00:00'), 3553.75, False],[1, 'B', 6, Timestamp('2010-10-01 00:00:00'), 2876.19, False],[1, 'A', 6, Timestamp('2010-11-05 00:00:00'), 5036.99, False],[1, 'A', 6, Timestamp('2010-12-03 00:00:00'), 6356.96, False],[1, 'A', 6, Timestamp('2011-01-07 00:00:00'), 1376.15, False],[1, 'A', 7, Timestamp('2010-02-05 00:00:00'), 21084.08, False],[1, 'A', 7, Timestamp('2010-03-05 00:00:00'), 19659.7, False],[1, 'A', 7, Timestamp('2010-04-02 00:00:00'), 22427.62, False],[1, 'A', 7, Timestamp('2010-05-07 00:00:00'), 20457.62, False],[1, 'A', 7, Timestamp('2010-06-04 00:00:00'), 44563.68, False],[1, 'A', 7, Timestamp('2010-07-02 00:00:00'), 22589.0, False],[1, 'A', 7, Timestamp('2010-08-06 00:00:00'), 21842.57, False],[1, 'A', 7, Timestamp('2010-09-03 00:00:00'), 18005.65, False],[1, 'A', 7, Timestamp('2010-10-01 00:00:00'), 16481.79, False],[1, 'A', 7, Timestamp('2010-11-05 00:00:00'), 19136.58, False],[1, 'A', 7, Timestamp('2010-12-03 00:00:00'), 47406.83, False],[1, 'A', 7, Timestamp('2011-01-07 00:00:00'), 17516.16, False],[1, 'A', 8, Timestamp('2010-02-05 00:00:00'), 40129.01, False],[1, 'A', 8, Timestamp('2010-03-05 00:00:00'), 38776.09, False],[1, 'A', 8, Timestamp('2010-04-02 00:00:00'), 38151.58, False],[1, 'A', 8, Timestamp('2010-05-07 00:00:00'), 35393.78, False],[1, 'A', 8, Timestamp('2010-06-04 00:00:00'), 35181.47, False],[1, 'A', 8, Timestamp('2010-07-02 00:00:00'), 35580.01, False],[1, 'A', 8, Timestamp('2010-08-06 00:00:00'), 34833.35, False],[1, 'A', 8, Timestamp('2010-09-03 00:00:00'), 35562.68, False],[1, 'A', 8, Timestamp('2010-10-01 00:00:00'), 34658.25, False],[1, 'A', 8, Timestamp('2010-11-05 00:00:00'), 36182.58, False],[1, 'A', 8, Timestamp('2010-12-03 00:00:00'), 36222.74, False],[1, 'A', 8, Timestamp('2011-01-07 00:00:00'), 36599.46, False],[1, 'A', 9, Timestamp('2010-02-05 00:00:00'), 16930.99, False],[1, 'A', 9, Timestamp('2010-03-05 00:00:00'), 24064.7, False],[1, 'A', 9, Timestamp('2010-04-02 00:00:00'), 25435.02, False],[1, 'A', 9, Timestamp('2010-05-07 00:00:00'), 27588.34, False]])
df = pd.DataFrame(content, columns=mycols)
df['frac_sales'] = df['weekly_sales']*np.random.rand()
# -
df.head()
# ### Drop duplicates with `.drop_duplicates()`
df.drop_duplicates(subset='type')
df.drop_duplicates(subset=['type','is_holiday'])
df_unique = df.drop_duplicates(subset=['type','is_holiday'])
# ### Count with `.value_counts()`
df_unique['type'].value_counts()
df_unique['type'].value_counts(sort=True)
df_unique['type'].value_counts(normalize=True)
# ### Grouping with `.groupby()`
df.groupby('type')['weekly_sales'].min()
# Example of how much the syntax gets simplified with `.groupby()`
# +
# Calc total weekly sales
sales_all = df["weekly_sales"].sum()
# Subset for type A stores, calc total weekly sales
sales_A = df[df["type"] == "A"]["weekly_sales"].sum()
# Subset for type B stores, calc total weekly sales
sales_B = df[df["type"] == "B"]["weekly_sales"].sum()
# Subset for type C stores, calc total weekly sales
sales_C = df[df["type"] == "C"]["weekly_sales"].sum()
print('all=', sales_all)
print('A ', sales_A)
print('B ', sales_B)
print('C ', sales_C)
# -
df.groupby('type')['weekly_sales'].sum()
print('A ', sales_A/sales_all)
print('B ', sales_B/sales_all)
print('C ', sales_C/sales_all)
df.groupby("type")["weekly_sales"].sum()/df['weekly_sales'].sum()
# #### Multiple categories
df.groupby(['type','is_holiday'])['weekly_sales'].sum()
# #### Multiple grouped summaries
df.groupby('type')['weekly_sales'].agg([np.min,np.max])
df.groupby('type')[['weekly_sales','frac_sales']].agg([np.min,np.max])
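# The same `.agg()` call also accepts a custom function. A small sketch (the helper `sales_range` is illustrative, not from the course notes):
# +
def sales_range(column):
    # difference between the largest and smallest value in the column
    return column.max() - column.min()
df.groupby('type')[['weekly_sales', 'frac_sales']].agg(sales_range)
# -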
| courses/datacamp/notes/python/pandas/countingDF.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="MfBg1C5NB3X0"
# # Distributed training with TensorFlow
#
# **Learning Objectives**
# 1. Create MirroredStrategy
# 2. Integrate tf.distribute.Strategy with tf.keras
# 3. Create the input dataset and call tf.distribute.Strategy.experimental_distribute_dataset
#
# + [markdown] id="xHxb-dlhMIzW"
# ## Introduction
#
# `tf.distribute.Strategy` is a TensorFlow API to distribute training
# across multiple GPUs, multiple machines or TPUs. Using this API, you can distribute your existing models and training code with minimal code changes.
#
# `tf.distribute.Strategy` has been designed with these key goals in mind:
#
# * Easy to use and support multiple user segments, including researchers, ML engineers, etc.
# * Provide good performance out of the box.
# * Easy switching between strategies.
#
# `tf.distribute.Strategy` can be used with a high-level API like [Keras](https://www.tensorflow.org/guide/keras), and can also be used to distribute custom training loops (and, in general, any computation using TensorFlow).
#
# In TensorFlow 2.x, you can execute your programs eagerly, or in a graph using [`tf.function`](function.ipynb). `tf.distribute.Strategy` intends to support both these modes of execution, but works best with `tf.function`. Eager mode is only recommended for debugging purpose and not supported for `TPUStrategy`. Although training is the focus of this guide, this API can also be used for distributing evaluation and prediction on different platforms.
#
# You can use `tf.distribute.Strategy` with very few changes to your code, because we have changed the underlying components of TensorFlow to become strategy-aware. This includes variables, layers, models, optimizers, metrics, summaries, and checkpoints.
#
# In this guide, we explain various types of strategies and how you can use them in different situations. To learn how to debug performance issues, see the [Optimize TensorFlow GPU Performance](gpu_performance_analysis.md) guide.
#
# Note: For a deeper understanding of the concepts, please watch [this deep-dive presentation](https://youtu.be/jKV53r9-H14). This is especially recommended if you plan to write your own training loop.
#
# Each learning objective will correspond to a __#TODO__ in the notebook where you will complete the notebook cell's code before running. Refer to the [solution](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/production_ml/solutions/distributed_training_with_TF.ipynb) for reference.
#
# + id="EVOZFbNgXghB"
# Import TensorFlow
import tensorflow as tf
# + [markdown] id="m7KBpffWzlxH"
# This notebook uses TF2.x. Please check your tensorflow version using the cell below.
# -
# Show the currently installed version of TensorFlow
print(tf.__version__)
# + [markdown] id="eQ1QESxxEbCh"
# ## Types of strategies
# `tf.distribute.Strategy` intends to cover a number of use cases along different axes. Some of these combinations are currently supported and others will be added in the future. Some of these axes are:
#
# * *Synchronous vs asynchronous training:* These are two common ways of distributing training with data parallelism. In sync training, all workers train over different slices of input data in sync, and aggregating gradients at each step. In async training, all workers are independently training over the input data and updating variables asynchronously. Typically sync training is supported via all-reduce and async through parameter server architecture.
# * *Hardware platform:* You may want to scale your training onto multiple GPUs on one machine, or multiple machines in a network (with 0 or more GPUs each), or on Cloud TPUs.
#
# In order to support these use cases, there are six strategies available. The next section explains which of these are supported in which scenarios in TF. Here is a quick overview:
#
# | Training API | MirroredStrategy | TPUStrategy | MultiWorkerMirroredStrategy | CentralStorageStrategy | ParameterServerStrategy |
# |:----------------------- |:------------------- |:--------------------- |:--------------------------------- |:--------------------------------- |:-------------------------- |
# | **Keras API**               | Supported          | Supported             | Supported                         | Experimental support               | Support planned post 2.4   |
# | **Custom training loop** | Supported | Supported | Supported | Experimental support | Experimental support |
# | **Estimator API** | Limited Support | Not supported | Limited Support | Limited Support | Limited Support |
#
# Note: [Experimental support](https://www.tensorflow.org/guide/versions#what_is_not_covered) means the APIs are not covered by any compatibilities guarantees.
#
# Note: Estimator support is limited. Basic training and evaluation are experimental, and advanced features—such as scaffold—are not implemented. We recommend using Keras or custom training loops if a use case is not covered.
# + [markdown] id="DoQKKK8dtfg6"
# ### MirroredStrategy
# `tf.distribute.MirroredStrategy` supports synchronous distributed training on multiple GPUs on one machine. It creates one replica per GPU device. Each variable in the model is mirrored across all the replicas. Together, these variables form a single conceptual variable called `MirroredVariable`. These variables are kept in sync with each other by applying identical updates.
#
# Efficient all-reduce algorithms are used to communicate the variable updates across the devices.
# All-reduce aggregates tensors across all the devices by adding them up, and makes them available on each device.
# It’s a fused algorithm that is very efficient and can reduce the overhead of synchronization significantly. There are many all-reduce algorithms and implementations available, depending on the type of communication available between devices. By default, it uses NVIDIA NCCL as the all-reduce implementation. You can choose from a few other options, or write your own.
#
# Here is the simplest way of creating `MirroredStrategy`:
#
# + id="9Z4FMAY9ADxK"
mirrored_strategy = tf.distribute.MirroredStrategy()
# + [markdown] id="wldY4aFCAH4r"
# This will create a `MirroredStrategy` instance which will use all the GPUs that are visible to TensorFlow, and use NCCL as the cross device communication.
#
# If you wish to use only some of the GPUs on your machine, you can do so like this:
# + id="nbGleskCACv_"
mirrored_strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"])
# + [markdown] id="8-KDnrJLAhav"
# If you wish to override the cross device communication, you can do so using the `cross_device_ops` argument by supplying an instance of `tf.distribute.CrossDeviceOps`. Currently, `tf.distribute.HierarchicalCopyAllReduce` and `tf.distribute.ReductionToOneDevice` are two options other than `tf.distribute.NcclAllReduce` which is the default.
# + id="6-xIOIpgBItn"
# TODO 1 - Your code goes here.
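# One possible completion (a sketch, not the official solution): override the
# cross-device communication with one of the options listed above.
mirrored_strategy = tf.distribute.MirroredStrategy(
    cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())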
# + [markdown] id="kPEBCMzsGaO5"
# ### TPUStrategy
# `tf.distribute.TPUStrategy` lets you run your TensorFlow training on Tensor Processing Units (TPUs). TPUs are Google's specialized ASICs designed to dramatically accelerate machine learning workloads. They are available on Google Colab, the [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc) and [Cloud TPU](https://cloud.google.com/tpu).
#
# In terms of distributed training architecture, `TPUStrategy` is the same as `MirroredStrategy` - it implements synchronous distributed training. TPUs provide their own implementation of efficient all-reduce and other collective operations across multiple TPU cores, which are used in `TPUStrategy`.
#
# Here is how you would instantiate `TPUStrategy`:
#
# Note: To run this code in Colab, you should select TPU as the Colab runtime. See [TensorFlow TPU Guide](https://www.tensorflow.org/guide/tpu).
#
# ```
# cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
# tpu=tpu_address)
# tf.config.experimental_connect_to_cluster(cluster_resolver)
# tf.tpu.experimental.initialize_tpu_system(cluster_resolver)
# tpu_strategy = tf.distribute.TPUStrategy(cluster_resolver)
# ```
#
#
# The `TPUClusterResolver` instance helps locate the TPUs. In Colab, you don't need to specify any arguments to it.
#
# If you want to use this for Cloud TPUs:
# - You must specify the name of your TPU resource in the `tpu` argument.
# - You must initialize the tpu system explicitly at the *start* of the program. This is required before TPUs can be used for computation. Initializing the tpu system also wipes out the TPU memory, so it's important to complete this step first in order to avoid losing state.
# + [markdown] id="8Xc3gyo0Bejd"
# ### MultiWorkerMirroredStrategy
#
# `tf.distribute.MultiWorkerMirroredStrategy` is very similar to `MirroredStrategy`. It implements synchronous distributed training across multiple workers, each with potentially multiple GPUs. Similar to `tf.distribute.MirroredStrategy`, it creates copies of all variables in the model on each device across all workers.
#
# Here is the simplest way of creating `MultiWorkerMirroredStrategy`:
# + id="m3a_6ebbEjre"
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# + [markdown] id="bt94JBvhEr4s"
# `MultiWorkerMirroredStrategy` has two implementations for cross-device communications. `CommunicationImplementation.RING` is RPC-based and supports both CPU and GPU. `CommunicationImplementation.NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) and provides state-of-the-art performance on GPU, but it doesn't support CPU. `CollectiveCommunication.AUTO` defers the choice to TensorFlow.
#
# + [markdown] id="0JiImlw3F77E"
# One of the key differences to get multi worker training going, as compared to multi-GPU training, is the multi-worker setup. The `TF_CONFIG` environment variable is the standard way in TensorFlow to specify the cluster configuration to each worker that is part of the cluster. Learn more about [setting up TF_CONFIG](#TF_CONFIG).
# + [markdown] id="3ZLBhaP9NUNr"
# ### ParameterServerStrategy
# Parameter server training is a common data-parallel method to scale up model training on multiple machines. A parameter server training cluster consists of workers and parameter servers. Variables are created on parameter servers and they are read and updated by workers in each step. Please see the [parameter server training tutorial](../tutorials/distribute/parameter_server_training.ipynb) for details.
#
# TensorFlow 2 parameter server training uses a central-coordinator based architecture via the `tf.distribute.experimental.coordinator.ClusterCoordinator` class.
#
# In this implementation the `worker` and `parameter server` tasks run `tf.distribute.Server`s that listen for tasks from the coordinator. The coordinator creates resources, dispatches training tasks, writes checkpoints, and deals with task failures.
#
# In the program running on the coordinator, you will use a `ParameterServerStrategy` object to define a training step and use a `ClusterCoordinator` to dispatch training steps to remote workers. Here is the simplest way to create them:
#
# ```Python
# strategy = tf.distribute.experimental.ParameterServerStrategy(
# tf.distribute.cluster_resolver.TFConfigClusterResolver(),
# variable_partitioner=variable_partitioner)
# coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator(
# strategy)
# ```
#
# Note you will need to configure TF_CONFIG environment variable if you use `TFConfigClusterResolver`. It is similar to [TF_CONFIG](#TF_CONFIG) in `MultiWorkerMirroredStrategy` but has additional caveats.
#
# In TF 1, `ParameterServerStrategy` is available only with estimator via `tf.compat.v1.distribute.experimental.ParameterServerStrategy` symbol.
# + [markdown] id="E20tG21LFfv1"
# Note: This strategy is [`experimental`](https://www.tensorflow.org/guide/versions#what_is_not_covered) as it is currently under active development.
# + [markdown] id="45H0Wa8WKI8z"
# ### CentralStorageStrategy
# `tf.distribute.experimental.CentralStorageStrategy` does synchronous training as well. Variables are not mirrored, instead they are placed on the CPU and operations are replicated across all local GPUs. If there is only one GPU, all variables and operations will be placed on that GPU.
#
# Create an instance of `CentralStorageStrategy` by:
#
# + id="rtjZOyaoMWrP"
central_storage_strategy = tf.distribute.experimental.CentralStorageStrategy()
# + [markdown] id="KY1nJHNkMl7b"
# This will create a `CentralStorageStrategy` instance which will use all visible GPUs and CPU. Updates to variables on the replicas will be aggregated before being applied to variables.
# + [markdown] id="aAFycYUiNCUb"
# Note: This strategy is [`experimental`](https://www.tensorflow.org/guide/versions#what_is_not_covered) as it is currently a work in progress.
# + [markdown] id="t2XUdmIxKljq"
# ### Other strategies
#
# In addition to the above strategies, there are two other strategies which might be useful for prototyping and debugging when using `tf.distribute` APIs.
# + [markdown] id="UD5I1beTpc7a"
# #### Default Strategy
#
# Default strategy is a distribution strategy which is present when no explicit distribution strategy is in scope. It implements the `tf.distribute.Strategy` interface but is a pass-through and provides no actual distribution. For instance, `strategy.run(fn)` will simply call `fn`. Code written using this strategy should behave exactly as code written without any strategy. You can think of it as a "no-op" strategy.
#
# Default strategy is a singleton - and one cannot create more instances of it. It can be obtained using `tf.distribute.get_strategy()` outside any explicit strategy's scope (the same API that can be used to get the current strategy inside an explicit strategy's scope).
# + id="ibHleFOOmPn9"
default_strategy = tf.distribute.get_strategy()
# + [markdown] id="EkxPl_5ImLzc"
# This strategy serves two main purposes:
#
# * It allows writing distribution-aware library code unconditionally. For example, `tf.optimizer`s can use `tf.distribute.get_strategy()` and use that strategy for reducing gradients - it will always return a strategy object on which we can call the reduce API.
#
# + id="WECeRzUdT6bU"
# In optimizer or other library code
# Get currently active strategy
strategy = tf.distribute.get_strategy()
strategy.reduce("SUM", 1., axis=None) # reduce some values
# + [markdown] id="JURbH-pUT51B"
# * Similar to library code, it can be used to write end users' programs to work with and without distribution strategy, without requiring conditional logic. A sample code snippet illustrating this:
# + id="O4Vmae5jmSE6"
if tf.config.list_physical_devices('GPU'):
strategy = tf.distribute.MirroredStrategy()
else: # use default strategy
strategy = tf.distribute.get_strategy()
with strategy.scope():
# do something interesting
print(tf.Variable(1.))
# + [markdown] id="kTzsqN4lmJ0d"
# #### OneDeviceStrategy
# `tf.distribute.OneDeviceStrategy` is a strategy to place all variables and computation on a single specified device.
#
# ```
# strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
# ```
#
# This strategy is distinct from the default strategy in a number of ways. In default strategy, the variable placement logic remains unchanged when compared to running TensorFlow without any distribution strategy. But when using `OneDeviceStrategy`, all variables created in its scope are explicitly placed on the specified device. Moreover, any functions called via `OneDeviceStrategy.run` will also be placed on the specified device.
#
# Input distributed through this strategy will be prefetched to the specified device. In default strategy, there is no input distribution.
#
# Similar to the default strategy, this strategy could also be used to test your code before switching to other strategies which actually distribute to multiple devices/machines. This will exercise the distribution strategy machinery somewhat more than the default strategy, but not to the full extent of using `MirroredStrategy` or `TPUStrategy`, etc. If you want code that behaves as if there were no strategy, then use the default strategy.
# + [markdown] id="hQv1lm9UPDFy"
# So far you've seen the different strategies available and how you can instantiate them. The next few sections show the different ways in which you can use them to distribute your training.
# + [markdown] id="_mcuy3UhPcen"
# ## Using `tf.distribute.Strategy` with `tf.keras.Model.fit`
#
# `tf.distribute.Strategy` is integrated into `tf.keras` which is TensorFlow's implementation of the
# [Keras API specification](https://keras.io). `tf.keras` is a high-level API to build and train models. By integrating into the `tf.keras` backend, we've made it seamless for you to distribute your training code written in the Keras training framework using `model.fit`.
#
# Here's what you need to change in your code:
#
# 1. Create an instance of the appropriate `tf.distribute.Strategy`.
# 2. Move the creation of Keras model, optimizer and metrics inside `strategy.scope`.
#
# We support all types of Keras models - sequential, functional and subclassed.
#
# Here is a snippet of code to do this for a very simple Keras model with one dense layer:
# + id="gbbcpzRnPZ6V"
mirrored_strategy = tf.distribute.MirroredStrategy()
# TODO 2 - Your code goes here.
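# One possible completion (a sketch, not the official solution): create the model
# inside the strategy scope so that its variables become mirrored variables.
with mirrored_strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])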
model.compile(loss='mse', optimizer='sgd')
# + [markdown] id="773EOxCRVlTg"
# This example uses `MirroredStrategy` so you can run this on a machine with multiple GPUs. `strategy.scope()` indicates to Keras which strategy to use to distribute the training. Creating models/optimizers/metrics inside this scope allows us to create distributed variables instead of regular variables. Once this is set up, you can fit your model like you would normally. `MirroredStrategy` takes care of replicating the model's training on the available GPUs, aggregating gradients, and more.
# + id="ZMmxEFRTEjH5"
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(10)
model.fit(dataset, epochs=2)
model.evaluate(dataset)
# + [markdown] id="nofTLwyXWHK8"
# Here a `tf.data.Dataset` provides the training and eval input. You can also use numpy arrays:
# + id="Lqgd9SdxW5OW"
import numpy as np
inputs, targets = np.ones((100, 1)), np.ones((100, 1))
model.fit(inputs, targets, epochs=2, batch_size=10)
# + [markdown] id="IKqaj7QwX0Zb"
# In both cases (dataset or numpy), each batch of the given input is divided equally among the multiple replicas. For instance, if using `MirroredStrategy` with 2 GPUs, each batch of size 10 will get divided among the 2 GPUs, with each receiving 5 input examples in each step. Each epoch will then train faster as you add more GPUs. Typically, you would want to increase your batch size as you add more accelerators so as to make effective use of the extra computing power. You will also need to re-tune your learning rate, depending on the model. You can use `strategy.num_replicas_in_sync` to get the number of replicas.
# + id="quNNTytWdGBf"
# Compute global batch size using number of replicas.
BATCH_SIZE_PER_REPLICA = 5
global_batch_size = (BATCH_SIZE_PER_REPLICA *
mirrored_strategy.num_replicas_in_sync)
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100)
dataset = dataset.batch(global_batch_size)
LEARNING_RATES_BY_BATCH_SIZE = {5: 0.1, 10: 0.15}
learning_rate = LEARNING_RATES_BY_BATCH_SIZE[global_batch_size]
# + [markdown] id="z1Muy0gDZwO5"
# ### What's supported now?
#
#
# | Training API | MirroredStrategy | TPUStrategy | MultiWorkerMirroredStrategy | ParameterServerStrategy | CentralStorageStrategy |
# |---------------- |--------------------- |----------------------- |----------------------------------- |----------------------------------- |--------------------------- |
# | Keras APIs | Supported | Supported | Experimental support | Experimental support | Experimental support |
#
# ### Examples and Tutorials
#
# Here is a list of tutorials and examples that illustrate the above integration end to end with Keras:
#
# 1. [Tutorial](https://www.tensorflow.org/tutorials/distribute/keras) to train MNIST with `MirroredStrategy`.
# 2. [Tutorial](https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras) to train MNIST using `MultiWorkerMirroredStrategy`.
# 3. [Guide](https://www.tensorflow.org/guide/tpu#train_a_model_using_keras_high_level_apis) on training MNIST using `TPUStrategy`.
# 4. [Tutorial](https://www.tensorflow.org/tutorials/distribute/parameter_server_training) for parameter server training in TensorFlow 2 with `ParameterServerStrategy`.
# 5. TensorFlow Model Garden [repository](https://github.com/tensorflow/models/tree/master/official) containing collections of state-of-the-art models implemented using various strategies.
# + [markdown] id="IlYVC0goepdk"
# ## Using `tf.distribute.Strategy` with custom training loops
# As you've seen, using `tf.distribute.Strategy` with Keras `model.fit` requires changing only a couple lines of your code. With a little more effort, you can also use `tf.distribute.Strategy` with custom training loops.
#
# If you need more flexibility and control over your training loops than is possible with Estimator or Keras, you can write custom training loops. For instance, when using a GAN, you may want to take a different number of generator or discriminator steps each round. Similarly, the high level frameworks are not very suitable for Reinforcement Learning training.
#
# The `tf.distribute.Strategy` classes provide a core set of methods to support custom training loops. Using these may require minor restructuring of the code initially, but once that is done, you should be able to switch between GPUs, TPUs, and multiple machines simply by changing the strategy instance.
#
# Here we will show a brief snippet illustrating this use case for a simple training example using the same Keras model as before.
#
# + [markdown] id="XNHvSY32nVBi"
# First, create the model and optimizer inside the strategy's scope. This ensures that any variables created with the model and optimizer are mirrored variables.
# + id="W-3Bn-CaiPKD"
with mirrored_strategy.scope():
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
optimizer = tf.keras.optimizers.SGD()
# + [markdown] id="mYkAyPeYnlXk"
# Next, create the input dataset and call `tf.distribute.Strategy.experimental_distribute_dataset` to distribute the dataset based on the strategy.
# + id="94BkvkLInkKd"
# TODO 3 - Your code goes here.
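# One possible completion (a sketch, not the official solution): build a simple
# dataset and let the strategy distribute it across replicas.
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(global_batch_size)
dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset)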
# + [markdown] id="grzmTlSvn2j8"
# Then, define one step of the training. Use `tf.GradientTape` to compute gradients and the optimizer to apply those gradients to update the model's variables. To distribute this training step, put it in a function `train_step` and pass it to `tf.distribute.Strategy.run` along with the dataset inputs you got from the `dist_dataset` created before:
# + id="NJxL5YrVniDe"
loss_object = tf.keras.losses.BinaryCrossentropy(
from_logits=True,
reduction=tf.keras.losses.Reduction.NONE)
def compute_loss(labels, predictions):
per_example_loss = loss_object(labels, predictions)
return tf.nn.compute_average_loss(per_example_loss, global_batch_size=global_batch_size)
def train_step(inputs):
features, labels = inputs
with tf.GradientTape() as tape:
predictions = model(features, training=True)
loss = compute_loss(labels, predictions)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
return loss
@tf.function
def distributed_train_step(dist_inputs):
per_replica_losses = mirrored_strategy.run(train_step, args=(dist_inputs,))
return mirrored_strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses,
axis=None)
# + [markdown] id="yRL5u_NLoTvq"
# A few other things to note in the code above:
#
# 1. It used `tf.nn.compute_average_loss` to compute the loss. `tf.nn.compute_average_loss` sums the per-example loss and divides the sum by the `global_batch_size`. This is important because later, after the gradients are calculated on each replica, they are aggregated across the replicas by **summing** them.
# 2. It used the `tf.distribute.Strategy.reduce` API to aggregate the results returned by `tf.distribute.Strategy.run`. `tf.distribute.Strategy.run` returns results from each local replica in the strategy, and there are multiple ways to consume this result. You can `reduce` them to get an aggregated value. You can also do `tf.distribute.Strategy.experimental_local_results` to get the list of values contained in the result, one per local replica.
# 3. When `apply_gradients` is called within a distribution strategy scope, its behavior is modified. Specifically, before applying gradients on each parallel instance during synchronous training, it performs a sum-over-all-replicas of the gradients.
#
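# + [markdown]
# For example, here is a quick sketch (not part of the original lab; it assumes `dist_dataset` and `train_step` from the cells above) of looking at the raw per-replica values instead of reducing them:
# +
one_batch = next(iter(dist_dataset))
per_replica_losses = mirrored_strategy.run(train_step, args=(one_batch,))
# a tuple with one loss tensor per local replica
print(mirrored_strategy.experimental_local_results(per_replica_losses))
# -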
# + [markdown] id="o9k_6-6vpQ-P"
# Finally, once you have defined the training step, we can iterate over `dist_dataset` and run the training in a loop:
# + id="Egq9eufToRf6"
for dist_inputs in dist_dataset:
print(distributed_train_step(dist_inputs))
# + [markdown] id="jK8eQXF_q1Zs"
# In the example above, you iterated over the `dist_dataset` to provide input to your training. We also provide the `tf.distribute.Strategy.make_experimental_numpy_dataset` to support numpy inputs. You can use this API to create a dataset before calling `tf.distribute.Strategy.experimental_distribute_dataset`.
#
# Another way of iterating over your data is to explicitly use iterators. You may want to do this when you want to run for a given number of steps as opposed to iterating over the entire dataset.
# The above iteration would now be modified to first create an iterator and then explicitly call `next` on it to get the input data.
#
# + id="e5BEvR0-LJAc"
iterator = iter(dist_dataset)
for _ in range(10):
print(distributed_train_step(next(iterator)))
# + [markdown] id="vDJO8mnypqBA"
# This covers the simplest case of using `tf.distribute.Strategy` API to distribute custom training loops.
# + [markdown] id="BZjNwCt1qBdw"
# ### What's supported now?
#
# | Training API | MirroredStrategy | TPUStrategy | MultiWorkerMirroredStrategy | ParameterServerStrategy | CentralStorageStrategy |
# |:----------------------- |:------------------- |:------------------- |:----------------------------- |:------------------------ |:------------------------- |
# | Custom Training Loop | Supported | Supported | Experimental support | Experimental support | Experimental support |
#
# ### Examples and Tutorials
# Here are some examples for using distribution strategy with custom training loops:
#
# 1. [Tutorial](https://www.tensorflow.org/tutorials/distribute/custom_training) to train MNIST using `MirroredStrategy`.
# 2. [Guide](https://www.tensorflow.org/guide/tpu#train_a_model_using_custom_training_loop) on training MNIST using `TPUStrategy`.
# 3. TensorFlow Model Garden [repository](https://github.com/tensorflow/models/tree/master/official) containing collections of state-of-the-art models implemented using various strategies.
#
# + [markdown] id="Xk0JdsTHyUnE"
# ## Other topics
#
# This section covers some topics that are relevant to multiple use cases.
# + [markdown] id="cP6BUIBtudRk"
# <a name="TF_CONFIG"></a>
# ### Setting up TF\_CONFIG environment variable
#
# For multi-worker training, as mentioned before, you need to set `TF_CONFIG` environment variable for each
# binary running in your cluster. The `TF_CONFIG` environment variable is a JSON string which specifies what
# tasks constitute a cluster, their addresses and each task's role in the cluster. The
# [tensorflow/ecosystem](https://github.com/tensorflow/ecosystem) repo provides a Kubernetes template which sets
# `TF_CONFIG` for your training tasks.
#
# There are two components of `TF_CONFIG`: `cluster` and `task`. `cluster` provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training, there is usually one worker that takes on a little more responsibility, such as saving checkpoints and writing summary files for TensorBoard, in addition to what a regular worker does. Such a worker is referred to as the 'chief' worker, and it is customary that the worker with index 0 is appointed as the chief worker (in fact, this is how `tf.distribute.Strategy` is implemented). `task`, on the other hand, provides information about the current task. The first component, `cluster`, is the same for all workers, while the second component, `task`, is different on each worker and specifies the type and index of that worker.
#
# One example of `TF_CONFIG` is:
# ```
# os.environ["TF_CONFIG"] = json.dumps({
# "cluster": {
# "worker": ["host1:port", "host2:port", "host3:port"],
# "ps": ["host4:port", "host5:port"]
# },
# "task": {"type": "worker", "index": 1}
# })
# ```
#
# + [markdown] id="fezd3aF8wj9r"
# This `TF_CONFIG` specifies that there are three workers and two ps tasks in the
# cluster along with their hosts and ports. The "task" part specifies the
# role of the current task in the cluster: worker 1 (the second worker). Valid roles in a cluster are
# "chief", "worker", "ps" and "evaluator". There should be no "ps" job except when using `tf.distribute.experimental.ParameterServerStrategy`.
# + [markdown] id="GXIbqSW-sFVg"
# ## What's next?
#
# `tf.distribute.Strategy` is actively under development. Try it out and provide your feedback using [GitHub issues](https://github.com/tensorflow/tensorflow/issues/new).
| courses/machine_learning/deepdive2/production_ml/labs/distributed_training_with_TF.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
from matplotlib import pyplot as plt
import math
# %matplotlib inline
def get_primes(n):
    # Sieve of Eratosthenes: start by marking every number from 2 to n as prime.
    is_prime = {i: True for i in range(2, n + 1)}
    i = 2
    while(i <= n):
        if is_prime[i]:
            # i is prime: cross out all of its multiples i*2, i*3, ..., up to n.
            m = n / i
            j = 2
            while(j <= m):
                is_prime[i * j] = False
                j += 1
        i += 1
    return is_prime
# %%time
first_n_primes = get_primes(1000000)
print(len(first_n_primes))  # size of the sieve table (n - 1 entries), not the number of primes
def plot_prime_number_function(n):
    # Plot the prime-counting function pi(x) as a step plot, together with the
    # approximation x / ln(x) from the prime number theorem.
    x = range(2, n)
    nb_of_primes_so_far = 0
    y = []
    for i in x:
        if first_n_primes[i]:
            nb_of_primes_so_far += 1
        y += [nb_of_primes_so_far]
    plt.step(x, y)
    plt.plot(x, [i / (math.log(i)) for i in x])
plot_prime_number_function(100)
plot_prime_number_function(1000)
plot_prime_number_function(100000)
| sieve_of_erastothenes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Evaluate different architecture
#
# Here we set up the infrastructure to evaluate different TaylorNets against Multi-Layer Perceptrons (MLPs) and ResNet.
#
# Import all necessary packages
import torch
from torch.utils.data.dataset import Dataset
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# ## Data generation (optional)
# First we generate the train and test data and save it to file
# +
## Generate the data and save to file
## The data are generated in the cube (-1,1) X (-1,1)
def separatrix(x1,x2):
return x2[:]*x2[:]+x1[:]*x1[:]-0.7
class DATA():
def __init__(self,n_sample=100,noise_strenght = 0.0):
self.label_xy = np.zeros((n_sample,3),dtype=float)
self.label_xy[:,1]=2*np.random.rand(n_sample)-1 # x is between -1 and 1
self.label_xy[:,2]=2*np.random.rand(n_sample)-1 # y is between -1 and 1
if(noise_strenght>0.0):
noise=np.random.randn(n_sample)*noise_strenght
self.label_xy[:,0] = separatrix(self.label_xy[:,1],self.label_xy[:,2]) +noise[:] > 0.0
else:
self.label_xy[:,0] = separatrix(self.label_xy[:,1],self.label_xy[:,2]) > 0.0
def visualize(self):
plt.scatter(self.label_xy[:,1],self.label_xy[:,2],c=self.label_xy[:,0])
def save(self,file_path):
np.save(file_path, self.label_xy, allow_pickle=False, fix_imports=False)
# -
train=DATA(5000,0.1)
test=DATA(1000,0.0)
test.save("./test_data.npy")
train.save("./train_data.npy")
test.visualize()
#test.visualize()
# ## Next we import the data into a dataloader (mandatory)
# +
class DataSet_for_regression(Dataset):
def __init__(self,file_path):
self.data = torch.FloatTensor(np.load(file_path))
def __getitem__(self, index):
sample=self.data[index,1:3] #extract the last 2 elements
label=self.data[index,0] # extract the first element
return (sample, label)
def __len__(self):
return self.data.shape[0]
train_data = DataSet_for_regression("./train_data.npy")
test_data = DataSet_for_regression("./test_data.npy")
BATCH= 10
USE_CUDA=False
kwargs = {'num_workers': 1, 'pin_memory': USE_CUDA}
train_loader = torch.utils.data.DataLoader(dataset=train_data,batch_size=BATCH,
shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(dataset=test_data,batch_size=BATCH,
shuffle=False, **kwargs)
# -
# ## Next we define the Multi-Layer Perceptron (Mandatory)
# +
import torch
from torch.autograd import Variable
import torch.nn.functional as F
import torchvision
from torchvision import transforms
from torchvision.utils import save_image
import torch.optim as optim
from torch import nn
import collections
def order_dict_from_list(list_of_dims):
OD = collections.OrderedDict()
for i in range(len(list_of_dims)-2):
in_dim=list_of_dims[i]
out_dim=list_of_dims[i+1]
name_layer = "layer"+str(i+1)
name_activation = "relu"+str(i+1)
OD[name_layer] = torch.nn.Linear(in_dim,out_dim)
OD[name_activation] = nn.ReLU()
# Add the last layer
in_dim=list_of_dims[-2]
out_dim=list_of_dims[-1]
name_layer = "output"
name_activation = "sigmoid"
OD[name_layer] = torch.nn.Linear(in_dim,out_dim)
OD[name_activation] = nn.Sigmoid()
return OD
# Define the Multi-Layer Perceptron
class MLP(torch.nn.Module):
def __init__(self,input_dim,hd1,hd2,output_dim):
super().__init__()
# Auxiliary variables
self.loss_history = []
self.epoch = 0
self.linear1 = torch.nn.Linear(input_dim,hd1)
self.linear2 = torch.nn.Linear(hd1,hd2)
self.linear3 = torch.nn.Linear(hd2,output_dim)
def forward(self,x):
x = F.relu(self.linear1(x))
x = F.relu(self.linear2(x))
x = torch.sigmoid(self.linear3(x))
xout = x.view(x.shape[0])
return xout
def specify_loss(self,criterion):
self.criterion = criterion
def specify_optimizer(self,optimizer):
self.optimizer=optimizer
def save_everything(self,filepath):
state={
'state_dict': self.state_dict(),
'loss_history': self.loss_history,
'epoch': self.epoch
}
if(self.optimizer != None):
state['optimizer_name'] = self.optimizer
state['optimizer_dict'] = self.optimizer.state_dict()
if(self.criterion != None):
state['criterion_name'] = self.criterion
state['criterion_dict'] = self.criterion.state_dict()
torch.save(state,filepath)
def load_everything(self,filepath):
state = torch.load(filepath,map_location='cpu')
self.load_state_dict(state['state_dict'])
self.loss_history = state['loss_history']
self.epoch = state['epoch']
if 'optimizer_name' in state:
self.optimizer = state['optimizer_name']
self.optimizer.load_state_dict(state['optimizer_dict'])
if 'criterion_name' in state:
self.criterion = state['criterion_name']
self.criterion.load_state_dict(state['criterion_dict'])
def compute_loss(self, output, labels):
loss = self.criterion(output,labels)
return loss
def train_one_epoch(self,trainloader):
tmp = []
for i, data in enumerate(trainloader, 0): #loop over minibatches
xin, labels = data
xout = self.forward(Variable(xin))
loss = self.compute_loss(xout, labels)
tmp.append(loss.item()) # add the loss to the tmp list
# For each minibatch set the gradient to zero
self.optimizer.zero_grad()
loss.backward() # do backprop and compute all the gradients
self.optimizer.step() # update the parameters
        # Save the average loss during the epoch and the final value at the end of the epoch
self.loss_history.append(np.mean(tmp))
self.epoch += 1
def test_accuracy(self,data_loader):
with torch.no_grad():
correct=0
total=0
for i, data in enumerate(data_loader, 0): #loop over minibatches
xin, labels = data
xout = self.forward(Variable(xin))
xout = xout>0.5
ls = labels > 0.5
count=torch.sum(torch.eq(ls,xout)).item()
correct += count
total += ls.shape[0]
return float(correct)/total
input_DIM=2
hd1=6
hd2=6
output_DIM=1
mlp=MLP(input_DIM,hd1,hd2,output_DIM)
mlp.specify_loss(nn.BCELoss(reduction='elementwise_mean'))
mlp.specify_optimizer(optim.Adam(mlp.parameters(), lr=0.0001))
data = next(iter(train_loader))
xin, labels = data
xout = mlp(xin)
# +
if(mlp.epoch > 100):
mlp.load_everything('./mlp.pth')
for _ in range(0,200):
mlp.train_one_epoch(train_loader)
if(mlp.epoch % 10==0):
accuracy=mlp.test_accuracy(test_loader)
print("[Epoch %3d] loss: %.4f accuracy: %.4f" % (mlp.epoch,mlp.loss_history[-1],accuracy))
mlp.save_everything('./mlp.pth')
plt.plot(mlp.loss_history)
plt.xlabel("epoch")
plt.ylabel("Loss")
# -
# ## Visualize the domain boundary and the test data
# +
# visualize the decision boundary
xgrid, ygrid = np.mgrid[-1:1.1:0.1, -1.0:1.1:0.1]
xygrid = np.array([xgrid, ygrid]).transpose(1,2,0)
myinput=torch.Tensor(xygrid.reshape(-1,2))
# print(myinput.shape)
with torch.no_grad():
pp=mlp(myinput)
probs=pp.numpy().reshape(xgrid.shape)
f, ax = plt.subplots(figsize=(8, 6))
contour = ax.contourf(xgrid, ygrid, probs, [0,0.5,1.0], cmap="RdBu", vmin=0, vmax=1)
ax_c = f.colorbar(contour)
ax_c.set_label("$P(y = 1)$")
ax_c.set_ticks([0, .25, .5, .75, 1])
test=np.load("./test_data.npy")
ax.scatter(test[:,1],test[:,2],c=test[:,0], s=50,
cmap="RdBu", vmin=-.2, vmax=1.2,
edgecolor="white", linewidth=2)
ax.set(aspect="equal",
xlim=(-1, 1), ylim=(-1, 1),
xlabel="$X_1$", ylabel="$X_2$")
# -
# ## TaylorNet (Stephen's random thoughts)
# +
from TaylorNet import TaylorNet
# Define the Multi-Layer Perceptron with TaylorNets
class Taylor_MLP(torch.nn.Module):
def __init__(self,input_dim,output_dim):
super().__init__()
# Auxiliary variables
self.loss_history = []
self.epoch = 0
self.tnet = TaylorNet(2,3)
self.linear = torch.nn.Linear(3,1)
def forward(self,x):
x = self.tnet(x)
x = torch.sigmoid(self.linear(x))
xout = x.view(x.shape[0])
return xout
def specify_loss(self,criterion):
self.criterion = criterion
def specify_optimizer(self,optimizer):
self.optimizer=optimizer
def save_everything(self,filepath):
state={
'state_dict': self.state_dict(),
'loss_history': self.loss_history,
'epoch': self.epoch
}
if(self.optimizer != None):
state['optimizer_name'] = self.optimizer
state['optimizer_dict'] = self.optimizer.state_dict()
if(self.criterion != None):
state['criterion_name'] = self.criterion
state['criterion_dict'] = self.criterion.state_dict()
torch.save(state,filepath)
def load_everything(self,filepath):
state = torch.load(filepath,map_location='cpu')
self.load_state_dict(state['state_dict'])
self.loss_history = state['loss_history']
self.epoch = state['epoch']
if 'optimizer_name' in state:
self.optimizer = state['optimizer_name']
self.optimizer.load_state_dict(state['optimizer_dict'])
if 'criterion_name' in state:
self.criterion = state['criterion_name']
self.criterion.load_state_dict(state['criterion_dict'])
def compute_loss(self, output, labels):
loss = self.criterion(output,labels)
return loss
def train_one_epoch(self,trainloader):
tmp = []
for i, data in enumerate(trainloader, 0): #loop over minibatches
xin, labels = data
xout = self.forward(Variable(xin))
loss = self.compute_loss(xout, labels)
tmp.append(loss.item()) # add the loss to the tmp list
# For each minibatch set the gradient to zero
self.optimizer.zero_grad()
loss.backward() # do backprop and compute all the gradients
self.optimizer.step() # update the parameters
        # Save the average loss during the epoch and the final value at the end of the epoch
self.loss_history.append(np.mean(tmp))
self.epoch += 1
def test_accuracy(self,data_loader):
with torch.no_grad():
correct=0
total=0
for i, data in enumerate(data_loader, 0): #loop over minibatches
xin, labels = data
xout = self.forward(Variable(xin))
xout = xout>0.5
ls = labels > 0.5
count=torch.sum(torch.eq(ls,xout)).item()
correct += count
total += ls.shape[0]
return float(correct)/total
input_DIM=2
output_DIM=1
tmlp=Taylor_MLP(input_DIM,output_DIM)
tmlp.specify_loss(nn.BCELoss(reduction='elementwise_mean'))
tmlp.specify_optimizer(optim.Adam(tmlp.parameters(), lr=0.001))
data = next(iter(train_loader))
xin, labels = data
xout = tmlp(xin)
# +
for _ in range(0,50):
tmlp.train_one_epoch(train_loader)
if(tmlp.epoch % 10==0):
accuracy=tmlp.test_accuracy(test_loader)
print("[Epoch %3d] loss: %.4f accuracy: %.4f" % (tmlp.epoch,tmlp.loss_history[-1],accuracy))
plt.plot(tmlp.loss_history)
plt.xlabel("epoch")
plt.ylabel("Loss")
# +
# visualize the decision boundary
xgrid, ygrid = np.mgrid[-1:1.1:0.1, -1.0:1.1:0.1]
xygrid = np.array([xgrid, ygrid]).transpose(1,2,0)
myinput=torch.Tensor(xygrid.reshape(-1,2))
# print(myinput.shape)
with torch.no_grad():
pp=tmlp(myinput)
probs=pp.numpy().reshape(xgrid.shape)
f, ax = plt.subplots(figsize=(8, 6))
contour = ax.contourf(xgrid, ygrid, probs, [0,0.5,1.0], cmap="RdBu", vmin=0, vmax=1)
ax_c = f.colorbar(contour)
ax_c.set_label("$P(y = 1)$")
ax_c.set_ticks([0, .25, .5, .75, 1])
test=np.load("./test_data.npy")
ax.scatter(test[:,1],test[:,2],c=test[:,0], s=50,
cmap="RdBu", vmin=-.2, vmax=1.2,
edgecolor="white", linewidth=2)
ax.set(aspect="equal",
xlim=(-1, 1), ylim=(-1, 1),
xlabel="$X_1$", ylabel="$X_2$")
# -
| Multy_Layer_Perceptron.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Import the necessary packages.
from Images2Movies_module import *
# Variables
create_all_videos = 0 # [bool] --> When 0, asks for the Well directory. When 1, asks for the directory containing the parents.
# A function to allow the user to select the folder containing the subfolders of images.
# Function input arg 1: create_all_videos [bool] --> When 0, asks for the Well directory. When 1, asks for the parent directory (contains the village folder).
# Function input arg 2: test [bool] --> When 1, will change the gui title to that of the test gui.
# Function output 1: The path of the folder selected by the user.
selected_directory = folder_selection_dialog(create_all_videos = create_all_videos,
                                             test = 0)
# A function to organise the creation of movies.
# Function input arg 1: selected_directory [string] --> The well or village directory, as previously selected.
# Function input arg 2: create_all_videos [bool] --> When 0, creates individual videos from the well directory. When 1, considers every well directory and makes videos for all of them.
# Function input arg 3: file_type [string] --> The image file type which is searched for to create the movies.
# Function input arg 4: frame_rate [int] --> Desired frame rate. !!!SET TO 0 IF YOU USE movie_time or subsampling_rate!!!
# Function input arg 5: movie_time [int] --> Desired movie length (min). !!!SET TO 0 IF YOU USE frame_rate or subsampling_rate!!!
# Function input arg 6: movie_extension [string] --> Your desired movie file extension. Tested for .avi and .mp4.
# Function input arg 7: video_width [int] --> The desired video width (the height will be altered proportionally).
# Function input arg 8: subsampling_rate [float] --> A variable to be used when you want the biggest movie to be of a particular length, and all other videos to be subsampled proportionally. !!!SET TO 0 WHEN USING frame_rate or movie_time!!! When set above 0, ensure frame_rate=0 and movie_time=0. Calculated with: (50 * desired time for biggest movie in seconds) / number of images in biggest movie.
# Function output 1: The movie will be saved to 'selected_directory'.
create_movie(selected_directory,
create_all_videos = create_all_videos,
file_type = '.JPG',
frame_rate = 0,
movie_time = 5,
movie_extension = '.mp4',
video_width = 1920,
subsampling_rate = 0)
| Images2Movies/RUNME.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="3lkHZaiYCYt0" colab_type="text"
# # Generative Adversarial Network
#
# In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
#
# GANs were [first reported on](https://arxiv.org/abs/1406.2661) in 2014 by <NAME> and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
#
# * [Pix2Pix](https://affinelayer.com/pixsrv/)
# * [CycleGAN & Pix2Pix in PyTorch, Jun-Yan Zhu](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix)
# * [A list of generative models](https://github.com/wiseodd/generative-models)
#
# The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes "fake" data to pass to the discriminator. The discriminator also sees real training data and predicts if the data it's received is real or fake.
# > * The generator is trained to fool the discriminator, it wants to output data that looks _as close as possible_ to real, training data.
# * The discriminator is a classifier that is trained to figure out which data is real and which is fake.
#
# What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.
#
# <img src='https://github.com/DayrisRM/deep-learning-v2-pytorch/blob/master/gan-mnist/assets/gan_pipeline.png?raw=1' width=70% />
#
# The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector that the generator uses to construct its fake images. This is often called a **latent vector** and that vector space is called **latent space**. As the generator trains, it figures out how to map latent vectors to recognizable images that can fool the discriminator.
#
# If you're interested in generating only new images, you can throw out the discriminator after training. In this notebook, I'll show you how to define and train these adversarial networks in PyTorch and generate new images!
# + id="FX-iDtXvCYt1" colab_type="code" colab={}
# %matplotlib inline
import numpy as np
import torch
import matplotlib.pyplot as plt
# + id="UcojG2C2CYt7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 352, "referenced_widgets": ["d9b5fa45561541cfb8d5700e67104925", "291d4235ac604ad698815398d38f1544", "4c87c4ba72f24a7d9ea7cee602842a42", "6b393187152a4d1aadde2f55c43ffe71", "0adf9e734631463cb701f9085c7f246f", "21903e447aa04c9480fa497dd35afa1f", "30168317563049659c5f7d3f9a61f65f", "100dda47566740729e1c2a72e971ad87", "f009e956981c44f99d0c4ab8a75b2b12", "c82996a44f9e4e64a0ea474fcae74bd0", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "04da5654661f4d2aaad7ed8b88af8529", "<KEY>", "0e809edd1bd04f4ea4c6d3698d58e81e", "9c62ec79f4fb41c1beeb042015ada131", "9a6a170c78604bbeaa4a01b1e9d9413f", "<KEY>", "<KEY>", "<KEY>", "48375a396ba54fa48e564e89ed15fbd2", "<KEY>", "<KEY>", "a6805278b4a8411ea9f79f5b7bc5535e", "<KEY>", "fd144992d7da49ac88e99f9dc62b7faf", "f347b03a6f364ba680406a740f0da3f1", "<KEY>", "fbd9f13e1e9b492a978006a6170bbfbe"]} outputId="3c296052-8820-4069-a41c-aad3074b8de1"
from torchvision import datasets
import torchvision.transforms as transforms
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 64
# convert data to torch.FloatTensor
transform = transforms.ToTensor()
# get the training datasets
train_data = datasets.MNIST(root='data', train=True,
download=True, transform=transform)
# prepare data loader
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
num_workers=num_workers)
# + [markdown] id="XlGvonNDCYt_" colab_type="text"
# ### Visualize the data
# + id="IG7evJATCYt_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 229} outputId="97657f14-6787-4d41-8d7f-bd0427030f30"
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy()
# get one image from the batch
img = np.squeeze(images[0])
fig = plt.figure(figsize = (3,3))
ax = fig.add_subplot(111)
ax.imshow(img, cmap='gray')
# + [markdown] id="jAPv1Kn5CYuE" colab_type="text"
# ---
# # Define the Model
#
# A GAN is comprised of two adversarial networks, a discriminator and a generator.
# + [markdown] id="Nx_mW-gOCYuF" colab_type="text"
# ## Discriminator
#
# The discriminator network is going to be a pretty typical linear classifier. To make this network a universal function approximator, we'll need at least one hidden layer, and these hidden layers should have one key attribute:
# > All hidden layers will have a [Leaky ReLu](https://pytorch.org/docs/stable/nn.html#torch.nn.LeakyReLU) activation function applied to their outputs.
#
# <img src='https://github.com/DayrisRM/deep-learning-v2-pytorch/blob/master/gan-mnist/assets/gan_network.png?raw=1' width=70% />
#
# #### Leaky ReLu
#
# We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
#
# <img src='https://github.com/DayrisRM/deep-learning-v2-pytorch/blob/master/gan-mnist/assets/leaky_relu.png?raw=1' width=40% />
#
# #### Sigmoid Output
#
# We'll also take the approach of using a more numerically stable loss function on the outputs. Recall that we want the discriminator to output a value 0-1 indicating whether an image is _real or fake_.
# > We will ultimately use [BCEWithLogitsLoss](https://pytorch.org/docs/stable/nn.html#bcewithlogitsloss), which combines a `sigmoid` activation function **and** binary cross entropy loss in one function.
#
# So, our final output layer should not have any activation function applied to it.
# + id="E5022aGpCYuF" colab_type="code" colab={}
import torch.nn as nn
import torch.nn.functional as F
class Discriminator(nn.Module):
def __init__(self, input_size, hidden_dim, output_size):
super(Discriminator, self).__init__()
# define all layers
# define hidden linear layers
self.fc1 = nn.Linear(input_size, hidden_dim*4)
self.fc2 = nn.Linear(hidden_dim*4, hidden_dim*2)
self.fc3 = nn.Linear(hidden_dim*2, hidden_dim)
#define fully-connected layer
self.fc4 = nn.Linear(hidden_dim, output_size)
#dropout layer
self.dropout = nn.Dropout(0.3)
def forward(self, x):
# flatten image
x = x.view(-1, 28*28)
#all hidden layers
x = F.leaky_relu(self.fc1(x), 0.2)
x = self.dropout(x)
x = F.leaky_relu(self.fc2(x), 0.2)
x = self.dropout(x)
x = F.leaky_relu(self.fc3(x), 0.2)
x = self.dropout(x)
#final layer
out = self.fc4(x)
return out
# + [markdown] id="3fV7MVyDCYuK" colab_type="text"
# ## Generator
#
# The generator network will be almost exactly the same as the discriminator network, except that we're applying a [tanh activation function](https://pytorch.org/docs/stable/nn.html#tanh) to our output layer.
#
# #### tanh Output
# The generator has been found to perform the best with $tanh$ for the generator output, which scales the output to be between -1 and 1, instead of 0 and 1.
#
# <img src='https://github.com/DayrisRM/deep-learning-v2-pytorch/blob/master/gan-mnist/assets/tanh_fn.png?raw=1' width=40% />
#
# Recall that we also want these outputs to be comparable to the *real* input pixel values, which are read in as normalized values between 0 and 1.
# > So, we'll also have to **scale our real input images to have pixel values between -1 and 1** when we train the discriminator.
#
# I'll do this in the training loop, later on.
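# + [markdown]
# As a quick illustration (a sketch; the helper name `scale` and its `feature_range` argument are illustrative and not defined anywhere above), such a rescaling could look like:
# +
def scale(x, feature_range=(-1, 1)):
    # rescale a tensor from the range [0, 1] to feature_range (default [-1, 1])
    min_v, max_v = feature_range
    return x * (max_v - min_v) + min_v
# -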
# + id="MkV0nIYRCYuL" colab_type="code" colab={}
class Generator(nn.Module):
def __init__(self, input_size, hidden_dim, output_size):
super(Generator, self).__init__()
# define hidden linear layers
self.fc1 = nn.Linear(input_size, hidden_dim)
self.fc2 = nn.Linear(hidden_dim, hidden_dim*2)
self.fc3 = nn.Linear(hidden_dim*2, hidden_dim*4)
#final fully-connected layer
self.fc4 = nn.Linear(hidden_dim*4, output_size)
#dropout layer
self.dropout = nn.Dropout(0.3)
def forward(self, x):
#all hidden layers
x = F.leaky_relu(self.fc1(x), 0.2)
x = self.dropout(x)
x = F.leaky_relu(self.fc2(x), 0.2)
x = self.dropout(x)
x = F.leaky_relu(self.fc3(x), 0.2)
x = self.dropout(x)
#final layer with tanh
out = F.tanh(self.fc4(x))
return out
# + [markdown] id="4tJtNyAbCYuP" colab_type="text"
# ## Model hyperparameters
# + id="k6ToIEDLCYuP" colab_type="code" colab={}
# Discriminator hyperparams
# Size of input image to discriminator (28*28)
input_size = 784
# Size of discriminator output (real or fake)
d_output_size = 1
# Size of *last* hidden layer in the discriminator
d_hidden_size = 32
# Generator hyperparams
# Size of latent vector to give to generator
z_size = 100
# Size of generator output (generated image)
g_output_size = 784
# Size of *first* hidden layer in the generator
g_hidden_size = 32
# + [markdown] id="Y9tTzqzsCYuU" colab_type="text"
# ## Build complete network
#
# Now we're instantiating the discriminator and generator from the classes defined above. Make sure you've passed in the correct input arguments.
# + id="meKElmK7CYuU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 272} outputId="b75018e4-4ccf-4b64-ea8a-c12325978bec"
# instantiate discriminator and generator
D = Discriminator(input_size, d_hidden_size, d_output_size)
G = Generator(z_size, g_hidden_size, g_output_size)
# check that they are as you expect
print(D)
print()
print(G)
# + [markdown] id="jVjEf6JUCYud" colab_type="text"
# ---
# ## Discriminator and Generator Losses
#
# Now we need to calculate the losses.
#
# ### Discriminator Losses
#
# > * For the discriminator, the total loss is the sum of the losses for real and fake images, `d_loss = d_real_loss + d_fake_loss`.
# * Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
#
# <img src='https://github.com/DayrisRM/deep-learning-v2-pytorch/blob/master/gan-mnist/assets/gan_pipeline.png?raw=1' width=70% />
#
# The losses will be binary cross entropy loss with logits, which we can get with [BCEWithLogitsLoss](https://pytorch.org/docs/stable/nn.html#bcewithlogitsloss). This combines a `sigmoid` activation function **and** binary cross entropy loss in one function.
#
# For the real images, we want `D(real_images) = 1`. That is, we want the discriminator to classify the real images with a label = 1, indicating that these are real. To help the discriminator generalize better, the labels are **reduced a bit from 1.0 to 0.9**. For this, we'll use the parameter `smooth`; if True, then we should smooth our labels. In PyTorch, this looks like `labels = torch.ones(size) * 0.9`.
#
# The discriminator loss for the fake data is similar. We want `D(fake_images) = 0`, where the fake images are the _generator output_, `fake_images = G(z)`.
#
# ### Generator Loss
#
# The generator loss will look similar only with flipped labels. The generator's goal is to get `D(fake_images) = 1`. In this case, the labels are **flipped** to represent that the generator is trying to fool the discriminator into thinking that the images it generates (fakes) are real!
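#
# Putting the two sides together, here is a sketch of how the helper functions defined below get used (the actual calls appear in the training loop later on):
#
# ```python
# # discriminator: real images should score 1 (smoothed), fakes should score 0
# d_loss = real_loss(D(real_images), smooth=True) + fake_loss(D(G(z)))
#
# # generator: flipped labels, we want D(G(z)) to be classified as real
# g_loss = real_loss(D(G(z)))
# ```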
# + id="7lkhnwNgCYue" colab_type="code" colab={}
# Calculate losses
def real_loss(D_out, smooth=False):
# compare logits to real labels
# smooth labels if smooth=True
batch_size = D_out.size(0)
if smooth:
labels = torch.ones(batch_size) * 0.9
else:
labels = torch.ones(batch_size)
criterion = nn.BCEWithLogitsLoss()
loss = criterion(D_out.squeeze(), labels)
return loss
def fake_loss(D_out):
# compare logits to fake labels
batch_size = D_out.size(0)
labels = torch.zeros(batch_size)
criterion = nn.BCEWithLogitsLoss()
loss = criterion(D_out.squeeze(), labels)
return loss
# + [markdown] id="lp6S9bVsCYuh" colab_type="text"
# ## Optimizers
#
# We want to update the generator and discriminator variables separately. So, we'll define two separate Adam optimizers.
# + id="ZQ5qEaSTCYuh" colab_type="code" colab={}
import torch.optim as optim
# learning rate for optimizers
lr = 0.002
# Create optimizers for the discriminator and generator
d_optimizer = optim.Adam(D.parameters(), lr)
g_optimizer = optim.Adam(G.parameters(), lr)
# + [markdown] id="5z4J4pPGCYuk" colab_type="text"
# ---
# ## Training
#
# Training will involve alternating between training the discriminator and the generator. We'll use our functions `real_loss` and `fake_loss` to help us calculate the discriminator losses in all of the following cases.
#
# ### Discriminator training
# 1. Compute the discriminator loss on real, training images
# 2. Generate fake images
# 3. Compute the discriminator loss on fake, generated images
# 4. Add up real and fake loss
# 5. Perform backpropagation + an optimization step to update the discriminator's weights
#
# ### Generator training
# 1. Generate fake images
# 2. Compute the discriminator loss on fake images, using **flipped** labels!
# 3. Perform backpropagation + an optimization step to update the generator's weights
#
# #### Saving Samples
#
# As we train, we'll also print out some loss statistics and save some generated "fake" samples.
# + id="qTkk44v0CYul" colab_type="code" colab={}
import pickle as pkl
# training hyperparams
num_epochs = 40
# keep track of loss and generated, "fake" samples
samples = []
losses = []
print_every = 400
# Get some fixed data for sampling. These are images that are held
# constant throughout training, and allow us to inspect the model's performance
sample_size=16
fixed_z = np.random.uniform(-1, 1, size=(sample_size, z_size))
fixed_z = torch.from_numpy(fixed_z).float()
# train the network
D.train()
G.train()
for epoch in range(num_epochs):
for batch_i, (real_images, _) in enumerate(train_loader):
batch_size = real_images.size(0)
## Important rescaling step ##
real_images = real_images*2 - 1 # rescale input images from [0,1) to [-1, 1)
# ============================================
# TRAIN THE DISCRIMINATOR
# ============================================
d_optimizer.zero_grad()
# 1. Train with real images
# Compute the discriminator losses on real images
# use smoothed labels
D_real = D(real_images)
d_real_loss = real_loss(D_real, smooth=True)
# 2. Train with fake images
# Generate fake images
z = np.random.uniform(-1, 1, size=(batch_size, z_size))
z = torch.from_numpy(z).float()
fake_images = G(z)
# Compute the discriminator losses on fake images
D_fake = D(fake_images)
d_fake_loss = fake_loss(D_fake)
# add up real and fake losses and perform backprop
d_loss = d_real_loss + d_fake_loss
d_loss.backward()
d_optimizer.step()
# =========================================
# TRAIN THE GENERATOR
# =========================================
g_optimizer.zero_grad()
# 1. Train with fake images and flipped labels
# Generate fake images
z = np.random.uniform(-1, 1, size=(batch_size, z_size))
z = torch.from_numpy(z).float()
fake_images = G(z)
# Compute the discriminator losses on fake images
# using flipped labels!
D_fake = D(fake_images)
g_loss = real_loss(D_fake)
# perform backprop
g_loss.backward()
g_optimizer.step()
# Print some loss stats
if batch_i % print_every == 0:
# print discriminator and generator loss
print('Epoch [{:5d}/{:5d}] | d_loss: {:6.4f} | g_loss: {:6.4f}'.format(
epoch+1, num_epochs, d_loss.item(), g_loss.item()))
## AFTER EACH EPOCH##
# append discriminator loss and generator loss
losses.append((d_loss.item(), g_loss.item()))
# generate and save sample, fake images
G.eval() # eval mode for generating samples
samples_z = G(fixed_z)
samples.append(samples_z)
G.train() # back to train mode
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
# + [markdown] id="KCgMQt-2CYur" colab_type="text"
# ## Training loss
#
# Here we'll plot the training losses for the generator and discriminator, recorded after each epoch.
# + id="oz8QWEGICYus" colab_type="code" colab={}
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
# + [markdown] id="88iMH-XwCYuv" colab_type="text"
# ## Generator samples from training
#
# Here we can view samples of images from the generator. First we'll look at the images we saved during training.
# + id="qv3liAyWCYuw" colab_type="code" colab={}
# helper function for viewing a list of passed in sample images
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
img = img.detach()
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
# + id="VGa9loUnCYuz" colab_type="code" colab={}
# Load samples from generator, taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
# + [markdown] id="a9YQPvoLCYu2" colab_type="text"
# These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make.
# + id="wdT0I-eJCYu3" colab_type="code" colab={}
# -1 indicates final epoch's samples (the last in the list)
view_samples(-1, samples)
# + [markdown] id="RJwpt_FwCYu6" colab_type="text"
# Below I'm showing the generated images as the network was training, sampled at regular intervals (every 4 epochs here, since we trained for 40 epochs and plot 10 rows).
# + id="NmvGxBoYCYu7" colab_type="code" colab={}
rows = 10 # split the 40 epochs into 10 rows, so 40/10 = every 4 epochs
cols = 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
img = img.detach()
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
# + [markdown] id="vHBvhJ_xCYu_" colab_type="text"
# It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number-like structures appear out of the noise, such as 1s and 9s.
# + [markdown] id="47CA_VMzCYvA" colab_type="text"
# ## Sampling from the generator
#
# We can also get completely new images from the generator by using the checkpoint we saved after training. **We just need to pass in a new latent vector $z$ and we'll get new samples**!
# + id="nTn7o5mJCYvA" colab_type="code" colab={}
# randomly generated, new latent vectors
sample_size=16
rand_z = np.random.uniform(-1, 1, size=(sample_size, z_size))
rand_z = torch.from_numpy(rand_z).float()
G.eval() # eval mode
# generated samples
rand_images = G(rand_z)
# 0 indicates the first set of samples in the passed in list
# and we only have one batch of samples, here
view_samples(0, [rand_images])
| gan-mnist/MNIST_GAN_Exercise.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:pyvizenv] *
# language: python
# name: conda-env-pyvizenv-py
# ---
# # LSTM Stock Predictor Using Closing Prices
#
# In this notebook, you will build and train a custom LSTM RNN that uses a 10 day window of Bitcoin closing prices to predict the 11th day closing price.
#
# You will need to:
#
# 1. Prepare the data for training and testing
# 2. Build and train a custom LSTM RNN
# 3. Evaluate the performance of the model
# ## Data Preparation
#
# In this section, you will need to prepare the training and testing data for the model. The model will use a rolling 10 day window to predict the 11th day closing price.
#
# You will need to:
# 1. Use the `window_data` function to generate the X and y values for the model (a small illustration of the windowing follows this list).
# 2. Split the data into 70% training and 30% testing
# 3. Apply the MinMaxScaler to the X and y values
# 4. Reshape the X_train and X_test data for the model. Note: The required input format for the LSTM is:
#
# ```python
# reshape((X_train.shape[0], X_train.shape[1], 1))
# ```
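#
# As a small, hypothetical illustration of the rolling-window idea from step 1 (a window of 3 on six made-up prices), the `window_data` function defined below produces:
#
# ```python
# prices = [10, 11, 12, 13, 14, 15]
# # window = 3
# # X = [[10, 11, 12], [11, 12, 13]]   # each row: 3 consecutive prices
# # y = [[13], [14]]                   # target: the price right after each window
# ```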
import numpy as np
import pandas as pd
import hvplot.pandas
# Set the random seed for reproducibility
# Note: This is for the homework solution, but it is good practice to comment this out and run multiple experiments to evaluate your model
from numpy.random import seed
seed(1)
from tensorflow import random
random.set_seed(2)
# Load the fear and greed sentiment data for Bitcoin
df = pd.read_csv('btc_sentiment.csv', index_col="date", infer_datetime_format=True, parse_dates=True)
df = df.drop(columns="fng_classification")
df.head()
# Load the historical closing prices for Bitcoin
df2 = pd.read_csv('btc_historic.csv', index_col="Date", infer_datetime_format=True, parse_dates=True)['Close']
df2 = df2.sort_index()
df2.tail()
# Join the data into a single DataFrame
df = df.join(df2, how="inner")
df.tail()
df.head()
# This function accepts the column number for the features (X) and the target (y)
# It chunks the data up with a rolling window of Xt-n to predict Xt
# It returns a numpy array of X and y
def window_data(df, window, feature_col_number, target_col_number):
X = []
y = []
for i in range(len(df) - window - 1):
features = df.iloc[i:(i + window), feature_col_number]
target = df.iloc[(i + window), target_col_number]
X.append(features)
y.append(target)
return np.array(X), np.array(y).reshape(-1, 1)
# +
# Predict Closing Prices using a 10 day window of previous closing prices
# Then, experiment with window sizes anywhere from 1 to 10 and see how the model performance changes
window_size = 10
# Column index 0 is the 'fng_value' column
# Column index 1 is the `Close` column
feature_column = 1
target_column = 1
X, y = window_data(df, window_size, feature_column, target_column)
# +
# Use 70% of the data for training and the remainder for testing
split = int(0.7 * len(X))
X_train = X[: split]
X_test = X[split:]
y_train = y[: split]
y_test = y[split:]
# +
from sklearn.preprocessing import MinMaxScaler
# Use the MinMaxScaler to scale data between 0 and 1.
x_train_scaler = MinMaxScaler()
x_test_scaler = MinMaxScaler()
y_train_scaler = MinMaxScaler()
y_test_scaler = MinMaxScaler()
# Fit the scaler for the Training Data
x_train_scaler.fit(X_train)
y_train_scaler.fit(y_train)
# Scale the training data
X_train = x_train_scaler.transform(X_train)
y_train = y_train_scaler.transform(y_train)
# Fit the scaler for the Testing Data
x_test_scaler.fit(X_test)
y_test_scaler.fit(y_test)
# Scale the y_test data
X_test = x_test_scaler.transform(X_test)
y_test = y_test_scaler.transform(y_test)
# -
# Reshape the features for the model
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_test = X_test.reshape((X_test.shape[0], X_test.shape[1], 1))
# ---
# ## Build and Train the LSTM RNN
#
# In this section, you will design a custom LSTM RNN and fit (train) it using the training data.
#
# You will need to:
# 1. Define the model architecture
# 2. Compile the model
# 3. Fit the model to the training data
#
# ### Hints:
# You will want to use the same model architecture and random seed for both notebooks. This is necessary to accurately compare the performance of the FNG model vs the closing price model.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
# +
# Build the LSTM model.
# The return sequences need to be set to True if you are adding additional LSTM layers, but
# You don't have to do this for the final layer.
# Note: The dropouts help prevent overfitting
# Note: The input shape is the number of time steps and the number of indicators
# Note: Batching inputs has a different input shape of Samples/TimeSteps/Features
# Define the LSTM RNN model.
model = Sequential()
# Initial model setup
number_units = 30
dropout_fraction = 0.2
# Layer 1
model.add(LSTM(
units=number_units,
return_sequences=True,
input_shape=(X_train.shape[1], 1))
)
model.add(Dropout(dropout_fraction))
# Layer 2
model.add(LSTM(units=number_units, return_sequences=True))
model.add(Dropout(dropout_fraction))
# Layer 3
model.add(LSTM(units=number_units))
model.add(Dropout(dropout_fraction))
# Output layer
model.add(Dense(1))
# +
# Compile the model
model.compile(optimizer="adam", loss="mean_squared_error")
# -
# Summarize the model
model.summary()
# +
# Train the model
# Use at least 10 epochs
# Do not shuffle the data
# Experiment with the batch size, but a smaller batch size is recommended
model.fit(X_train, y_train, epochs=10, shuffle=False, batch_size=90, verbose=1)
# -
# ---
# ## Model Performance
#
# In this section, you will evaluate the model using the test data.
#
# You will need to:
# 1. Evaluate the model using the `X_test` and `y_test` data.
# 2. Use the X_test data to make predictions
# 3. Create a DataFrame of Real (y_test) vs predicted values.
# 4. Plot the Real vs predicted values as a line chart
#
# ### Hints
# Remember to apply the `inverse_transform` function to the predicted and y_test values to recover the actual closing prices.
# Evaluate the model
model.evaluate(X_test, y_test, verbose=0)
# Make some predictions
predicted = model.predict(X_test)
# Recover the original prices instead of the scaled version
predicted_prices = y_test_scaler.inverse_transform(predicted)
real_prices = y_test_scaler.inverse_transform(y_test.reshape(-1, 1))
# Create a DataFrame of Real and Predicted values
stocks = pd.DataFrame({
"Real": real_prices.ravel(),
"Predicted": predicted_prices.ravel()
}, index = df.index[-len(real_prices): ])
stocks.head()
# Plot the real vs predicted values as a line chart
stocks.plot(title="Actual vs. Predicted BTC Closing Prices")
| Starter_Code/.ipynb_checkpoints/lstm_stock_predictor_closing-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7.6 64-bit
# metadata:
# interpreter:
# hash: df955ce39d0f31d56d4bb2fe0a613e5326ba60723fd33d8303a3aede8f65715c
# name: python3
# ---
# %reload_ext autoreload
# %autoreload 2
# %matplotlib inline
# +
from sklearn.preprocessing import OneHotEncoder, LabelEncoder, MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_boston
from sklearn.metrics import r2_score, accuracy_score
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
import pandas as pd
from torch.utils.data import DataLoader
from loguru import logger
import matplotlib.pyplot as plt
from nam.config import defaults
from nam.types import Config
from nam.utils.args import parse_args
from nam.data import NAMDataset
from nam.models import DNN, FeatureNN, NAM, get_num_units
from nam.engine import Engine
from nam.utils import graphing
from main import get_config
# +
config = get_config()
features_columns = ["income_2", "WP1219", "WP1220", "weo_gdpc_con_ppp"]
targets_column = ["WP16"]
weights_column = ["wgt"]
data = pd.read_csv('data/GALLUP.csv')
missing = data.isnull().sum()
print(missing)
data = data.fillna(method='ffill')
# +
X = np.array(data[features_columns])
y = np.array(data[targets_column])
scaler = MinMaxScaler()
X = scaler.fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=2137)
X_train = torch.from_numpy(X_train.astype('float32'))
X_test = torch.from_numpy(X_test.astype('float32'))
y_train = torch.from_numpy(y_train.reshape(-1, 1).astype('float32'))
y_test = torch.from_numpy(y_test.reshape(-1, 1).astype('float32'))
dataset_train = torch.utils.data.TensorDataset(X_train, y_train)
batch_size = 128
dataset_train = torch.utils.data.DataLoader(dataset_train, batch_size=batch_size, shuffle=True)
# +
class NeuralNetwork(nn.Module):
def __init__(self):
super().__init__()
self.layer1 = nn.Linear(4, 32)
self.layer2 = nn.Linear(32, 16)
self.layer3 = nn.Linear(16, 12)
self.layer4 = nn.Linear(12, 1)
self.dropout1 = nn.Dropout(0.1)
self.dropout2 = nn.Dropout(0.1)
self.dropout3 = nn.Dropout(0.1)
def forward(self, x):
x = F.relu(self.layer1(x))
x = self.dropout1(x)
x = F.relu(self.layer2(x))
x = self.dropout2(x)
x = F.relu(self.layer3(x))
x = self.dropout3(x)
x = self.layer4(x)
return x
nn_model = NeuralNetwork()
nn_model
# -
loss_obj = torch.nn.MSELoss()
optimizer = torch.optim.Adam(nn_model.parameters())
for epoch in range(100):
i = 0
for X, y in dataset_train:
optimizer.zero_grad()  # reset gradients every batch, not once per epoch
y_pred = nn_model(X)
loss = loss_obj(y_pred, y)
loss.backward()
optimizer.step()
i += 1
if not i % 1000:
print(loss.item())
y_pred = nn_model(X_test).detach().numpy()
y_true = y_test.detach().numpy()
print(y_true[:10])
print(y_pred[:10])
r2_score(y_true, y_pred)
| gallup_runs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="zX4Kg8DUTKWO" colab_type="code" colab={}
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + colab_type="code" id="BOwsuGQQY9OL" colab={}
import tensorflow as tf
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.layers import Embedding, LSTM, Dense, Bidirectional
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
import numpy as np
# + colab_type="code" id="PRnDnCW-Z7qv" colab={}
tokenizer = Tokenizer()
data="In the town of Athy one <NAME> \n Battered away til he hadnt a pound. \nHis father died and made him a man again \n Left him a farm and ten acres of ground. \nHe gave a grand party for friends and relations \nWho didnt forget him when come to the wall, \nAnd if youll but listen Ill make your eyes glisten \nOf the rows and the ructions of Lanigans Ball. \nMyself to be sure got free invitation, \nFor all the nice girls and boys I might ask, \nAnd just in a minute both friends and relations \nWere dancing round merry as bees round a cask. \n<NAME>, that nice little milliner, \nShe tipped me a wink for to give her a call, \nAnd I soon arrived with <NAME>Gilligan \nJust in time for Lanigans Ball. \nThere were lashings of punch and wine for the ladies, \nPotatoes and cakes; there was bacon and tea, \nThere were the Nolans, Dolans, OGradys \nCourting the girls and dancing away. \nSongs they went round as plenty as water, \nThe harp that once sounded in Taras old hall,\nSweet Nelly Gray and The Rat Catchers Daughter,\nAll singing together at Lanigans Ball. \nThey were doing all kinds of nonsensical polkas \nAll round the room in a whirligig. \nJulia and I, we banished their nonsense \nAnd tipped them the twist of a reel and a jig. \nAch mavrone, how the girls got all mad at me \nDanced til youd think the ceiling would fall. \nFor I spent three weeks at Brooks Academy \nLearning new steps for Lanigans Ball. \nThree long weeks I spent up in Dublin, \nThree long weeks to learn nothing at all,\n Three long weeks I spent up in Dublin, \nLearning new steps for Lanigans Ball. \nShe stepped out and I stepped in again, \nI stepped out and she stepped in again, \nShe stepped out and I stepped in again, \nLearning new steps for Lanigans Ball. \nBoys were all merry and the girls they were hearty \nAnd danced all around in couples and groups, \nTil an accident happened, young <NAME> \nPut his right leg through miss Finnertys hoops. \nPoor creature fainted and cried Meelia murther, \nCalled for her brothers and gathered them all. \nCarmody swore that hed go no further \nTil he had satisfaction at Lanigans Ball. \nIn the midst of the row miss Kerrigan fainted, \nHer cheeks at the same time as red as a rose. \nSome of the lads declared she was painted, \nShe took a small drop too much, I suppose. \nHer sweetheart, <NAME>, so powerful and able, \nWhen he saw his fair colleen stretched out by the wall, \nTore the left leg from under the table \nAnd smashed all the Chaneys at Lanigans Ball. \nBoys, oh boys, twas then there were runctions. \nMyself got a lick from big Phelim McHugh. \nI soon replied to his introduction \nAnd kicked up a terrible hullabaloo. \nOld Casey, the piper, was near being strangled. \nThey squeezed up his pipes, bellows, chanters and all. \nThe girls, in their ribbons, they got all entangled \nAnd that put an end to Lanigans Ball."
corpus = data.lower().split("\n")
tokenizer.fit_on_texts(corpus)
total_words = len(tokenizer.word_index) + 1
print(tokenizer.word_index)
print(total_words)
# + colab_type="code" id="soPGVheskaQP" colab={}
input_sequences = []
for line in corpus:
token_list = tokenizer.texts_to_sequences([line])[0]
for i in range(1, len(token_list)):
n_gram_sequence = token_list[:i+1]
input_sequences.append(n_gram_sequence)
# pad sequences
max_sequence_len = max([len(x) for x in input_sequences])
input_sequences = np.array(pad_sequences(input_sequences, maxlen=max_sequence_len, padding='pre'))
# create predictors and label
xs, labels = input_sequences[:,:-1],input_sequences[:,-1]
ys = tf.keras.utils.to_categorical(labels, num_classes=total_words)
# + colab_type="code" id="pJtwVB2NbOAP" colab={}
print(tokenizer.word_index['in'])
print(tokenizer.word_index['the'])
print(tokenizer.word_index['town'])
print(tokenizer.word_index['of'])
print(tokenizer.word_index['athy'])
print(tokenizer.word_index['one'])
print(tokenizer.word_index['jeremy'])
print(tokenizer.word_index['lanigan'])
# + colab_type="code" id="49Cv68JOakwv" colab={}
print(xs[6])
# + colab_type="code" id="iY-jwvfgbEF8" colab={}
print(ys[6])
# + colab_type="code" id="wtzlUMYadhKt" colab={}
print(xs[5])
print(ys[5])
# + colab_type="code" id="H4myRpB1c4Gg" colab={}
print(tokenizer.word_index)
# + colab_type="code" id="w9vH8Y59ajYL" colab={}
model = Sequential()
model.add(Embedding(total_words, 64, input_length=max_sequence_len-1))
model.add(Bidirectional(LSTM(20)))
model.add(Dense(total_words, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(xs, ys, epochs=500, verbose=1)
# + colab_type="code" id="3YXGelKThoTT" colab={}
import matplotlib.pyplot as plt
def plot_graphs(history, string):
plt.plot(history.history[string])
plt.xlabel("Epochs")
plt.ylabel(string)
plt.show()
# + colab_type="code" id="poeprYK8h-c7" colab={}
plot_graphs(history, 'accuracy')
# + colab_type="code" id="6Vc6PHgxa6Hm" colab={}
seed_text = "Laurence went to dublin"
next_words = 100
for _ in range(next_words):
token_list = tokenizer.texts_to_sequences([seed_text])[0]
token_list = pad_sequences([token_list], maxlen=max_sequence_len-1, padding='pre')
predicted = np.argmax(model.predict(token_list, verbose=0), axis=-1)  # predict_classes was removed in newer TF; take the argmax of predict instead
output_word = ""
for word, index in tokenizer.word_index.items():
if index == predicted:
output_word = word
break
seed_text += " " + output_word
print(seed_text)
| Week 2/NLP with Tensorflow/Week_4_Poetry.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="7znzR-dT0PQY" colab_type="text"
# # Cornell Box + Path Tracing
#
# Numpy implementation of path tracer.
# + [markdown] id="QlgXwQNraKtK" colab_type="text"
# ## Numpy Implementation
# + id="XUDOth2z0Mb-" colab_type="code" colab={}
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
# + id="juLnLW09yVq9" colab_type="code" colab={}
def length(p):
return np.linalg.norm(p, axis=1, keepdims=True)
def normalize(p):
return p/length(p)
# + id="dK63FtbP7sWU" colab_type="code" colab={}
def sdSphere(p,radius):
return length(p) - radius
# + id="IVxIyjt8XGNU" colab_type="code" colab={}
def udBox(p, b):
# b = half-widths
return length(np.maximum(np.abs(p)-b,0.0))
# + id="Kd0s1C7OL7ax" colab_type="code" colab={}
def rotateX(p,a):
c = np.cos(a); s = np.sin(a);
px,py,pz=p[:,0,None],p[:,1,None],p[:,2,None]
return np.concatenate([px,c*py-s*pz,s*py+c*pz],axis=1)
def rotateY(p,a):
c = np.cos(a); s = np.sin(a);
px,py,pz=p[:,0,None],p[:,1,None],p[:,2,None]
return np.concatenate([c*px+s*pz,py,-s*px+c*pz],axis=1)
def rotateZ(p,a):
c = np.cos(a); s = np.sin(a);
px,py,pz=p[:,0,None],p[:,1,None],p[:,2,None]
return np.concatenate([c*px-s*py,s*px+c*py,pz],axis=1)
# + id="XEmjAOlZOsiD" colab_type="code" colab={}
def opU(a,b):
# can this be implemented using tf.select?
# element-wise minimum (id,distance)
a_smaller = a[:,1] < b[:,1]
b[a_smaller,:] = a[a_smaller,:]
return b
# + id="V-5naOPZ7yG7" colab_type="code" colab={}
def clamp01(v):
# maybe we should use sigmoid instead of hard thresholding for nicer gradients / soft shadows?
return np.minimum(np.maximum(v,0.0),1.0)
def relu(a):
return np.maximum(a,0.)
def dot(a,b):
return np.sum(a*b,axis=1,keepdims=True)
# + id="7Q6WHGsusb5i" colab_type="code" colab={}
p = np.array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
# + id="I9U_U4GgshCb" colab_type="code" outputId="2550eb1f-60fd-4468-a54e-216d7dcef328" colab={"base_uri": "https://localhost:8080/", "height": 68}
length(p)
# + id="LzN3o5t1swVQ" colab_type="code" outputId="48833e1c-43f7-42f1-864e-20c0e41c7c6f" colab={"base_uri": "https://localhost:8080/", "height": 68}
udBox(p - np.array([[0,3.9,2]]), np.array([[.5,.01,.5]]))
# + id="7jlL0BzDNCQj" colab_type="code" colab={}
import pdb
# + id="BQnVrYY9s3Bp" colab_type="code" colab={}
# Discussion of cosine-weighted importance sampling for Lambertian BRDFs,
# http://www.rorydriscoll.com/2009/01/07/better-sampling/
# and implementation here:
# https://www.shadertoy.com/view/4tl3z4
def sampleCosineWeightedHemisphere(n):
u1 = np.random.uniform(low=0,high=1,size=(n.shape[0],1))
u2 = np.random.uniform(low=0,high=1,size=(n.shape[0],1))
uu = normalize(np.cross(n, np.array([[0.,1.,1.]])))
vv = np.cross(uu,n)
ra = np.sqrt(u2)
rx = ra*np.cos(2*np.pi*u1)
ry = ra*np.sin(2*np.pi*u1)
rz = np.sqrt(1.-u2)
rr = rx*uu+ry*vv+rz*n
return normalize(rr)
# + id="hy-um8rYMwD1" colab_type="code" outputId="4b52db9b-526e-4ac9-8fc3-6ca51f46f2f7" colab={"base_uri": "https://localhost:8080/", "height": 582}
# testing cosine-weighted sphere projection
nor = normalize(np.array([[1.,1.,0.]]))
nor = np.tile(nor,[1000,1])
rd = sampleCosineWeightedHemisphere(nor)
fig = plt.figure()
ax = fig.add_subplot(121, projection='3d')
ax.scatter(rd[:,0],rd[:,2],rd[:,1])
ax.set_xlabel('X')
ax.set_ylabel('Z')
ax.set_zlabel('Y')
ax.set_aspect('equal')
ax = fig.add_subplot(122)
ax.scatter(rd[:,0],rd[:,1])
ax.set_aspect('equal')
# + id="u4YdySjAHf_U" colab_type="code" colab={}
# object ids
OBJ_NONE=0.0
OBJ_FLOOR=0.1
OBJ_CEIL=.2
OBJ_WALL_RD=.3
OBJ_WALL_WH=.4
OBJ_WALL_GR=.5
OBJ_SHORT_BLOCK=.6
OBJ_TALL_BLOCK=.7
OBJ_LIGHT=1.0
OBJ_SPHERE=0.9
# + id="3kjjsudGUOHg" colab_type="code" colab={}
# helper fn for constructing distance fields with object ids
# will be better to refactor this later as its own struct
def df(obj_id, dist):
d = np.zeros((dist.shape[0],2))
d[:,0] = obj_id
d[:,1] = dist.flatten()
return d
# + id="aP7ctdANIF8C" colab_type="code" colab={}
def sdScene(p):
px,py,pz=p[:,0,None],p[:,1,None],p[:,2,None]
# floor
obj_floor = df(OBJ_FLOOR, py) # py = distance from y=0
res = obj_floor
# sphere
#obj_sphere = df(OBJ_SPHERE, sdSphere(p-np.array([[0,2.,0.]]),1.0))
#res=obj_sphere
#res = opU(res,obj_sphere)
# ceiling
obj_ceil = df(OBJ_CEIL, 4.-py)
res = opU(res,obj_ceil)
# backwall
obj_bwall = df(OBJ_WALL_WH, 4.-pz)
res = opU(res,obj_bwall)
# leftwall
obj_lwall = df(OBJ_WALL_RD, px-(-2))
res = opU(res,obj_lwall)
# rightwall
obj_rwall = df(OBJ_WALL_GR, 2-px)
res = opU(res,obj_rwall)
# light
obj_light = df(OBJ_LIGHT, udBox(p - np.array([[0,3.9,2]]), np.array([[.5,.01,.5]])))
res = opU(res,obj_light)
# tall block
bh = 1.3
p2 = rotateY(p- np.array([[-.64,bh,2.6]]),.15*np.pi)
d = udBox(p2, np.array([[.6,bh,.6]]))
obj_tall_block = df(OBJ_TALL_BLOCK, d)
res = opU(res,obj_tall_block)
# short block
bw = .6
p2 = rotateY(p- np.array([[.65,bw,1.7]]),-.1*np.pi)
d = udBox(p2, np.array([[bw,bw,bw]]))
obj_short_block = df(OBJ_SHORT_BLOCK, d)
res = opU(res,obj_short_block)
return res
# + id="q4PVMLM7tMlt" colab_type="code" outputId="4e0c512b-82fc-462d-840a-b2747fd945db" colab={"base_uri": "https://localhost:8080/", "height": 68}
sdScene(p)
# + id="Y1P4QDDW7to1" colab_type="code" colab={}
def calcNormal(p):
# derivative approximation via midpoint rule
eps = 0.001
dx=np.array([[eps,0,0]])
dy=np.array([[0,eps,0]])
dz=np.array([[0,0,eps]])
# extract just the distance component
nor = np.concatenate([
sdScene(p+dx)[:,1,None] - sdScene(p-dx)[:,1,None],
sdScene(p+dy)[:,1,None] - sdScene(p-dy)[:,1,None],
sdScene(p+dz)[:,1,None] - sdScene(p-dz)[:,1,None],
],axis=1)
return normalize(nor)
# + id="kbrYbtuxWtNh" colab_type="code" colab={}
MAX_ITERS = 50
HORIZON=20.0
def raymarch(ro,rd):
# returns df struct (id,t)
t = 0.0
for i in range(MAX_ITERS):
res = sdScene(ro + t*rd)
# print(res)
t += res[:,1,None] # t is (N,1).
# perform horizon cutoff
res[t[:,0] > HORIZON,0] = OBJ_NONE
# zip object id + intersect
return df(res[:,0],t)
# + id="FuAU5vPFtS-f" colab_type="code" outputId="5f141f34-0417-4c5b-efda-4d36c37b9de7" colab={"base_uri": "https://localhost:8080/", "height": 68}
rd = np.array([[0, 0., -1],[0, 0., -1],[0, 0., -1]])
raymarch(p, rd)
# + id="weZsJxX5Idug" colab_type="code" colab={}
# LIGHT_AREA=.2*.2 # 20cm x 20cm light source.
LIGHT_AREA = 1 * 1 # geometry implies 1 meter by 1 meter !!
# emissive_power = np.array([[0.15, 0.15, 0.15]])
emissive_power = np.array([[17.79017145, 11.36382183, 3.30840752]]) # In Watts.
emittedRadiance = emissive_power / (np.pi * LIGHT_AREA)  # Lambertian emitter: L = Phi / (pi * A), in Watts / (m^2 * sr). Numerically unchanged here since LIGHT_AREA = 1.
# Lambertian BRDF
lightDiffuseColor = np.array([[.2, .2, .2]]);
leftWallColor = np.array([[0.49389675, 0.04880702, 0.05120209]])
rightWallColor = np.array([[0.06816358, 0.39867395, 0.08645648]])
whiteWallColor = np.array([[0.73825082, 0.73601591, 0.73883266]])
# + id="nCBp5Wnw1fCQ" colab_type="code" outputId="30c2a362-9654-4242-871d-033e4a5fc98e" colab={"base_uri": "https://localhost:8080/", "height": 34}
emittedRadiance
# + id="YtKkLluzd9sK" colab_type="code" colab={}
# we know what the light normal must be, so we don't need to approximate it
# numerically with calcNormal. Anyway, if we learn the normal field, then
# this shouldn't be any extra computation.
nor_light = np.array([[0.,-1.,0.]])
# + id="Ali1ksNIz0a7" colab_type="code" colab={}
def trace(ro,rd,depth,debug=False):
#print('depth %d, tracing %d rays' % (depth, rd.shape[0]))
# returns color arriving at ro from rd
#pdb.set_trace()
res = raymarch(ro,rd)
#print('%d null-intersections' % np.sum(res[:,0] == OBJ_NONE))
if debug:
# visualize id
return np.tile(res[:,0,None],[1,3])
# assign lambertian brdfs
brdf = np.zeros((res.shape[0],3))
brdf[res[:,0] == OBJ_NONE,:] = 0.0
brdf[res[:,0] == OBJ_CEIL,:] = whiteWallColor
brdf[res[:,0] == OBJ_FLOOR,:] = whiteWallColor
brdf[res[:,0] == OBJ_LIGHT,:] = lightDiffuseColor
brdf[res[:,0] == OBJ_SHORT_BLOCK,:] = whiteWallColor
brdf[res[:,0] == OBJ_TALL_BLOCK,:] = whiteWallColor
brdf[res[:,0] == OBJ_WALL_GR,:] = rightWallColor
brdf[res[:,0] == OBJ_WALL_RD,:] = leftWallColor
brdf[res[:,0] == OBJ_WALL_WH,:] = whiteWallColor
t = res[:,1,None]
p = ro + t*rd
nor = calcNormal(p)
radiance = np.zeros((ro.shape[0],3))
did_intersect = res[:,0] != OBJ_NONE
# emitted radiance. This is only counted if it's an eye ray, since
# this contribution is also added in at every bounce.
if depth==0: # is eye ray
is_light = res[:,0] == OBJ_LIGHT
Li_e = emittedRadiance
radiance[is_light,:] += Li_e
#
# estimate direct area
#
# each of the intersect points has some amount of light arriving
# sample point on area light TODO (ejang) - fix hardcoded light position
u = .5 * np.random.random(size=(p.shape[0],2))
p_light = np.zeros((p.shape[0],3))
p_light[:,0] = np.random.uniform(low=-.5,high=.5,size=(p.shape[0]))
p_light[:,1] = 3.9
p_light[:,2] = 2. + np.random.uniform(low=-.5,high=.5,size=(p.shape[0]))
wi_light = normalize(p_light - p)
res2 = raymarch(p + 0.001 * nor, wi_light)
# occlusion factor
vis = res2[:,0] == OBJ_LIGHT
isect_and_vis = np.logical_and(did_intersect, vis)
# Change of variables from (probability of sampling area) ->
# (probability of sampling solid angle) requires division by dw_i/dA.
# distance from p to light source.
square_distance = np.sum(np.square(p_light-p), axis=1, keepdims=True)
# http://www.pbr-book.org/3ed-2018/Color_and_Radiometry/Working_with_Radiometric_Integrals.html
pdf_A = 1./LIGHT_AREA
Li_direct = (emittedRadiance * brdf *
relu(dot(nor, wi_light)) *
relu(dot(nor_light, -wi_light))) / (square_distance * pdf_A)
#print('Li_direct, current depth=%d' % (depth))
radiance[isect_and_vis,:] += Li_direct[isect_and_vis,:]
# return (radiance, p_light, p, emittedRadiance, brdf,
# relu(dot(nor, wi_light)),
# relu(dot(nor_light, -wi_light)), square_distance, pdf_A, isect_and_vis)
# indirect incoming contribution for intersected points
#
# instead of RR sampling, we just trace for a fixed number of steps
# to yield a good approximation
if depth < 3:
# note that ro2, rd2 have fewer items than ro,rd!
ro2 = p[did_intersect,:] + 0.001 * nor[did_intersect,:] # bump along normal
rd2 = sampleCosineWeightedHemisphere(nor[did_intersect,:])
Li_indirect = trace(ro2,rd2,depth+1)
# doing cosweighted sampling cancels out the geom term
radiance[did_intersect,:] += brdf[did_intersect,:] * Li_indirect
#print('depth %d max_direct=%f, max_indirect=%f' % (depth,
# np.max(Li_direct[isect_and_vis,:]),
# np.max(Li_indirect)))
return radiance # actually, radiance
# + id="cH3SNYEs70fl" colab_type="code" colab={}
# temporary lighting hack
#LIGHT_POS = np.array([[.7,2,-1]])
#LIGHT_POS = np.array([[0,3.,2]])
def eotf_inverse_sRGB(L):
V = np.where(L <= 0.0031308, L * 12.92, 1.055 * L ** (1 / 2.4) - 0.055)
return V
def render(ro,rd,debug=False):
# average over many samples of trace in pixel space
# TODO - the proper thing to do is to handle the colorspace conversion
# (Radiance -> CIE XYZ -> RGB xform) but let's just do a hack and use
# a sigmoid
color = trace(ro,rd,0)
return eotf_inverse_sRGB(color)
# + id="vO8dNNnovu3Q" colab_type="code" colab={}
import time
# + id="WUa7hgrqzJ7w" colab_type="code" colab={}
# render the image
# perspective camera with image plane centered at 0,0,0
N=100 # width of image plane
xs=np.linspace(0,1,N) # 10 pixels
us,vs = np.meshgrid(xs,xs)
uv = np.vstack([us.flatten(),vs.flatten()]).T # 10x10 image grid
p = np.zeros((N*N,3)) #
p[:,:2] = -1+2*uv # normalize pixel locations to -1,1
eye = np.tile([0,2.,-3.5],[p.shape[0],1])
look = np.array([[0,2.0,0]]) # look straight ahead
w = normalize(look - eye)
up = np.array([[0,1,0]]) # up axis of world
u = normalize(np.cross(w,up))
v = normalize(np.cross(u,w))
d=2.2 # focal distance
rd = normalize(p[:,0,None]*u + p[:,1,None]*v + d*w)
# + id="lNaQtLv-zKjB" colab_type="code" outputId="19ffb218-0adf-4330-f91c-aa66204f149d" colab={"base_uri": "https://localhost:8080/", "height": 204}
print('eye')
print(eye[:3])
print('rd')
print(rd[:3])
trace(eye[:3], rd[:3], 0)
# + id="2eRZmnVLFNxb" colab_type="code" outputId="ce89ad52-0771-4bed-ef91-2b7fc0aaa099" colab={"base_uri": "https://localhost:8080/", "height": 51}
# %%time
img = render(eye,rd,debug=False)
# (radiance, p_light, p, emittedRadiance, brdf,
# cos_i, cos_o, square_distance, pdf_A, isect_and_vis) = render(eye, rd, debug=False)
# + id="cYX1ikiqFRxA" colab_type="code" outputId="f083061b-bd7a-4f35-94a0-bbba68892ca9" colab={"base_uri": "https://localhost:8080/", "height": 285}
img = np.fliplr(np.flipud(img.reshape((N,N,3))))
plt.imshow(img,interpolation='none',vmin = 0, vmax = 1)
plt.grid(False)
plt.show()
# + id="RO3NWwI572Nu" colab_type="code" outputId="f84ef0d4-628a-4ce5-801d-6a5723979a9d" colab={"base_uri": "https://localhost:8080/", "height": 221}
# %%time
NUM_SAMPLES=100
for i in range(2,NUM_SAMPLES+1):
sample = render(eye,rd,debug=False)
if i % 10 == 0:
print('Sample %d' % i)
sample = np.fliplr(np.flipud(sample.reshape((N,N,3))))
img = (img + sample)
# + id="SSd-rKiJv3Kq" colab_type="code" outputId="89db5189-59c2-4076-aa4b-62b4a4ec4cb2" colab={"base_uri": "https://localhost:8080/", "height": 302}
# there are some bugs where the indirect illumination can exceed the direct illumination
img = img/NUM_SAMPLES
print(np.max(img))
plt.imshow(img,interpolation='none',vmin = 0, vmax = 1)
plt.grid(False)
| cornell_box_pt_numpy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from unet import *
model=unet(input_size=(144,144,1))
import nibabel as nib
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import os, shutil
from keras.preprocessing.image import ImageDataGenerator
import SimpleITK as sitk
from keras.models import *
from metrics import*
from keras.callbacks import *
# +
#model=load_model('image_segmentation_model_new.h5',custom_objects={'dice_loss':dice_loss,'DICE':DICE,'Specificity':Specificity,'Precision':Precision,'Recall':Recall})
# -
model.summary()
image_input='C:/Users/24710/Desktop/image_processing_project/train_img'
label_input='C:/Users/24710/Desktop/image_processing_project/train_label'
list_f=[os.path.join(image_input,f) for f in os.listdir(image_input)]
slices=[nib.load(f) for f in list_f]  # slices holds the loaded case images, shape (320, 320, x)
list_l=[os.path.join(label_input,f) for f in os.listdir(label_input)]
labels=[nib.load(l) for l in list_l]  # labels holds the corresponding label volumes, shape (320, 320, x)
input_arr=[slices[i].get_fdata() for i in range(18)]
input_label=[labels[i].get_fdata() for i in range(18)]
for i in range(18):
print(input_label[i].shape)
train_data=np.ones([530,320,320,1])
train_label=np.ones([530,320,320,1])
# +
n=0
for i in range(18):
if i ==2:
for j in range(40,60,1):
a=input_arr[i][:,:,j]
#a=a*1.0/2095
b=np.atleast_3d(a)
#b=a
#c=b[np.newaxis,:]
b = b.reshape((1,) + b.shape)
train_data[n,:,:,:]=b
x=input_label[i][:,:,j]
y=np.atleast_3d(x)
y=y.reshape((1,)+y.shape)
train_label[n,:,:,:]=y
n=n+1
else:
for j in range(50,80,1):
a=input_arr[i][:,:,j]
#a=a*1.0/2095
b=np.atleast_3d(a)
#b=a
#c=b[np.newaxis,:]
b = b.reshape((1,) + b.shape)
train_data[n,:,:,:]=b
x=input_label[i][:,:,j]
y=np.atleast_3d(x)
y=y.reshape((1,)+y.shape)
train_label[n,:,:,:]=y
n=n+1
train_label=train_label
train_data=train_data
[train_data.shape,train_label.shape]
# -
train_data=train_data[:,106:250:,106:250:,:]
train_label=train_label[:,106:250:,106:250:,:]
[train_data.shape,train_label.shape]
train_data.max()
figure=plt.figure(figsize=(12,6))
sub1=figure.add_subplot(121)
x=0
sub1.imshow(train_data[80,:,:,0],cmap='gray')
sub2=figure.add_subplot(122)
sub2.imshow(train_label[80,:,:,0],cmap='gray')
train_data=train_data*1.0/1975
history=model.fit(train_data,train_label,batch_size=10,epochs=30,validation_split=0.25,callbacks=[EarlyStopping(monitor='loss', min_delta=0, patience=2,mode='min', restore_best_weights=True)])
model.save('image_segmentation_model_4.h5')
img=input_arr[2][:,:,50]
img=img*1.0/1975
img=img[106:250:,106:250:]
img=img.reshape((1,)+img.shape)
result=model.predict(img)
result.max()
# +
img=input_arr[2][:,:,60]
img=img*1.0/1975  # use the same normalization constant applied to the training data
img=img[106:250:,106:250:]
img=img.reshape((1,)+img.shape)
result=model.predict(img)
figure=plt.figure(figsize=(16,16))
sub1=figure.add_subplot(221)
sub1.imshow(np.round(result[0,:,:,0]),cmap='gray')
sub2=figure.add_subplot(222)
sub2.imshow(input_arr[2][106:250:,106:250:,60],cmap='gray')
sub3=figure.add_subplot(223)
sub3.imshow(input_label[2][106:250:,106:250:,60],cmap='gray')
# -
from keras.utils import plot_model
plot_model(model, to_file='model_structure.jpg',show_shapes=False)
| train.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### clinvar missense prediction w/ feature intersection
# * only use consistent positions
# * only missense clinvar
# * use positions w/ mpc **OR** pathogenic fraction
# * calc path freq using counts
# * total path freq
# * total benign freq
import pandas, numpy
from scipy.stats import entropy
import pydot, pydotplus, graphviz
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns
from sklearn import linear_model, metrics, tree, svm
from sklearn.neural_network import MLPClassifier
from sklearn.externals.six import StringIO
from sklearn.preprocessing import PolynomialFeatures
from sklearn.ensemble import ExtraTreesClassifier
from IPython.display import HTML
# %matplotlib inline
# +
def calc_path_freq(rows):
# sum of freqs for path
df = rows[ (rows.clin_class=='PATHOGENIC') |
(rows.clin_class=='LIKLEY_PATHOGENIC')]
l = len(df)
pathogenic_sum = sum(df['freq'])
neg = sum(df['neg_fam'])
if l == 0:
return 0, 0, -1, 0
return pathogenic_sum, pathogenic_sum/l, entropy(df['freq']/pathogenic_sum), l
def calc_benign_freq(rows):
# sum of freqs for
df = rows[ (rows.clin_class=='LIKELY_BENIGN') |
(rows.clin_class=='BENIGN')]
benign_sum = sum(df['freq'])
l = len(df)
neg = sum(df['neg_fam'])
if l == 0:
return 0, 0, -1, 0
return benign_sum, benign_sum/l, entropy(df['freq']/benign_sum), l
def calc_path_frac(rows):
pfam = list(rows['pfam'].values)[0]
pathogenic = len(rows[ (rows.clin_class=='PATHOGENIC') | (rows.clin_class=='LIKLEY_PATHOGENIC')])
benign = len(rows[ (rows.clin_class=='LIKELY_BENIGN') | (rows.clin_class=='BENIGN')])
frac = -1
if pathogenic+benign:
frac = pathogenic/(pathogenic+benign)
pf, pf_avg, pf_ent, pcount = calc_path_freq(rows)
bf, bf_avg, bf_ent, bcount = calc_benign_freq(rows)
r = -1
if bf:
r = pf/bf
return pandas.Series([frac, len(rows), pf, pf_avg, pf_ent, pcount, bf, bf_avg, bf_ent, bcount, r],
index=['path_frac', 'size',
'path_freq', 'p_freq_avg', 'p_freq_ent', 'ps',
'benign_freq', 'b_freq_avg', 'b_freq_ent', 'bs',
'fRatio'])
def calc_tot_freq_ratio(rows):
path_sum = calc_path_freq(rows)
benign_sum = calc_benign_freq(rows)
return path_sum/benign_sum
dat_file = '../data/interim/EPIv6.eff.dbnsfp.anno.hHack.dat.xls'
df_pre = pandas.read_csv(dat_file, sep='\t').fillna(0)
df_pre.loc[:, 'freq'] = df_pre['pos_fam']/(df_pre['pos_fam']+df_pre['neg_fam'])
df = (df_pre['pfam'].str.split(',', expand=True)
.stack()
.reset_index(level=0)
.set_index('level_0')
.rename(columns={0:'pfam'})
.join(df_pre.drop('pfam',1), how='left')
)
dd = df.groupby('pfam').apply(calc_path_frac)
ff = dd.reset_index()
# mk domain features
def match(row, domain_info):
ls = []
for pfam in row['pfam'].split(','):
if pfam in domain_info:
if domain_info[pfam][2] == 0:
ls.append(domain_info[pfam])
if len(ls) == 0:
for pfam in row['pfam'].split(','):
if pfam in domain_info:
return domain_info[pfam]
if len(ls):
return ls[0]
else:
return (0, 0,
0, 0, -1, 0,
0, 0, -1, 0,
-1, 1)
ff.loc[:, 'path_na'] = ff.apply(lambda row: 1 if row['path_frac']==-1 else 0, axis=1)
domain_info = {pfam:[path_frac, size,
path_freq, path_avg, path_ent, pc,
b_freq, b_avg, b_ent, bc,
fr, path_na]
for pfam, path_frac, size, path_freq, path_avg, path_ent, pc, b_freq, b_avg, b_ent, bc, fr, path_na
in ff.values}
df_pre.loc[:, 'path_frac_t'] = df_pre.apply(lambda row: match(row, domain_info)[0], axis=1)
df_pre.loc[:, 'size_t'] = df_pre.apply(lambda row: match(row, domain_info)[1], axis=1)
df_pre.loc[:, 'path_na_t'] = df_pre.apply(lambda row: match(row, domain_info)[-1], axis=1)
df_pre.loc[:, 'in_none_pfam'] = df_pre.apply(lambda row: 1 if 'none' in row['pfam'] else 0, axis=1)
# use patient counts
df_pre.loc[:, 'path_freq'] = df_pre.apply(lambda row: match(row, domain_info)[2], axis=1)
df_pre.loc[:, 'path_avg'] = df_pre.apply(lambda row: match(row, domain_info)[3], axis=1)
df_pre.loc[:, 'path_ent'] = df_pre.apply(lambda row: match(row, domain_info)[4], axis=1)
df_pre.loc[:, 'path_cnt'] = df_pre.apply(lambda row: match(row, domain_info)[5], axis=1)
df_pre.loc[:, 'benign_freq'] = df_pre.apply(lambda row: match(row, domain_info)[6], axis=1)
df_pre.loc[:, 'benign_avg'] = df_pre.apply(lambda row: match(row, domain_info)[7], axis=1)
df_pre.loc[:, 'benign_ent'] = df_pre.apply(lambda row: match(row, domain_info)[8], axis=1)
df_pre.loc[:, 'benign_cnt'] = df_pre.apply(lambda row: match(row, domain_info)[9], axis=1)
df_pre.loc[:, 'path_benign_freq_r'] = df_pre.apply(lambda row: match(row, domain_info)[10], axis=1)
#df_pre.loc[:, 'path_na_t'] = df_pre.apply(lambda row: match(row, domain_info)[2], axis=1)
# -
# this is for training
# use not just missense
# I do not need to require an mpc score here anymore (df_pre.mpc>0)
df_x_pre = df_pre[ (df_pre.clin_class != 'VUS') ]
df_s = df_x_pre.groupby('pfam').size().reset_index()
multi_pfam = set( df_s[df_s[0]>1]['pfam'].values )
df_x_pre.loc[:, 'multi_pfam'] = df_x_pre.apply(lambda row: row['pfam'] in multi_pfam, axis=1)
df_x = df_x_pre[ (df_x_pre.multi_pfam) & (df_x_pre.eff=='missense_variant') & (df_x_pre.mpc>0)]
df_x.loc[:, 'y'] = df_x.apply(lambda row: 1 if row['clin_class'] in ('PATHOGENIC', 'LIKLEY_PATHOGENIC')
else 0, axis=1)
df_x.head()
train_keys = {':'.join([str(x) for x in v]):True for v in df_x[['chrom', 'pos', 'ref', 'alt']].values}
print(len(train_keys))
hash={'LIKELY_BENIGN':'Benign',
'BENIGN':'Benign',
'PATHOGENIC':'Pathogenic',
'LIKLEY_PATHOGENIC':'Pathogenic'
}
df_x.loc[:, 'plot_class'] = df_x.apply(lambda row: hash[row['clin_class']], axis=1)
flatui = ["#e74c3c", "#2ecc71"]
sns.set(font_scale=3)
ax = sns.countplot(x="plot_class", data=df_x, palette=sns.color_palette(flatui))
ax.set_ylabel('Missense variant count')
ax.set_xlabel('')
ax.set_title('GeneDx training data')
plt.xticks(rotation=45)
#ax.set_xticklabels(rotation=30)
# +
clin_file = '../data/interim/clinvar/clinvar.dat'
clinvar_df_pre = pandas.read_csv(clin_file, sep='\t').fillna(0)
def calc_final_sig(row):
sig_set = set(str(row['clinSig']).split('|'))  # split the pipe-delimited significance codes into a set
has_benign = '2' in sig_set or '3' in sig_set
has_path = '4' in sig_set or '5' in sig_set
if has_path and not has_benign:
return 1
if not has_path and has_benign:
return 0
return -1
focus_gene_ls = ('SCN1A','SCN2A','KCNQ2', 'KCNQ3', 'CDKL5', 'PCDH19', 'SCN1B', 'SCN8A', 'SLC2A1', 'SPTAN1', 'STXBP1', 'TSC1')
# & (clinvar_df_pre.is_focus)
clinvar_df_pre.loc[:, "y"] = clinvar_df_pre.apply(calc_final_sig, axis=1)
clinvar_df_pre.loc[:, "key"] = clinvar_df_pre.apply(lambda row: ':'.join([str(row[x]) for x in ['chrom', 'pos', 'ref', 'alt']]), axis=1)
clinvar_df_pre.loc[:, "not_in_training"] = clinvar_df_pre.apply(lambda row: not row['key'] in train_keys, axis=1)
clinvar_df_pre.loc[:, "is_focus"] = clinvar_df_pre.apply(lambda row: row['gene'] in focus_gene_ls, axis=1)
print(len(clinvar_df_pre[~clinvar_df_pre.not_in_training]))
# & (clinvar_df_pre.not_in_training)
clinvar_df = clinvar_df_pre[(clinvar_df_pre.eff=='missense_variant')
& (clinvar_df_pre.not_in_training)
& (clinvar_df_pre.mpc>0)
& (clinvar_df_pre.is_focus)
& (clinvar_df_pre.y!=-1) ].drop_duplicates()
clinvar_df.loc[:, 'path_frac_t'] = clinvar_df.apply(lambda row: match(row, domain_info)[0], axis=1)
clinvar_df.loc[:, 'size_t'] = clinvar_df.apply(lambda row: match(row, domain_info)[1], axis=1)
clinvar_df.loc[:, 'path_freq'] = clinvar_df.apply(lambda row: match(row, domain_info)[2], axis=1)
clinvar_df.loc[:, 'path_avg'] = clinvar_df.apply(lambda row: match(row, domain_info)[3], axis=1)
clinvar_df.loc[:, 'path_ent'] = clinvar_df.apply(lambda row: match(row, domain_info)[4], axis=1)
clinvar_df.loc[:, 'path_cnt'] = clinvar_df.apply(lambda row: match(row, domain_info)[5], axis=1)
clinvar_df.loc[:, 'benign_freq'] = clinvar_df.apply(lambda row: match(row, domain_info)[6], axis=1)
clinvar_df.loc[:, 'benign_avg'] = clinvar_df.apply(lambda row: match(row, domain_info)[7], axis=1)
clinvar_df.loc[:, 'benign_ent'] = clinvar_df.apply(lambda row: match(row, domain_info)[8], axis=1)
clinvar_df.loc[:, 'benign_cnt'] = clinvar_df.apply(lambda row: match(row, domain_info)[9], axis=1)
clinvar_df.loc[:, 'path_benign_freq_r'] = clinvar_df.apply(lambda row: match(row, domain_info)[10], axis=1)
clinvar_df.loc[:, 'path_na_t'] = clinvar_df.apply(lambda row: match(row, domain_info)[-1], axis=1)
clinvar_df.loc[:, 'in_none_pfam'] = clinvar_df.apply(lambda row: 1 if 'none' in row['pfam'] else 0, axis=1)
# need a smarter match to domain here
#m = pandas.merge(clinvar_df, ff, on='pfam', how='left')
#m.head()
# -
print(len(clinvar_df_pre))
print(len(clinvar_df_pre[clinvar_df_pre.y==1]))
print(len(clinvar_df_pre[clinvar_df_pre.y==0]))
print(len(clinvar_df))
print(len(clinvar_df[clinvar_df.y==1]))
print(len(clinvar_df[clinvar_df.y==0]))
hash={0:'Benign',
1:'Pathogenic',
}
clinvar_df.loc[:, 'plot_class'] = clinvar_df.apply(lambda row: hash[row['y']], axis=1)
flatui = ["#e74c3c", "#2ecc71"]
sns.set(font_scale=1.75)
ax = sns.countplot(x="plot_class", data=clinvar_df, palette=sns.color_palette(flatui))
ax.set_ylabel('Missense variant count')
ax.set_xlabel('')
ax.set_title('ClinVar subset (w/o GeneDx) testing data')
plt.xticks(rotation=45)
# +
def eval_pred(row):
if (row['tree_pred']>.9 and row['y']==1) or (row['tree_pred']<.1 and row['y']==0):
return 'right'
if (row['tree_pred']>.9 and row['y']==0) or (row['tree_pred']<.1 and row['y']==1):
return 'wrong'
return 'vus'
# train new tree and apply to clinvar
forest = ExtraTreesClassifier(n_estimators=300,
random_state=13,
bootstrap=True,
max_features=7,
min_samples_split=2,
max_depth=8,
min_samples_leaf=5,
n_jobs=4)
#tree_clf = linear_model.LogisticRegression(penalty='l1', fit_intercept=True)
#poly = PolynomialFeatures(degree=6, interaction_only=False, include_bias=False)
all_preds = []
all_truth = []
#
cols = ['mpc', 'size_t', 'path_frac_t', 'in_none_pfam',
'path_freq', 'path_avg', 'path_ent', 'path_cnt',
'benign_freq', 'benign_avg', 'benign_ent', 'benign_cnt',
'af_1kg_all', 'mtr', 'path_benign_freq_r']
X, y = df_x[cols], df_x['y']
forest.fit(X, y)
#tree_clf.fit(X, y)
X_clin, y_clin = clinvar_df[cols], clinvar_df['y']
preds = [ x[1] for x in forest.predict_proba(X_clin) ]
clinvar_df['tree_pred'] = preds
clinvar_df.loc[:, 'PredictionStatus'] = clinvar_df.apply(eval_pred, axis=1)
fpr_tree, tpr_tree, _ = metrics.roc_curve(y_clin, preds, pos_label=1)
tree_auc = metrics.auc(fpr_tree, tpr_tree)
print(tree_auc)
importances = forest.feature_importances_
std = numpy.std([atree.feature_importances_ for atree in forest.estimators_],
axis=0)
indices = numpy.argsort(importances)[::-1]
# Print the feature ranking
feature_ls = []
print("Feature ranking:")
for f in range(X.shape[1]):
ls = (cols[indices[f]],
f + 1, indices[f],
importances[indices[f]])
print("%s, %d. feature %d (%f)" % ls)
feature_ls.append([ls[0], ls[-1]])
fhash={'mpc':'MPC',
'size_t':'Domain GeneDx var count',
'path_na_t':'No variants',
'path_frac_t':'Domain fraction of pathogenic GeneDx vars',
'in_none_pfam':'Outside Pfam domain flag',
'path_freq':'Domain pathogenic GeneDx freq',
'path_avg':'Domain avg pathogenic GeneDx freq',
'path_ent':'Entropy of domain pathogenic GeneDx freq',
'path_cnt':'Domain pathogenic var GeneDx count',
'benign_freq':'Domain benign GeneDx freq',
'benign_avg':'Domain avg benign GeneDx freq',
'benign_ent':'Entropy of domain benign GeneDx freq',
'benign_cnt':'Domain benign var GeneDx count',
'af_1kg_all':'1KG var freq',
'mtr':'MTR',
'path_benign_freq_r':'Ratio of domain benign:pathogenic GeneDx freqs'}
feature_df = pandas.DataFrame({'feature':[fhash[x[0]] for x in feature_ls], 'importance':[x[1] for x in feature_ls]})
ax = sns.barplot(data=feature_df, x='feature', y='importance', palette="Greens")
ax.set_ylabel('Feature importance')
ax.set_xlabel('')
#ax.set_title('ClinVar subset (w/o GeneDx) testing data')
plt.xticks(rotation=90)
# -
#plt.rcParams['figure.figsize'] = 20, 6
#plt.figure(figsize=(40,6))
#f, ax = plt.subplots(figsize=(40,6))
#sns.set_context("talk")
g_df = (clinvar_df[['gene', 'chrom', 'pos', 'ref', 'alt', 'PredictionStatus']]
.groupby(['gene','PredictionStatus'])
.size().reset_index().rename(columns={0:'size'}))
dd = g_df.groupby('gene').sum().reset_index()
use_genes = set(dd[dd['size']>10]['gene'].values)
g_df.loc[:, 'keep'] = g_df.apply(lambda row: row['gene'] in use_genes, axis=1)
sns.set(font_scale=1.75)
flatui = ["#2ecc71", "#3498db", "#e74c3c",]
ss = sns.factorplot(x='gene', hue='PredictionStatus', y='size', data=g_df[g_df['keep']],
kind='bar', palette=sns.color_palette(flatui), size=5, aspect=3)
ss.set_ylabels('ClinVar missense variants')
ss.set_xlabels('')
ss.savefig("../docs/plots/clinvar_gene_eval.png")
#plt.figure(figsize=(50, 3))
# +
# train new tree and apply to clinvar: just pathogenic frac
tree_clf = linear_model.LogisticRegression(penalty='l1', fit_intercept=True)
poly = PolynomialFeatures(degree=6, interaction_only=False, include_bias=False)
all_preds = []
all_truth = []
cols = ['size_t', 'path_na_t', 'path_frac_t', 'in_none_pfam','path_freq', 'path_avg', 'path_ent',
'benign_freq', 'benign_avg', 'benign_ent',
'af_1kg_all', 'mtr', 'path_benign_freq_r']#['size_t', 'path_na_t', 'path_frac_t', 'path_freq', 'benign_freq', 'in_none_pfam',]
X, y = poly.fit_transform(df_x[cols]), df_x['y'] #X, y = df_x[cols], df_x['y']
tree_clf.fit(X, y)
X_clin, y_clin = poly.fit_transform(clinvar_df[cols]), clinvar_df['y'] #clinvar_df[cols], clinvar_df['y']
preds = [ x[1] for x in tree_clf.predict_proba(X_clin) ]
fpr_tree_nm, tpr_tree_nm, _ = metrics.roc_curve(y_clin, preds, pos_label=1)
tree_auc_nm = metrics.auc(fpr_tree_nm, tpr_tree_nm)
# -
scores = clinvar_df['mpc'].values
truth = clinvar_df['y'].values
fpr_mpc, tpr_mpc, _ = metrics.roc_curve(truth, scores, pos_label=1)
mpc_auc = metrics.auc(fpr_mpc, tpr_mpc)
sns.set(font_scale=1.5)
plt.plot(fpr_tree, tpr_tree, label='Domain Burden + MPC (%.2f)' % (tree_auc,), color='green')
plt.plot(fpr_tree_nm, tpr_tree_nm, label='Domain Burden (%.2f)' % (tree_auc_nm,), color='orange')
plt.plot(fpr_mpc, tpr_mpc, label='MPC (%.2f)' % (mpc_auc,), color='black')
plt.legend(loc=4)
plt.title('ClinVar subset (w/o GeneDx) missense variant ROC')
clinvar_df[clinvar_df.gene=='TSC1']
clinvar_df[clinvar_df.gene=='SPTAN1']
| notebooks/clinvar_test_focus_pos_wo_training.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Import the `make_server` function from the `wsgiref.simple_server` module
from wsgiref.simple_server import make_server
# 01: Write the WSGI function header; the WSGI function takes as arguments `environ`, which holds the CGI environment variables, and the `start_response` function, which returns the response to the web browser
# 02~03: Iterate over the keys and values of the CGI environment variables as 'key: value' strings, build a Python list, and store it in response_body
#
def application(environ, start_response):
response_body = ['%s: %s' % (key, value)
for key, value in sorted(environ.items())]
response_body = '\n'.join(response_body)
status = '200 OK'
response_headers = [('Content-Type', 'text/plain'),
('Content-Length', str(len(response_body)))]
start_response(status, response_headers)
return [response_body.encode("utf8")]
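# As a quick sanity check (a minimal sketch, not part of the original example), the WSGI
# `application` defined above can also be called directly with a hand-built `environ` dict
# and a stub `start_response`, without starting a server:
# +
def _demo_start_response(status, headers):
    # stub that simply prints what the application sends back
    print(status, headers)
demo_body = application({'REQUEST_METHOD': 'GET', 'PATH_INFO': '/'}, _demo_start_response)
print(demo_body[0].decode("utf8"))
# -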
# +
httpd = make_server(
'localhost',
8051,
application
)
httpd.handle_request()
# -
| simple_wsgi.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # Hospital General Information from Hospital Compare
#
# The main CMS Hospital Compare website is [here](https://www.medicare.gov/hospitalcompare).
# The website for data is [here](https://data.medicare.gov/data/hospital-compare).
url <- "https://data.medicare.gov/views/bg9k-emty/files/7825b9e4-e595-4f25-86e0-32a68d7ac7a4?content_type=application%2Fzip%3B%20charset%3Dbinary&filename=Hospital_Revised_Flatfiles.zip"
f <- tempfile()
download.file(url, f, mode="wb")
file.info(f)
unzip(f, list=TRUE)
unzip(f, exdir=tempdir())
library(data.table)
D <- fread(file.path(tempdir(), "Hospital General Information.csv"))
old <- names(D)
new <- gsub("\\s", "", old)
setnames(D, old, new)
str(D)
D[State == "OR", .N, .(State, HospitalType, HospitalOwnership, EmergencyServices)]
| HospitalCompare/readHospitalGeneralInformation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.5 64-bit ('.venv')
# metadata:
# interpreter:
# hash: 67b393f23005f5647497c50fa99fb25b525d8642232b1bdc07a39bdb19f3ee4f
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import re
import math
from scipy import interpolate
plt.rc('font',family='Times New Roman')
L=420e-6
H=80e-6
Pe = 0.01
DO2 = 7.63596e-6
H = 80e-6
w=20e-6
U_0 = Pe*DO2/w
umax=1.5*U_0
Tref=773
rhof=4.4908
Mwf=0.02888
x_O2=0.22
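# For reference, the inlet velocity above is just a restatement of the Péclet number definition implied by these assignments:
#
# $$ U_0 = \frac{Pe \, D_{O_2}}{w}, \qquad u_{max} = 1.5\,U_0 $$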
# # Read COMSOL Data
# ## Read Centerline x from COMSOL
x_centerline_file_comsol="./plots/output-x-centerline.txt"
with open(x_centerline_file_comsol,"r") as fp:
lines=fp.readlines()
header=lines[8]
header=re.split(r" +(?![t@(])",header)
header.pop(0)
header[-1]=header[-1].strip()
df_comsol_x_centerline = pd.read_csv(x_centerline_file_comsol, comment='%', sep='\\s+', header=None,names=header)
df_comsol_x_centerline.sort_values(by="x",inplace=True)
df_comsol_x_centerline.reset_index(drop=True,inplace=True)
df_comsol_x_centerline.fillna(0,inplace=True)
df_comsol_x_centerline.head()
# ## read first-obstacle-y-centerline from COMSOL
x_centerline_2_file_comsol="./plots/output-first-obstacle-y-centerline.txt"
with open(x_centerline_2_file_comsol,"r") as fp:
lines=fp.readlines()
header=lines[8]
header=re.split(r" +(?![t@(])",header)
header.pop(0)
header[-1]=header[-1].strip()
df_comsol_x_centerline_2 = pd.read_csv(x_centerline_2_file_comsol, comment='%', sep='\\s+', header=None,names=header)
df_comsol_x_centerline_2.sort_values(by="y",inplace=True)
df_comsol_x_centerline_2.reset_index(drop=True,inplace=True)
df_comsol_x_centerline_2.fillna(0,inplace=True)
print(f"shape: {df_comsol_x_centerline_2.shape}")
df_comsol_x_centerline_2.head()
# # Validate
# ## Function Dev
def validate(df_comsol=df_comsol_x_centerline,time=0.002,file="x-centerline_T_O2_CO2.csv",axis='x',obj='T',refLength=L,refValue=Tref):
path=f"../postProcessing/singleGraph/{str(time)}/{file}"
df_dbs=pd.read_csv(path)
df_norm_dbs=pd.DataFrame(columns=["NormalizedLength","NormalizedValue"])
if obj=="T":
df_norm_dbs["NormalizedLength"]=df_dbs[axis]/refLength
df_norm_dbs["NormalizedValue"]=df_dbs[obj]/refValue
else:
df_norm_dbs["NormalizedLength"]=df_dbs[axis]/refLength
df_norm_dbs["NormalizedValue"]=df_dbs[obj]
df_norm_dbs.head()
if obj=='T':
comsol_label=f"T (K) @ t={time}"
elif obj=="O2" or obj=="CO2":
comsol_label=f"c_{obj} (mol/m^3) @ t={time}"
df_norm_comsol=pd.DataFrame(columns=["NormalizedLength","NormalizedValue"])
df_norm_comsol["NormalizedLength"]=df_comsol[axis]/refLength
df_norm_comsol["NormalizedValue"]=df_comsol[comsol_label]/refValue
interp_f=interpolate.interp1d(df_norm_comsol["NormalizedLength"],df_norm_comsol["NormalizedValue"],kind="linear")
df_norm_comsol_interpolated=interp_f(df_norm_dbs["NormalizedLength"])
relative_error=0.0
num=0
if obj=="T":
reduce=1
else:
reduce=0
for i in df_norm_dbs.index:
benmark=df_norm_comsol_interpolated[i]
dbs=df_norm_dbs["NormalizedValue"][i]
if(benmark>1e-16):
num+=1
error=(dbs-benmark)/(benmark-reduce) #relative to the temperature increase
relative_error+=pow(error,2)
relative_error=math.sqrt(relative_error)/num
# print(f"non-zero value num: {num}")
print(f"relative_error: {relative_error*100}%")
df_norm_dbs_sampling=df_norm_dbs[df_norm_dbs.index%5==0]
fig, ax = plt.subplots()
ax.plot(df_norm_comsol["NormalizedLength"],df_norm_comsol["NormalizedValue"],label="COMSOL")
ax.scatter(df_norm_dbs_sampling["NormalizedLength"],df_norm_dbs_sampling["NormalizedValue"],color="",marker="o",s=15,edgecolors="r",label="DBS")
ax.set_xlabel(f"Dimensionless {axis}")
ax.set_ylabel(f"Dimensionless {obj}")
ax.set_title(f"{obj} centerline: DBS vs COMSOL")
# ax.text(0.7,0.2,f" relative error: {:.2f}%".format(relative_error_ux*100))
ax.legend(loc="upper right")
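# For reference, the error measure computed by `validate` above is
#
# $$ \epsilon = \frac{1}{N}\sqrt{\sum_{i=1}^{N}\left(\frac{\phi_i^{DBS}-\phi_i^{COMSOL}}{\phi_i^{COMSOL}-r}\right)^{2}} $$
#
# where the sum runs over the $N$ points whose interpolated COMSOL value exceeds $10^{-16}$, and $r=1$ for temperature (so the error is measured relative to the temperature rise above the reference) while $r=0$ otherwise.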
# ## Validate Temperature
validate(df_comsol=df_comsol_x_centerline,time=0.002,file="x-centerline_T_O2_CO2.csv",axis='x',obj='T',refLength=L,refValue=Tref)
validate(df_comsol=df_comsol_x_centerline,time=0.004,file="x-centerline_T_O2_CO2.csv",axis='x',obj='T',refLength=L,refValue=Tref)
validate(df_comsol=df_comsol_x_centerline,time=0.006,file="x-centerline_T_O2_CO2.csv",axis='x',obj='T',refLength=L,refValue=Tref)
# ## Validate O2
validate(df_comsol=df_comsol_x_centerline,time=0.002,file="x-centerline_T_O2_CO2.csv",axis='x',obj='O2',refLength=L,refValue=rhof/Mwf)
validate(df_comsol=df_comsol_x_centerline,time=0.004,file="x-centerline_T_O2_CO2.csv",axis='x',obj='O2',refLength=L,refValue=rhof/Mwf)
validate(df_comsol=df_comsol_x_centerline,time=0.006,file="x-centerline_T_O2_CO2.csv",axis='x',obj='O2',refLength=L,refValue=rhof/Mwf)
# ## Validate CO2
validate(df_comsol=df_comsol_x_centerline,time=0.002,file="x-centerline_T_O2_CO2.csv",axis='x',obj='CO2',refLength=L,refValue=rhof/Mwf)
validate(df_comsol=df_comsol_x_centerline,time=0.004,file="x-centerline_T_O2_CO2.csv",axis='x',obj='CO2',refLength=L,refValue=rhof/Mwf)
validate(df_comsol=df_comsol_x_centerline,time=0.006,file="x-centerline_T_O2_CO2.csv",axis='x',obj='CO2',refLength=L,refValue=rhof/Mwf)
| applications/test/dbsReactiveThermalFoam/TimeSplitting/differentCokeCombustionODESolvers/reactiveThermalValidations_4thRK/validate/validate.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 150
iris = sns.load_dataset("iris")
# -
sns.boxplot(x=iris["species"], y=iris["sepal_length"])  # better
#sns.boxplot(x="species", y="sepal_length", data=iris)  # worse
iris_crop = iris[iris["species"] == "virginica"]
sns.boxplot(x="species", y="sepal_length", data=iris_crop)
sns.scatterplot(x=iris_crop.index, y="sepal_length", data=iris_crop)
| exercises/minimal_seaborn/Iris_boxplot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Working with functions
# In this exercise we will look at defining your own functions in Python. Functions provide a neat way of reusing code, resulting in shorter and clearer programs. This notebook covers the input to functions, the output from functions, and how they are used. There are exercises throughout the notebook, and the solutions are available at the bottom.
# ### Defining and calling a function
# Here a simple Python function is defined. The def keyword is used to start the definition of a function and is followed by the name of the function. The input for the function is within parentheses, but our first print_hello function has no input. After the input a colon is used to signify the start of the function body. The function body is executed whenever the function is called, and needs to be indented. The function body ends at the first line that is not indented.
# +
def print_hello():
print("Hello World")
print_hello() # Calling our defined function
# -
# ### Function input
# Functions can take arguments, which makes them more flexible. Consider the following.
# +
def print_twice(x):
print(2*x) # Multiplying a string by an integer repeats the string that number of times
print_twice("bye ")
# -
# It is allowed to have several arguments, making the function yet more flexible. Note however that the number of arguments used when calling the function must match the function definition.
# +
def print_N(x, N):
print(N*x)
print_N("-><-", 10)
print_N("-||-", 10)
# -
# **(1) Task:** <br>
# Before running the function below, what would you expect to happen when swapping the input arguments?
print_N(10, "-><-")
print_N(10, "-||-")
# ### Keyword arguments
# When a function takes several arguments, it can be difficult to remember the order and purpose of each. Keyword arguments have a default value and can be passed by name. They can also be given in any order, or even omitted, since a default value is available.
# +
def print_N_kw(string="", repeat=1):
print(repeat*string)
print_N_kw(string="/<>", repeat=10)
print_N_kw(repeat=10, string="/<>")
print_N_kw(string="______________________________")
# -
# ### Function output
# Functions can also return output directly to the caller using the return keyword.
# +
def add_five(x):
return x + 5
y = add_five(4)
print(y)
# -
# It is even possible to have multiple outputs from a function; they are returned as a tuple, but can be unpacked as they are returned. If the output of a function with multiple return values is assigned to a single variable, the entire tuple is stored there.
# +
def add_and_subtract_five(x):
return x+5, x-5
a, b = add_and_subtract_five(11) # The returned tuple is extracted into a and b
print(a)
print(b)
# -
c = add_and_subtract_five(11) # two outputs assigned to a single variable
print(c) # c now contains a tuple
print(c[0], c[1]) # tuples are accessed like lists
# ### Exercises
# Plenty of exercises can be done with the ingredients available now!
# **(2) Task:** <br>
# Write a function that multiply two inputs and returns the result, but if the result is larger than 100, it should be halved.
# +
# -- YOUR CODE HERE --
# ---------------------
# -
# **(3) Task:** <br>
# Write a function that combines two lists
# +
# -- YOUR CODE HERE --
# ---------------------
# -
# **(4) Task:** <br>
# Combine the following three functions into one, choosing between the styles using a keyword argument.
# +
def print_among_stars(x):
print(10*"*" + x + 10*"*")
def print_among_lines(x):
print(10*"-" + x + 10*"-")
def return_among_lines(x):
return 10*"-" + x + 10*"-"
# -- YOUR CODE HERE --
# ---------------------
# -
# ### Variable number of input parameters
# In most cases where a function has to work on some scalable input, it is easiest to pass a list to the function, as in this example.
# +
def sum_list(x):
output = 0
for element in x:
output += element
return output
s = sum_list([1, 2, 10])
print(s)
# -
# Python can however gather the inputs of a function into a tuple automatically by defining the input parameter with a *.
# +
def sum_input(*args):
output = 0
for element in args:
output += element
return output
s = sum_input(1, 2, 10, 100)
print(s)
# -
# The same principle can be applied for keyword arguments by using two stars, and these are then assembled into a dictionary. The name kwargs is often used, denoting "keyword arguments". The next example of a function just assembles the given keyword variables into a dict, but prints "quack" in case a variable named "duck" is among them.
# +
def make_dict(**kwargs):
if "duck" in kwargs:
print("quack")
return kwargs
d = make_dict(h=3, speed=2000, string="hello")
print(d)
# -
e = make_dict(duck=3)
print(e)
# ### Doc strings
# It is good practice to comment one's code, and functions are no exception. Python functions support what are called docstrings, which are placed at the start of the function body and can later be shown with the help function.
# +
def make_dict(**kwargs):
"""
Returns given keyword arguments as dictionary
Prints quack in case a duck was contained in the keyword arguments
"""
if "duck" in kwargs:
print("quack")
return kwargs
d = make_dict(h=3, speed=2000, string="hello")
print(d)
# -
help(make_dict)
# ### Escalating scope
# If a variable that is not defined as input or within the function is used within a Python function, the Python interpreter will look for it outside of the function. This can be surprising and is not generally considered good practice.
# +
def check_below_3():
if outside_par < 3: # outside_par never defined!
print("outside_par is below 3!")
else:
print("outside_par is 3 or above!")
outside_par = 5 # outside_par defined before running the function
check_below_3()
outside_par = 2 # change in outside_par reflected in next function call
check_below_3()
# -
# ## Storing function definitions in variables
# It is possible to store a reference to a function as a variable, or pass the name of the function to another function. This can also aid in adding flexibility to functions.
# +
def compare_smaller(x, y):
if x < y:
return True
else:
return False
def sort(x, func):
"""
sorts a list according to a comparison function
Parameters
----------
x : list
List to be sorted
func : function
Function that takes two inputs from the list and returns a logical
Returns
-------
list
Sorted list based on x
"""
is_not_sorted = True # Assume it is not sorted
while(is_not_sorted): # Keep sorting until list is sorted
# This loop runs through the array and switches elements out of order
for i in range(len(x)-1):
if func(x[i], x[i+1]):
temp = x[i]
x[i] = x[i+1]
x[i+1] = temp
is_not_sorted = False # Assume list is now sorted, but check
# This loop checks that the list is sorted
for i in range(len(x)-1):
if func(x[i], x[i+1]): # If two adjacent elements are out of order, it is not sorted
is_not_sorted = True
# Since the while loop is only stopped when is_not_sorted is false, x must now be sorted
return x
input_list = [1, 2, 5, 3, 19, 8, 8]
output_list = sort(input_list, compare_smaller)
print(output_list)
# -
# **(5) Python Task:** <br>
# Define your own comparison function and use it with sort.
# +
# -- YOUR CODE HERE --
# ---------------------
# -
# **(6) Python Task:** <br>
# Define a comparison function where odd numbers are considered smaller than even numbers, and two numbers of the same type are compared normally. Use the modulus operator % to check for even/odd numbers; an example function is given to show how modulus is used.
# +
def is_even(x):
if x % 2 == 0:
return True
else:
return False
# -- YOUR CODE HERE --
# ---------------------
| notebooks/2_python_basics/4_working-with-functions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Analyze Orcas Queries in Anchor Context
# -
# !pip3 install nltk termcolor
# +
def normalize(text):
import nltk
nltk.data.path = ['/mnt/ceph/storage/data-in-progress/data-research/web-archive/EMNLP-21/emnlp-web-archive-questions/cluster-libs/nltk_data']
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
stop_words = set(stopwords.words('english'))
ps = PorterStemmer()
return [ps.stem(i) for i in word_tokenize(text) if i not in stop_words]
def weighted_representation(texts):
from collections import defaultdict
absolute_count = defaultdict(lambda: 0)
for text in texts:
for word in normalize(text):
absolute_count[word] += 1
return {k: v/len(texts) for k,v in absolute_count.items()}
def similarity(weights, text):
text = set(normalize(text))
ret = 0
if text is None or len(text) == 0:
return (0, 0.0)
for k,v in weights.items():
if k in text:
ret += v
covered_terms = 0
for k in text:
if k in weights:
covered_terms += 1
return (ret, covered_terms/len(text))
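# similarity(weights, text) thus returns a pair: the summed query-term weights that occur in the
# normalized text, and the fraction of the text's unique normalized terms that are covered by the
# query representation.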
def __add_to_path(p):
import sys
if p not in sys.path:
sys.path.append(p)
def domain(url):
__add_to_path('/mnt/ceph/storage/data-in-progress/data-research/web-archive/EMNLP-21/emnlp-web-archive-questions/cluster-libs/tld')
from tld import get_tld
ret = get_tld(url, as_object=True, fail_silently=True)
if ret:
return ret.domain
else:
return 'None'
def identical_domain(i):
return domain(i['document']['srcUrl']).lower() == domain(i['targetUrl']).lower()
def enrich_similarity(i):
i = dict(i)
weights = weighted_representation(i['orcasQueries'])
contextSim = similarity(weights, i['anchorContext'])
anchorSim = similarity(weights, i['anchorText'])
i['anchorContextScore'] = contextSim[0]
i['anchorContextCoveredTerms'] = contextSim[1]
i['anchorTextScore'] = anchorSim[0]
i['anchorTextScoreCoveredTerms'] = anchorSim[1]
return i
def enrich_domain(i):
i = dict(i)
i['identical_domain'] = identical_domain(i)
return i
# -
normalize("Programmers program with programming languages")
weighted_representation(["Programmers program with programming languages"])
weighted_representation(["Programmers program with programming languages", "gameboys", "program the gameboy"])
enrich_similarity({"anchorText":"no match","anchorContext":"no match", "orcasQueries": ["Programmers program with programming languages", "gameboys", "program the gameboy"]})
enrich_similarity({"anchorText":"no match","anchorContext":"programmers programmers program", "orcasQueries": ["Programmers program with programming languages", "gameboys", "program the gameboy"]})
enrich_similarity({"anchorText":"gameboy program","anchorContext":"no match", "orcasQueries": ["Programmers program with programming languages", "gameboys", "program the gameboy"]})
# +
# Evaluate it on many data points
# +
import pyspark
sc = pyspark.SparkContext()
sc
# -
sc.parallelize(['https://www.definitions.net/definition/start', 'https://www.vizientinc.com/', 'https://foo.vizientinc.ca/']).map(domain).collect()
# +
import json
#sc.textFile('ecir2022/anchor2query/anchor-text-with-orcas-queries-2019-47.jsonl')\
sc.textFile('ecir2022/anchor2query/anchor-text-with-orcas-queries-2019-47.jsonl/part*0{0,1,2,3,4}')\
.repartition(5000)\
.map(lambda i: json.loads(i))\
.map(enrich_similarity)\
.map(enrich_domain)\
.map(lambda i: json.dumps(i))\
.saveAsTextFile(path='ecir2022/anchor2query/anchor-text-with-orcas-queries-2019-47-enriched-5-percent-sample.jsonl', compressionCodecClass='org.apache.hadoop.io.compress.GzipCodec')
# +
import json
anchor_text_sample = sc.textFile('ecir2022/anchor2query/anchor-text-with-orcas-queries-small.jsonl')\
.map(lambda i: json.loads(i))\
.map(enrich_similarity)\
.collect()
print(len(anchor_text_sample))
# +
def keep_interesting_fields(i):
return {
'orcasQueries': len(i['orcasQueries']),
'anchorContextScore': i['anchorContextScore'],
'anchorContextCoveredTerms': i['anchorContextCoveredTerms'],
'anchorTextScore': i['anchorTextScore'],
'anchorTextScoreCoveredTerms': i['anchorTextScoreCoveredTerms'],
'identical_domain': i['identical_domain']
}
sc.textFile('ecir2022/anchor2query/anchor-text-with-orcas-queries-2019-47-enriched-5-percent-sample.jsonl')\
.map(json.loads)\
.map(keep_interesting_fields)\
.map(json.dumps)\
.repartition(100)\
.saveAsTextFile(path='ecir2022/anchor2query/anchor-text-with-orcas-queries-2019-47-enriched-5-percent-sample-projection-for-overview.jsonl', compressionCodecClass='org.apache.hadoop.io.compress.GzipCodec')
# +
import pandas as pd
df = pd.read_json('/mnt/ceph/storage/data-in-progress/data-research/web-search/ECIR-22/ecir21-anchor2query/anchor-text-with-orcas-queries-2019-47-enriched-5-percent-sample-projection-for-overview.jsonl.gz', lines=True)
df
# +
import pandas as pd
df = pd.DataFrame(anchor_text_sample)
df
# -
df['diff'] = df['anchorContextScore'] - df['anchorTextScore']
df.sort_values('diff')
# +
import seaborn as sb
sb.distplot(df['diff'])
# -
len(df[(df['anchorTextScore'] <= 0.00001) & (df['anchorContextScore'] >= 0.1)])
# +
df_anchor_useless = df[(df['anchorTextScore'] <= 0.00001) & (df['anchorContextScore'] >= 0.1)].copy()
# -
sb.distplot(df_anchor_useless['anchorContextScore'])
tmp = df_anchor_useless.sample(5)
tmp
# +
from termcolor import colored
def pretty_print_text(entry):
weights = weighted_representation(entry['orcasQueries'])
import re
tmp = re.sub(r'\s+', ' ', entry['anchorContext'])
ret = ''
for w in tmp.split(' '):
crnt = w
normalized_w = normalize(w)
tmp_str = []
for nw in normalized_w:
if nw in weights:
tmp_str += [nw + ':' + str(weights[nw])]
if len(tmp_str) > 0:
crnt += '[' + ( ';'.join(tmp_str) ) + ']'
crnt = colored(crnt, 'red')
ret += ' ' + crnt
return ret.strip()
def pretty_print(entry):
print('Document: ' + str(entry['document']['srcUrl']))
print('OrcasQueries: ' + str(entry['orcasQueries']))
print('Target\n\tUrl: ' + entry['targetUrl'])
print('\tAnchor:\n\t\t\'' + entry['anchorText'] + '\'\n')
print('\tAnchorContext: \'' + pretty_print_text(entry)+ '\'')
# -
tmp_2 = df_anchor_useless.sample(5)
tmp_2
for _, i in tmp_2.iterrows():
pretty_print(i)
print('\n\n\n')
tmp_3 = df_anchor_useless.sample(5)
tmp_3
for _, i in tmp_3.iterrows():
pretty_print(i)
print('\n\n\n')
tmp_4 = df_anchor_useless.sample(5)
for _, i in tmp_4.iterrows():
pretty_print(i)
print('\n\n\n')
tmp_5 = df_anchor_useless.sample(5)
for _, i in tmp_5.iterrows():
pretty_print(i)
print('\n\n\n')
len(df_anchor_useless[df_anchor_useless['identical_domain'] == False])
tmp_6 = df_anchor_useless[df_anchor_useless['identical_domain'] == False].sample(5)
df_anchor_useless['identical_domain'] = df_anchor_useless.apply(lambda i: identical_domain(i), axis=1)
df_anchor_useless['numberOrcasQueries'] = df_anchor_useless['orcasQueries'].apply(lambda i: len(i))
df_anchor_useless.sort_values('numberOrcasQueries')
for _, i in tmp_6.iterrows():
pretty_print(i)
print('\n\n\n')
len(df_anchor_useless[(df_anchor_useless['identical_domain'] == False) & (df_anchor_useless['numberOrcasQueries'] < 10)])
tmp_7 = df_anchor_useless[(df_anchor_useless['identical_domain'] == False) & (df_anchor_useless['numberOrcasQueries'] < 10)].sample(5)
for _, i in tmp_7.iterrows():
pretty_print(i)
print('\n\n\n')
tmp_8 = df_anchor_useless[(df_anchor_useless['identical_domain'] == False) & (df_anchor_useless['numberOrcasQueries'] < 10)].sample(5)
for _, i in tmp_8.iterrows():
pretty_print(i)
print('\n\n\n')
tmp_9 = df_anchor_useless[(df_anchor_useless['identical_domain'] == False) & (df_anchor_useless['numberOrcasQueries'] < 10)].sample(5)
for _, i in tmp_9.iterrows():
pretty_print(i)
print('\n\n\n')
df_anchor_and_context_useful = df[(df['anchorTextScore'] >= 0.2) & (df['anchorContextScore'] > df['anchorTextScore'])].copy()
df_anchor_and_context_useful['identical_domain'] = df_anchor_and_context_useful.apply(lambda i: identical_domain(i), axis=1)
df_anchor_and_context_useful['numberOrcasQueries'] = df_anchor_and_context_useful['orcasQueries'].apply(lambda i: len(i))
len(df_anchor_and_context_useful)
tmp_10 = df_anchor_and_context_useful[(df_anchor_and_context_useful['identical_domain'] == False) & (df_anchor_and_context_useful['numberOrcasQueries'] < 10)].sample(5)
for _, i in tmp_10.iterrows():
pretty_print(i)
print('\n\n\n')
tmp_11 = df_anchor_and_context_useful[(df_anchor_and_context_useful['identical_domain'] == False) & (df_anchor_and_context_useful['numberOrcasQueries'] < 10)].sample(5)
for _, i in tmp_11.iterrows():
pretty_print(i)
print('\n\n\n')
tmp_12 = df_anchor_and_context_useful[(df_anchor_and_context_useful['identical_domain'] == False) & (df_anchor_and_context_useful['numberOrcasQueries'] < 10)].sample(5)
for _, i in tmp_12.iterrows():
pretty_print(i)
print('\n\n\n')
| src/jupyter/analyze-orcas-queries-in-anchor-context.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
from nbdev import *
import numpy as np
import pandas as pd
import ipywidgets as widgets
from tqdm import tqdm
from pathlib import Path
from ipyannotator.im2im_annotator import Im2ImAnnotator
# # Select Dataset
# You can choose between 3 datasets ['cifar10', 'oxford_flowers', 'CUB_200'] that you can download.
# By default we use an artificially generated classification dataset that doesn't require downloading.
dataset = 'artifical'
# dataset = 'cifar10'
# dataset = 'oxford_flowers'
# dataset = 'CUB_200'
# ## prepare dataset
# ! mkdir -p data
# +
from ipyannotator.datasets.generators import create_color_classification
if dataset == 'artifical':
import tempfile
tmp_dir = tempfile.TemporaryDirectory()
path = Path(tmp_dir.name)
# Convert the artificial dataset annotations to ipyannotator format in place
import json
from PIL import Image
create_color_classification(path=path, n_samples=50, size=(500, 500))
annotations = pd.read_json(path/'annotations.json').T
anno = annotations.T.to_dict('records')[0]
anno = {str(path / 'images' / k): [f'{v}.jpg'] for k,v in anno.items()}
with open(path/'annotations.json', 'w') as f:
json.dump(anno, f)
project_path = path
project_file = path/'annotations.json'
image_dir = 'images'
label_dir = 'class_images'
im_width=50
im_height=50
label_width=30
label_height=30
n_cols = 3
# +
from ipyannotator.datasets.download import get_cifar10, get_cub_200_2011, get_oxford_102_flowers
if dataset == 'cifar10':
cifar_train_p, cifar_test_p = get_cifar10('data')
project_path = 'data/cifar10/'
project_file = cifar_test_p
image_dir = 'test'
label_dir = None
im_width=50
im_height=50
label_width=140
label_height=30
n_cols = 2
if dataset == 'oxford_flowers':
flowers102_train_p, flowers102_test_p = get_oxford_102_flowers('data')
project_path = 'data/oxford-102-flowers'
project_file = flowers102_test_p
image_dir = 'jpg'
label_dir = None
im_width=50
im_height=50
label_width=40
label_height=30
n_cols = 7
if dataset == 'CUB_200':
cub200_train_p, cub200_test_p = get_cub_200_2011('data')
project_path = 'data/CUB_200_2011'
project_file = cub200_test_p
image_dir='images'
label_dir = None
im_width=50
im_height=50
label_width=450
label_height=15
n_cols = 7
# -
# ### ToDo convert datasets / create helper function
#
# for all three dataset, each has a different file / folder structure
#
# - should be possible to either look at train or test images
# - should be possible to look at unlabeled or labeled data
#
# comment: maybe we can borrow code from fastai; `DataBunch` supports all of these file/folder structures,
# however we shouldn't have fastai as a dependency because it would also require pytorch, which is fairly big
# # explore
# Let's visualize an existing annotated dataset.
#
# As we don't have an image for each class, we pass `label_dir=None` to ipyannotator, so class labels will be generated automatically based on the `annotations.json` file.
#
# We use the `results_dir` param to indicate the directory where the `annotations.json` file with the existing annotations is located.
#
# You can explore the dataset with the `next/previous` buttons to check the visualized labels.
# !cat {project_path}
im2im = Im2ImAnnotator(project_path=project_path,
file_name=project_file,
image_dir=image_dir,
step_down=True,
label_dir=label_dir,
im_width=im_width, im_height=im_height,
label_width=label_width, label_height=label_height,
n_cols=n_cols
)
im2im
# +
# Let's explore only subset of ds
import json
from random import sample
with project_file.open() as f:
data = json.load(f)
all_labels = data.values()
unique_labels = set(label for item_labels in all_labels for label in item_labels)
# get <some> random labels and generate annotation file with them:
some = 3
assert (some <= len(unique_labels))
subset_labels = sample([[a] for a in unique_labels], k=some)
subset_annotations = {k:v for k, v in data.items() if v in subset_labels}
subset_file = Path(project_path) / 'subset_anno.json'
with subset_file.open('w', encoding='utf-8') as fi:
json.dump(subset_annotations, fi, ensure_ascii=False, indent=4)
# use it in annotator
im2im = Im2ImAnnotator(project_path=project_path,
file_name=subset_file,
image_dir=image_dir,
step_down=True,
label_dir=label_dir,
im_width=im_width, im_height=im_height,
label_width=label_width, label_height=label_height,
n_cols=n_cols,
label_autosize=False
)
display(im2im)
# -
# # create
# Load unannotated dataset and create classification labels.
#
# - real
# - generated
# Now we set `label_dir='class_images'`, because we have an existing folder with one properly named image per class saved beforehand
#
# Also, by setting `results_dir="out"` we specify that the final `annotations.json` file will be generated from scratch and saved to the `{project_path}/out` directory
# Try to annotate some pieces incorrectly, so that you prepare a good set for the `improve` step below
# +
# while we don't have class_labels for real datasets, let's combine train and test annotations to generate them
all_annotations = Path(project_path) / "annotations.json"
if dataset != 'artifical': # combine train/test for real ds
import json
import glob
with open(Path(project_path) / "annotations_train.json", "rb") as train:
tr = json.load(train)
with open(Path(project_path) / "annotations_test.json", "rb") as test:
te = json.load(test)
result = {**tr, **te}
with open(all_annotations, "w") as outfile:
json.dump(result, outfile)
# -
gen_class_labels = Im2ImAnnotator(project_path=project_path,
image_dir=image_dir,
file_name=all_annotations,
label_dir=label_dir,
results_dir=None,
im_width=im_width, im_height=im_height,
label_width=label_width, label_height=label_height,
n_cols=n_cols,
question="Classification")
label_dir = gen_class_labels._model.label_dir.stem
label_dir
# +
# now we can generate a new annotation file from scratch,
# by using an empty folder for <results_dir> and the <label_dir> from the previous step
output_dir = 'results'
print(Path(project_path) / output_dir)
# !rm -rf {Path(project_path) / output_dir}
# +
im2im = Im2ImAnnotator(project_path=project_path,
image_dir=image_dir,
file_name=None,
label_dir=label_dir,
results_dir=output_dir,
im_width=im_width, im_height=im_height,
label_width=label_width, label_height=label_height,
n_cols=n_cols,
question="Classification")
im2im
# -
all_labelss = im2im._model.labels_files
all_labelss[:3]
with all_annotations.open() as f:
anno_ = json.load(f)
# +
import numpy as np
filt = np.random.uniform(low=0, high=1, size=len(anno_))
label_noise = 0.1
# +
# dummy annotator
from random import choice
def get_random_class():
return choice (all_labelss)
get_random_class()
# +
# assign a random label to a subset of all annotations to imitate human work with <label_noise> fraction of errors
filtererd = {x: [get_random_class()] if f_ < label_noise else y for (x, y), f_ in zip(anno_.items(), filt)}
# +
# update ipyannotator's annotations based on the previous step and save
im2im._model.annotations.update((k, filtererd.get(k, [])) for k in im2im._model.annotations.keys())
im2im._save_btn.click()
# +
# im2im._model.annotations
# +
# check annotation file on disk
# # !cat {im2im._model.annotation_file_path}
# -
#same in memory
from IPython import display
# im2im.to_dict()
# # improve
# Load annotated dataset and mark wrongly annotated samples.
#
# - real
# - generated
# Let's create a corresponding map for each class from the annotations obtained in the `create` step above
# +
#open labels generated on [create] step
with open(Path(project_path) / output_dir / 'annotations.json') as infile:
loaded_image_annotations = json.load(infile)
# +
# loaded_image_annotations
# +
from collections import defaultdict
def group_files_by_class(annotations):
grouped = defaultdict(list)
for file, labels in annotations.items():
for class_ in labels:
grouped[class_].append(file)
return grouped
# -
classes_to_files = group_files_by_class(loaded_image_annotations)
# Let's group some annotators together, so we can go through all annotated images, but for each class separately.
#
# Each grid shows images belonging to the __same__ class.
#
# You should __mark all errors__ (images which belong to a __different__ class)
from ipyannotator.capture_annotator import CaptureAnnotator
# !! Don't forget to click the __SAVE__ button when finished with each class:
items = [CaptureAnnotator(project_path, image_dir, 50, 50, 2, 5,
question=f'Check incorrect annotation for [{class_name[:-4]}] class',
filter_files=class_anno,
results_dir=f'{output_dir}/missed/{class_name[:-4]}') for class_name, class_anno in tqdm(classes_to_files.items())]
#let's select first two classes to mark the errors
widgets.VBox(children = items[:2])
# +
# mark spoiled on create step, imitating human correction
for i in tqdm(items):
for k, v in i._model.annotations.items():
i._model.annotations[k] = {'answer': anno_[k] != filtererd[k]}
i._model._update_state()
i._save_btn.click()
# print(i._model.annotations)
# -
# Now we can get a list of all marked images, which should be reclassified:
# +
reclasify_this = [[c for c, v in i.to_dict().items() if v['answer']] for i in items]
# show 10 files with incorrect label for the first class
reclasify_this[1][:10]
# -
# Also, the automatically generated JSON file can be used for each class.
#
# Let's load one random JSON file and select the filenames marked as incorrect in the previous step for this class:
# +
from glob import glob
random_class = sample(glob(str(Path(project_path) / output_dir/'missed')+'/*'), 1)[0]
print(random_class)
random_class_annotation = pd.read_json(Path(random_class) / 'annotations.json').T
random_missed = list(random_class_annotation[random_class_annotation['answer']==True].index.values)
# show 10 files with an incorrect label for the random class
random_missed[:10]
# -
| nbs/01b_tutorial_image_classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import smatrix as sm
filename = ''
data = sm.Sparse4DData.from_4Dcamera_file(filename)
center, radius = sm.util.determine_center_and_radius(data, manual=False)
data.crop_symmetric_center_(center)
rotation_deg = 0 #sm.util.determine_rotation(data)
E_ev = 80e3
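# NOTE: `wavelength` is not defined or imported in this snippet; the helper below is a minimal
# sketch added as an assumption so the cell can run (the original may have used a package utility
# instead). It returns the relativistic de Broglie wavelength in metres of an electron
# accelerated through E_ev volts.
def wavelength(E_ev):
    import scipy.constants as sc
    return sc.h / (2 * sc.m_e * sc.e * E_ev * (1 + sc.e * E_ev / (2 * sc.m_e * sc.c ** 2))) ** 0.5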
lam = wavelength(E_ev)
alpha_rad = 20e-3
alpha_max = data.diffraction_shape / radius * alpha_rad
k_max = alpha_max / lam
metadata = sm.Metadata4D(E_ev = E_ev,
alpha_rad = alpha_rad,
dr=[0.3,0.3],
k_max = k_max,
rotation_deg = rotation_deg)
options = sm.ReconstructionOptions()
out = sm.reconstruct(data, metadata, options)
S = out.smatrix
r = out.r
Psi = out.Psi
R_factor = out.R_factors
sm.util.visualize_smatrix(S)
| examples/new_interface.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Logistic Regression in scikit-learn
# +
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(666)
X = np.random.normal(0,1, size=(200,2))
y = np.array(X[:,0]**2 + X[:,1] < 1.5, dtype=int)
# generate some noise
for _ in range(20):
y[np.random.randint(200)] = 1
# -
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])
plt.show()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y,random_state=666)
# ## Linear logistic regression
# +
from sklearn.linear_model import LogisticRegression
log_reg = LogisticRegression()
log_reg.fit(X_train, y_train)
# -
log_reg.score(X_train, y_train), log_reg.score(X_test, y_test)
def plot_decision_boundary(model, axis):
    """Plot an irregular decision boundary"""
x0, x1 = np.meshgrid(
np.linspace(axis[0], axis[1], int((axis[1] - axis[0])*100)).reshape(1,-1),
np.linspace(axis[2], axis[3], int((axis[3] - axis[2])*100)).reshape(1,-1)
)
X_new= np.c_[x0.ravel(), x1.ravel()]
y_predict = model.predict(X_new)
zz = y_predict.reshape(x0.shape)
from matplotlib.colors import ListedColormap
custom_cmap = ListedColormap(['#EF9A9A','#FFF59D','#90CAF9'])
plt.contourf(x0, x1, zz, linewidth=5, cmap=custom_cmap)
# Plot the decision boundary
plot_decision_boundary(log_reg, axis=[-4, 4, -4, 4])
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])
plt.show()
# ## Polynomial logistic regression
# +
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import StandardScaler
def PolynomialLogisticRegression(degree):
return Pipeline([
('poly', PolynomialFeatures(degree=degree)),
('std_scaler', StandardScaler()),
('log_reg', LogisticRegression())
])
# -
poly_log_reg = PolynomialLogisticRegression(degree=2)
poly_log_reg.fit(X_train, y_train)
poly_log_reg.score(X_train, y_train), poly_log_reg.score(X_test, y_test)
# Plot the decision boundary
plot_decision_boundary(poly_log_reg, axis=[-4, 4, -4, 4])
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])
plt.show()
# ### Increase the degree of the polynomial terms (to observe overfitting)
poly_log_reg2 = PolynomialLogisticRegression(degree=20)
poly_log_reg2.fit(X_train, y_train)
poly_log_reg2.score(X_train, y_train), poly_log_reg2.score(X_test, y_test)
# Plot the decision boundary
plot_decision_boundary(poly_log_reg2, axis=[-4, 4, -4, 4])
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])
plt.show()
# The boundary looks odd; overfitting has very likely occurred
# #### Use (L2) regularization to address overfitting
def PolynomialLogisticRegression2(degree, C):
return Pipeline([
('poly', PolynomialFeatures(degree=degree)),
('std_scaler', StandardScaler()),
('log_reg', LogisticRegression(C=C))
])
poly_log_reg3 = PolynomialLogisticRegression2(degree=20, C=0.1)
poly_log_reg3.fit(X_train, y_train)
poly_log_reg3.score(X_train, y_train), poly_log_reg3.score(X_test, y_test)
# Plot the decision boundary
plot_decision_boundary(poly_log_reg3, axis=[-4, 4, -4, 4])
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])
plt.show()
# #### Use (L1) regularization to address overfitting
def PolynomialLogisticRegression3(degree, C, penalty='l2'):
return Pipeline([
('poly', PolynomialFeatures(degree=degree)),
('std_scaler', StandardScaler()),
('log_reg', LogisticRegression(C=C, penalty=penalty))
])
poly_log_reg4 = PolynomialLogisticRegression3(degree=20, C=0.1, penalty='l1')
poly_log_reg4.fit(X_train, y_train)
poly_log_reg4.score(X_train, y_train), poly_log_reg4.score(X_test, y_test)
# Plot the decision boundary
plot_decision_boundary(poly_log_reg4, axis=[-4, 4, -4, 4])
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])
plt.show()
| c6_logistic_regression/07_Logistic_Regression_in_scikit_learn.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.11 64-bit (''convlnote'': conda)'
# name: python3
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, metrics
import io
import imageio
from IPython.display import Image, display
from ipywidgets import widgets, Layout, HBox
from PIL import Image
from tqdm import tqdm
import os
import math
from scipy import stats
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error,mean_absolute_error
from Seq2Seq import Seq2Seq
def show(x_test,target,idx,model):
a=np.expand_dims(x_test[target+idx], axis=0)
prd=model.predict(a)
aa=[]
for b in prd[0][-1]:
bb=[]
for c in b:
bb.append([c,c,c])
aa.append(bb)
aa=np.array(aa)[:,:,:,0]
if idx==0:
predict=np.expand_dims(aa,axis=0)
else:
predict = np.concatenate((predict, np.expand_dims(aa,axis=0)), axis=0)
def MAPE(y_test, y_pred,vervose=1):
# print(y_test.shape, y_pred.shape)
all=(zip(y_test,y_pred))
cnt=0
cost=0
up=0
down=0
for t,p in all:  # divide by t
if t==0:
# c=np.abs(t-p) / p
continue
else:
c=np.abs(t-p) / t
cnt+=1
cost+=c
# if c>0.5:
# if t> 40:
# up+=1
# else:
# down+=1
# if c>0.2:
# print(t)
if vervose==1:
print(f"up: {up} down : {down}")
return cost/cnt*100
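# In other words, the MAPE above is 100/N * sum(|t_i - p_i| / t_i) taken over the N samples
# with a non-zero ground-truth value t_i (zeros are skipped to avoid division by zero).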
def compute_metrics(original,predict,start,end,is_pval=0):
start-=1
end-=1
y=original[:,start:end,:,:]
y_pred=predict[:,start:end,:,:]
# mape=MAPE(y.reshape(-1,),y_pred.reshape(-1,))
y=(y)*100
y_pred=(y_pred)*100
y_flatten=y.flatten()
y_pred_flatten=y_pred.flatten()
mape=MAPE(y_flatten,y_pred_flatten,0)
rmse=np.sqrt(mean_squared_error(y_flatten,y_pred_flatten))
mae=mean_absolute_error(y_flatten,y_pred_flatten)
p_val=stats.chisquare(y_flatten,y_pred_flatten)[1]
if is_pval==1:
return np.array([rmse,mape,mae,p_val])
return np.array([rmse,mape,mae])
def metrics_(y,y_pred):
y=(y)*100
y_pred=(y_pred)*100
y_flatten=y.flatten()
y_pred_flatten=y_pred.flatten()
mape=MAPE(y_flatten,y_pred_flatten)
mse=mean_squared_error(y_flatten,y_pred_flatten)
mae=mean_absolute_error(y_flatten,y_pred_flatten)
return [mse,mape,mae]
def metrics_jam(y,y_pred):
y=(y)*100
y_pred=(y_pred)*100
# keep only speeds of 40 or below
y_filtered=y[y <40]
y_pred_filtered=y_pred[y < 40]
mape=MAPE(y_filtered,y_pred_filtered)
mse=mean_squared_error(y_filtered,y_pred_filtered)
mae=mean_absolute_error(y_filtered,y_pred_filtered)
return [mse,mape,mae]
def _predict(models,i, x_test ,target):
for idx in range(7):
a=np.expand_dims(x_test[target+idx], axis=0)
#1 7 24 31 1
prd=models[i](a)
# make 3 channels so the grayscale image can be displayed
all=[]
# take only the last predicted frame
for img in prd[0][-1]:
pixel=[]
for gray in img:
pixel.append([gray,gray,gray])
all.append(pixel)
all=np.array(all)[:,:,:,0]
if idx==0:
predict=np.expand_dims(all,axis=0)
else:
predict = np.concatenate((predict, np.expand_dims(all,axis=0)), axis=0)
return predict
def make_predict(models, model_num, x_test ,target,original):
predicts=[]
for i in range(model_num):
predict=_predict(models,i,x_test,target)
print()
print(f"model {i}")
print("Error over all speeds")
mse,mape,mae=metrics_(original[:,:,:,0],predict[:,:,:,0])
print(f"rmse : {np.sqrt(mse)} , mape : {mape} , mae : {mae}")
# print("Error for speeds of 40 or below")
# mse,mape,mae=metrics_jam(original[:,:,:,0],predict[:,:,:,0])
# print(f"rmse : {np.sqrt(mse)} , mape : {mape} , mae : {mae}")
# store in a list so all models can be inspected
predicts.append(predict)
return predicts
# -
# # test all 2020
# +
path="D:/npz_gray_7_64_fix"
models=[]
tf.keras.backend.set_floatx('float32')
# model = Seq2Seq(16, 3, 3)
# model.build(input_shape=(1,7,24,31,1))
# model.load_weights("seq2seq_inside_64_0.005_mse_4_3000_0.1.h5")
# models.append(model)
# model2 = Seq2Seq(16,3, 3)
# model2.build(input_shape=(1,7,24,31,1))
# model2.load_weights("seq2seq_inside_64_0.0006_mse_4_100_0.1.h5")
# models.append(model2)
model3 = Seq2Seq(16, 3, 3)
model3.build(input_shape=(1,7,24,31,1))
model3.load_weights("seq2seq_인코더만layernorm_inside_64_5e-05_mse_3_3000_0.1.h5")
models.append(model3)
model4 = Seq2Seq(16, 3, 3)
model4.build(input_shape=(1,7,24,31,1))
model4.load_weights("seq2seq_inside_64_5e-05_mse_3_3000_0.h5")
models.append(model4)
# +
# check all models on the data used for training
x_test = np.load(f"{path}/batch/x/3.npz")['x']
target=2 #
originals=[]
predicts=[]
model_num=len(models)
# original data
original=x_test[target+7]
all=[]
for img in original:
# print(a.shape)
one_img=[]
for pixels in img:
pixel=[]
for gray in pixels:
pixel.append([gray,gray,gray])
one_img.append(pixel)
all.append(one_img)
original=np.array(all)[:,:,:,:,0]
predicts=make_predict(models, model_num, x_test ,target,original)
fig, axes = plt.subplots(model_num+1, 7, figsize=(20, 10))
# Plot the original frames.
for idx, ax in enumerate(axes[0]):
#inverse여서 1에서 빼준다
ax.imshow((original[idx]))
ax.set_title(f"Original Frame {idx}")
ax.axis("off")
for i in range(model_num):
for idx, ax in enumerate(axes[i+1]):
ax.imshow(predicts[i][idx])
ax.set_title(f"Predicted Frame {idx}")
ax.axis("off")
# +
# check all models on new data
x_test = np.load(f"{path}/2020/1.npz")['arr_0']
target=8 #
originals=[]
predicts=[]
model_num=len(models)
# original data
original=x_test[target+7]
all=[]
for img in original:
# print(a.shape)
one_img=[]
for pixels in img:
pixel=[]
for gray in pixels:
pixel.append([gray,gray,gray])
one_img.append(pixel)
all.append(one_img)
original=np.array(all)[:,:,:,:,0]
predicts=make_predict(models, model_num, x_test ,target,original)
fig, axes = plt.subplots(model_num+1, 7, figsize=(20, 10))
# Plot the original frames.
for idx, ax in enumerate(axes[0]):
ax.imshow(original[idx])
ax.set_title(f"Original Frame {idx}")
ax.axis("off")
for i in range(model_num):
for idx, ax in enumerate(axes[i+1]):
ax.imshow(predicts[i][idx])
ax.set_title(f"Predicted Frame {idx}")
ax.axis("off")
# -
# ## Test on just one 2020 file first
# predict on all 2020 data (not used for training) and check the metrics
batch_size=64
win=7
total=[]
for k in range(len(models)):
before_list=[]
after_list=[]
peak_list=[]
rest_list=[]
# for i in tqdm(list):
x_test = np.load(f"{path}/2020/4.npz")['arr_0']
for target in range(batch_size-win):
predict=_predict(models,k,x_test,target)
original=x_test[target+7]
all=[]
for a in original:
aa=[]
for b in a:
bb=[]
for c in b:
bb.append([c,c,c])
aa.append(bb)
all.append(aa)
original=np.array(all)[:,:,:,:,0]
#before peak hour - 7~12
before=compute_metrics(original,predict,7,12)
#peak 12~19
peak=compute_metrics(original,predict,12,19)
#after 19~21
after=compute_metrics(original,predict,19,21)
#rest 22~24 , 0~6
y=original[:,21:23,:,:]
y_pred=predict[:,21:23,:,:]
# combine the 22~24 and 0~6 time ranges
y=np.concatenate((y,original[:,0:5,:,:]),axis=1)
y_pred=np.concatenate((y_pred,predict[:,0:5,:,:]),axis=1)
# compute the rest-period error
y=(y)*100
y_pred=(y_pred)*100
y_flatten=y.flatten()
y_pred_flatten=y_pred.flatten()
mape=MAPE(y_flatten,y_pred_flatten,0)
rmse=np.sqrt(mean_squared_error(y_flatten,y_pred_flatten))
mae=mean_absolute_error(y_flatten,y_pred_flatten)
rest=[rmse,mape,mae]
# store everything
before_list.append(before)
after_list.append(after)
peak_list.append(peak)
rest_list.append(rest)
# print(len(before),len(after),len(peak),len(rest))
# print(before.shape,after.shape,peak.shape,rest.shape)
total.append(np.array((np.array(before_list),np.array(peak_list),np.array(after_list),np.array(rest_list))))
total=np.array(total)
# mse,mape,mae
for i in range(len(models)):
print(f"{i}번째")
print("before")
print(np.mean(total[i][0],axis=0))
print("peak")
print(np.mean(total[i][1],axis=0))
print("after")
print(np.mean(total[i][2],axis=0))
print("rest")
print(np.mean(total[i][3],axis=0))
print("standard deviation")
print("before")
print(np.std(total[i][0],axis=0))
print("peak")
print(np.std(total[i][1],axis=0))
print("after")
print(np.std(total[i][2],axis=0))
print("rest")
print(np.std(total[i][3],axis=0))
# ## Compute metrics for the specified time ranges
# +
# predict on all 2020 data (not used for training) and check the metrics
batch_size=64
win=7
total_all=[]
model_num=0
num_2020=10 # Gangbyeon: 10 files
for k in range(num_2020):
before_list=[]
after_list=[]
peak_list=[]
rest_list=[]
# for i in tqdm(list):
x_test = np.load(f"{path}/2020/{k}.npz")['arr_0']
for target in range(batch_size-win):
predict=_predict(models,model_num,x_test,target)
original=x_test[target+7]
all=[]
for a in original:
aa=[]
for b in a:
bb=[]
for c in b:
bb.append([c,c,c])
aa.append(bb)
all.append(aa)
original=np.array(all)[:,:,:,:,0]
#before peak hour - 7~12
before=compute_metrics(original,predict,7,12)
#peak 12~19
peak=compute_metrics(original,predict,12,19)
#after 19~21
after=compute_metrics(original,predict,19,21)
#rest 22~24 , 0~6
y=original[:,21:23,:,:]
y_pred=predict[:,21:23,:,:]
# combine the 22~24 and 0~6 time ranges
y=np.concatenate((y,original[:,0:5,:,:]),axis=1)
y_pred=np.concatenate((y_pred,predict[:,0:5,:,:]),axis=1)
# compute the rest-period error
y=(y)*100
y_pred=(y_pred)*100
y_flatten=y.flatten()
y_pred_flatten=y_pred.flatten()
mape=MAPE(y_flatten,y_pred_flatten,0)
rmse=np.sqrt(mean_squared_error(y_flatten,y_pred_flatten))
mae=mean_absolute_error(y_flatten,y_pred_flatten)
rest=[rmse,mape,mae]
# store everything
before_list.append(before)
after_list.append(after)
peak_list.append(peak)
rest_list.append(rest)
total_all.append(np.array((np.array(before_list),np.array(peak_list),np.array(after_list),np.array(rest_list))))
total_all=np.array(total_all)
# -
# mse,mape,mae
print("before")
print(np.mean(total_all[0][0],axis=0))
print("peak")
print(np.mean(total_all[0][1],axis=0))
print("after")
print(np.mean(total_all[0][2],axis=0))
print("rest")
print(np.mean(total_all[0][3],axis=0))
print("standard deviation")
print("before")
print(np.std(total_all[0][0],axis=0))
print("peak")
print(np.std(total_all[0][1],axis=0))
print("after")
print(np.std(total_all[0][2],axis=0))
print("rest")
print(np.std(total_all[0][3],axis=0))
# ## Compute metrics and graphs for all time ranges
# +
# predict on all 2020 data (not used for training) and check the metrics
# split into 7-day windows
total_7=[]
for k in range(num_2020):
times=[]
x_test = np.load(f"{path}/2020/{k}.npz")['arr_0']
for target in range(0,batch_size-win,win):
predict=_predict(models,model_num,x_test,target)
original=x_test[target+7]
all=[]
for a in original:
aa=[]
for b in a:
bb=[]
for c in b:
bb.append([c,c,c])
aa.append(bb)
all.append(aa)
original=np.array(all)[:,:,:,:,0]
time=[]
for i in range(1,25):
time.append(compute_metrics(original,predict,i,i+1,is_pval=1))
# store everything
times.append(np.array(time))
total_7.append(np.array(times))
total_7=np.array(total_7)
total_7=total_7.reshape(-1,24,4)
# -
rmse_list=[]
mape_list=[]
pval_list=[]
for time in range(24):
#rmse
rmse_list.append(np.mean(np.sqrt(total_7[:,time,0].astype(float))))
#mape
mape_list.append(np.mean(total_7[:,time,1]))
#p_value
pval_list.append(np.mean(total_7[:,time,3]))
rmse_std=[]
mape_std=[]
for time in range(24):
#rmse
rmse_std.append(np.std(np.sqrt(total_7[:,time,0].astype(float)),axis=0))
#mape
mape_std.append(np.std(total_7[:,time,1],axis=0))
#p_value
# pval_list.append(np.mean(total_7[:,time,3]))
rmse_list
mape_std
plt.plot(range(24),pval_list)
rmse_list
plt.plot(range(1,25),rmse_std)
fig, ax= plt.subplots()
ax.boxplot(np.sqrt(total_7[:,:,0].astype(float)))
ax.set_ylim(0,10)
plt.show()
mape_list
plt.plot(range(1,25),mape_std)
fig, ax= plt.subplots()
ax.boxplot((total_7[:,:,1].astype(float)))
ax.set_ylim(0,30)
plt.show()
# +
# check all models on new data
x_test = np.load(f"{path}/2020/7.npz")['arr_0']
target=6 #
originals=[]
predicts=[]
# original data
original=x_test[target+7]
all=[]
for img in original:
# print(a.shape)
one_img=[]
for pixels in img:
pixel=[]
for gray in pixels:
pixel.append([gray,gray,gray])
one_img.append(pixel)
all.append(one_img)
original=np.array(all)[:,:,:,:,0]
predicts=make_predict(models, len(models), x_test ,target,original)
fig, axes = plt.subplots(7, len(models)+1, figsize=(10, 10))
for i in range(7):
for idx, ax in enumerate(axes[i]):
if idx==0:
ax.imshow(original[i])
ax.set_title(f"Original Frame {i}")
ax.axis("off")
elif idx==1:
ax.imshow(predicts[0][i])
ax.set_title(f"predicted Frame {i}")
ax.axis("off")
elif idx==2:
ax.imshow(predicts[1][i])
ax.set_title(f"predicted Frame {i}")
ax.axis("off")
else:
ax.imshow(predicts[2][i])
ax.set_title(f"predicted Frame {i}")
ax.axis("off")
# -
| check_seq2seq_inside.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <style>div.container { width: 100% }</style>
# <img style="float:left; vertical-align:text-bottom;" height="65" width="172" src="../assets/holoviz-logo-unstacked.svg" />
# <div style="float:right; vertical-align:text-bottom;"><h2>Tutorial 3: Interlinked Panels</h2></div>
# +
import panel as pn
pn.extension()
# -
# In the previous section we learned the very basics of working with Panel. Specifically we looked at the different types of components, how to update them, and how to serve a Panel application or dashboard. However, to start building actual apps with Panel we need to be able to add interactivity by linking different components together. In this section we will learn how to link widgets to outputs to start building some simple interactive applications.
#
# In this section we will once again make use of the earthquake dataset we loaded previously and compute some statistics:
# +
import dask.dataframe as dd
df = dd.read_parquet('../data/earthquakes.parq', columns=['time', 'place', 'mag'])
df['time'] = df.time.dt.strftime('%m/%d/%Y %H:%M:%S')
df = df.reset_index(drop=True).persist()
# -
# ## Widgets and reactive components
#
# `pn.interact` constructs widgets automatically that can then be reconfigured, but if you want more control, you'll want to instantiate widgets explicitly. A widget is an input control that allows a user to change a ``value`` using some graphical UI. A simple example is a `RangeSlider`:
# +
mag_filter = pn.widgets.RangeSlider(name='Magnitude', start=0, end=df.mag.max().compute())
mag_filter
# -
# Here the widget `value` is a [Parameter](https://param.pyviz.org) that is set to a tuple of the selected upper and lower bound. Parameters are an extended type of Python attribute that declare their type, range, etc. so that other code can interact with them in a consistent way. When we change the range using the widget the ``value`` parameter updates, and vice versa if you change the value parameter manually:
mag_filter.value
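# For example (an added illustration of the two-way link), assigning to the parameter programmatically moves the slider handles as well:
mag_filter.value = (0, 2)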
# Now we will declare a second widget:
# +
place_filter = pn.widgets.TextInput(placeholder='Enter a placename')
place_filter
# -
# In addition to the fully automated `pn.interact`, Panel offers a very concise, powerful approach of declaring dependencies between the parameters of an object and the arguments to a function. In practice, this middle ground provides enough control for nearly any app, without the complexity of explicit chains of callbacks that would otherwise be required when customizing the behavior.
#
# Here we will take the two widgets we have instantiated and create a little function that `depends` on the values of these widgets and filters the dataframe. Using the ``pn.depends`` decorator we can then declare that this function depends on the widget values:
@pn.depends(mag_filter, place_filter)
def filter_df(mag_range, place):
lower = df.mag>mag_range[0]
upper = df.mag<mag_range[1]
dffilter = lower & upper
if place:
dffilter &= df.place.str.contains(place)
return df[dffilter].head()
# Finally we lay out the widget and the function:
# +
filtered_view = pn.Row(
pn.Column(mag_filter, place_filter),
pn.panel(filter_df, width=400)
)
filtered_view
# -
# Whenever one of the widgets is changed the `filter_df` function will be triggered and the DataFrame pane will update with the updated data.
#
# Let us also take a look at the repr():
print(filtered_view)
# The `ParamFunction` pane is what listens to changes in the parameters on the widgets and updates the displayed output.
# #### Exercise
#
# Declare two ``Spinner`` widgets with an initial value of 1, then declare a function that depends on the values of both widgets and adds them together. Finally lay out the two widgets and the function in a Panel:
# <details><summary>Solution</summary><br>
#
# ```python
# a = pn.widgets.Spinner(value=1, width=60)
# b = pn.widgets.Spinner(value=1, width=60)
#
# @pn.depends(a.param.value, b.param.value)
# def adder(a, b):
# return a + b
#
# pn.Row(a, '+', b, '=', adder)
# ```
#
# </details>
# ## Callbacks
#
# The `depends` API is still a very high level way of declaring interactive components. Panel also supports the more low-level approach of writing explicit callbacks that are executed in response to changes in some parameter, e.g. the ``value`` of a widget. All parameters can be watched using the ``.param.watch`` API, which will call the provided callback with an event object containing the old and new value of the widget.
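# As a minimal, self-contained sketch of this pattern (the widget and pane here are made up for illustration and are not part of the earthquake example that follows), a watcher on a `TextInput` can update a `Markdown` pane:
# +
text_input = pn.widgets.TextInput(value='hello')
text_pane = pn.pane.Markdown(text_input.value)

def echo_callback(event):
    # event.old holds the previous value, event.new the updated one
    text_pane.object = event.new

text_input.param.watch(echo_callback, 'value')
pn.Column(text_input, text_pane)
# -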
# Now that the data is loaded we will create a slider which we will eventually use to select the row of the dataframe that we want to display:
row_slider = pn.widgets.IntSlider(value=0, start=0, end=len(df)-1)
# Next we create a Pane to display the current row of the dataframe with times formatted nicely:
row_pane = pn.panel(df.loc[row_slider.value].compute())
# Now that we have defined both the widget and the object we want to update, we can declare a callback to link the two. As we learned in the previous section, assigning a new value to the ``object`` of a pane will update the display. In the callback we select the row of the dataframe and then assign it to the ``pane.object``:
def df_callback(event):
row_pane.object = df.loc[event.new].compute()
# Lastly we actually have to register this callback. To do so we provide the callback and the parameter we want to trigger the event on the slider's ``.param.watch`` method:
row_slider.param.watch(df_callback, 'value')
# Now that everything is connected up, we can put both the widget and the pane in a panel and display them:
pn.Column(row_slider, row_pane, width=400)
# As you can see, this process is slightly more laborious than `pn.interact` or even the `pn.depends` approach, but doing it in this way should help you see how everything fits together and can be useful to more precisely control callbacks that update particular parameters or the contents of a larger layout.
# # Moving onwards
#
# Now that we have learned to link parameters between displayed objects and build interactive components, we can start building actual apps and dashboards. Before we move on to plotting and visualization let us quickly use what we have learned by adding interactivity to [the dashboard we built in the previous exercise](./exercises/Building_a_Dashboard.ipynb).
| examples/tutorial/03_Interlinked_Panels.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
set.seed(1)
x = rnorm(100)
eps = rnorm(100, mean=0, sd=0.25)
y = -1 + 0.5*x + eps
length(y)
# Length: 100
#
# B0: -1
#
# B1: 0.5
plot(x, y)
# There is a linear relation between x and y, with x ranging between -3 and 3 and y ranging between -3 and 0.5
lm.fit = lm(y~x)
summary(lm.fit)
# Our B0: -1, Model B0: -1.00942
#
# Our B1: 0.5, Model B1: 0.49973
#
# They are extremely close
plot(x, y)
abline(lm.fit, col=2, lwd=3)
abline(-1, 0.5, col=3, lwd=3)
legend(-1, legend = c("model fit", "pop. regression"), col=2:3, lwd=3)
lm.fit2 = lm(y~x + I(x^2))
plot(x, y)
abline(lm.fit, col=2, lwd=3)
abline(-1, 0.5, col=3, lwd=3)
legend(-1, legend = c("model fit", "pop. regression", "Polynomial"), col=2:4, lwd=3)
summary(lm.fit2)
# No, there is no evidence that Polynomial makes a better fit since its p value is not close to 0
# #### With less noise in data
x = rnorm(100)
eps = rnorm(100, mean=0, sd=0.1)
y = -1 + 0.5*x + eps
lm.fit = lm(y~x)
summary(lm.fit)
plot(x, y)
abline(lm.fit, col=2, lwd=3)
abline(-1, 0.5, col=3, lwd=3)
legend(-1, legend = c("model fit", "pop. regression"), col=2:3, lwd=3)
# R squared is increased, the line is a little skewed, but RSE is also reduced.
confint(lm.fit)
# original model (note: lm.fit was refit above on the lower-noise data, so to compare against the original fit it would need to be stored separately, e.g. as lm.fit.orig, before refitting)
confint(lm.fit)
# Model with less variance has Smaller confidence intervals
| Chapter3/Exercises/Ex13.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: geopy2020
# language: python
# name: geopy2020
# ---
# +
# DON'T
def function():
# these return the actual strings that you put here
return "Error: LineString or Polygon geometries required!"
# or
return("Error! Please insert a list of Shapely Points or coordinate tuples!")
length=function(xyz)
print(length)
# DO
def function():
# ...
print("Error: LineString or Polygon geometries required!")
# no return statement or return explicitly None
return None
# also
# don't return in brackets
return (polygon)
# do
return polygon
# tuple
PList = (point1, point2)
# list
PList = [point1, point2]
# do not make spaces before the brackets, for readability, but after commas
# all work though :-) but for style
# no
Point (x, y)
# no
Point(x,y)
# yes, ideal
Point(x, y)
# variable and function names, lower case first letters is better
# no
Point1 = createPointGeom(1.5,3.2)
# no
point1 = CreatePointGeom(1.5,3.2)
# yes
point1 = createPointGeom(1.5, 3.2)
point1 = create_point_geom(1.5, 3.2)
# shadowing built-in names ('object' is a Python built-in, not strictly a reserved word)
def getCentroid(object):
return object.centroid
# it works, but it's dangerous and might be misleading
# -
# # Lesson3: Point in Polygon & Intersect
#
# - https://kodu.ut.ee/~kmoch/geopython2020/L3/point-in-polygon.html
# +
from shapely.geometry import Point, Polygon
# Create Point objects
p1 = Point(24.952242, 60.1696017)
p2 = Point(24.976567, 60.1612500)
# Create a Polygon
coords = [(24.950899, 60.169158), (24.953492, 60.169158), (24.953510, 60.170104), (24.950958, 60.169990)]
poly = Polygon(coords)
# +
# Let's check what we have
print(p1)
print(p2)
print(poly)
# +
# Check if p1 is within the polygon using the within function
print(p1.within(poly))
# Check if p2 is within the polygon
print(p2.within(poly))
# +
# Our point
print(p1)
# The centroid
print(poly.centroid)
# +
# Does polygon contain p1?
print(poly.contains(p1))
# Does polygon contain p2?
print(poly.contains(p2))
# +
from shapely.geometry import LineString, MultiLineString
# Create two lines
line_a = LineString([(0, 0), (1, 1)])
line_b = LineString([(1, 1), (0, 2)])
# -
line_a.intersects(line_b)
line_a.touches(line_b)
line_a.touches(line_a)
line_a.contains(line_a)
line_a.intersects(line_a)
# +
import geopandas as gpd
# protected species under class 3 monitoring sightings
species_fp = "category_3_species_porijogi.gpkg"
species_data = gpd.read_file(species_fp, layer='category_3_species_porijogi', driver='GPKG')
# +
# porijogi_sub_catchments
polys_fp = "porijogi_sub_catchments.geojson"
polys = gpd.read_file(polys_fp, driver='GeoJSON')
polys.head(5)
# +
import matplotlib.pyplot as plt
# %matplotlib inline
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (15, 15)
subcatch = polys.loc[polys['NAME_1']=='Idaoja']
subcatch.reset_index(drop=True, inplace=True)
fig, ax = plt.subplots()
polys.plot(ax=ax, facecolor='gray')
subcatch.plot(ax=ax, facecolor='red')
species_data.plot(ax=ax, color='blue', markersize=5)
plt.title("species sightings in Porijõgi")
plt.tight_layout()
# +
import shapely.speedups
shapely.speedups.enable()
# -
pip_mask = species_data.within(subcatch.loc[0, 'geometry'])
display(pip_mask)
pip_data = species_data.loc[pip_mask]
pip_data
# +
subcatch = polys.loc[polys['NAME_1']=='Idaoja']
subcatch.reset_index(drop=True, inplace=True)
fig, ax = plt.subplots()
polys.plot(ax=ax, facecolor='gray')
subcatch.plot(ax=ax, facecolor='red')
pip_data.plot(ax=ax, color='gold', markersize=10)
plt.tight_layout()
# -
# # Lesson3: Spatial join
#
# - https://kodu.ut.ee/~kmoch/geopython2020/L3/spatial-join.html
# +
import geopandas as gpd
# Filepath
fp = "porijogi_corine_landuse.shp"
# Read the data
lulc = gpd.read_file(fp)
lulc.head(5)
# -
lulc.columns
# +
import pandas as pd
codes = pd.read_csv('corine_landuse_codes.csv', sep=';')
codes
# -
lulc = lulc.merge(codes, left_on='clc_int', right_on='CLC_CODE')
lulc.sample(10)
selected_cols = ['Landuse', 'LABEL2','geometry']
lulc = lulc[selected_cols]
lulc.sample(10)
# +
# protected species under class 3 monitoring sightings
species_fp = "category_3_species_porijogi.gpkg"
species = gpd.read_file(species_fp, layer='category_3_species_porijogi', driver='GPKG')
display(species.sample(5))
display(species.crs)
display(lulc.crs)
# +
join = gpd.sjoin(species, lulc, how="inner", op="within")
join.head()
# +
# Output path
outfp = "landuse_per_species.shp"
# Save to disk
join.to_file(outfp)
# -
join['NIMI'].value_counts()
join['LABEL2'].value_counts()
# +
data_list = []
for species_id, species_group in join.groupby('NIMI'):
lulc_count = species_group['LABEL2'].value_counts()
top = lulc_count.head(1)
# display(type(top))
# print(top)
data_list.append(
{
'species_id':species_id,
'all_sights': len(species_group),
'top_lulc': top.index[0],
'sights_in_top': top[0]
}
)
print("species_id: {}, number of sightings: {}, top lulc: {}, number: {}".format(species_id, len(species_group), top.index[0], top[0] ))
# +
# Creates DataFrame.
top_sights = pd.DataFrame(data_list)
# Print the data
top_sights.sort_values(by=['all_sights','sights_in_top'], ascending=False)
# -
| L3/lesson3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # KNN-CLASSIFIER for teleCustomer
# <h4>The K-Nearest Neighbour algorithm to classify customers on the basis of the services they use.</h4>
import itertools
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import NullFormatter
import pandas as pd
import matplotlib.ticker as ticker
from sklearn import preprocessing
import os
# %matplotlib inline
# +
DATASET_PATH = os.path.join("../","datasets")
csv_file_path1 = os.path.join(DATASET_PATH,"teleCust1000t.csv")
df = pd.read_csv(csv_file_path1, error_bad_lines=False)
df.head()
# -
df['custcat'].value_counts()
df.hist(column = 'income',bins=50)
df.columns
X = df[['region','tenure', 'age', 'marital', 'address', 'income', 'ed',
'employ', 'retire', 'gender', 'reside']].values
X[0:5]
y = df['custcat'].values
y[0:5]
X = preprocessing.StandardScaler().fit(X).transform(X.astype(float))
X[0:5]
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.2,random_state=4)
print('Train Set : ', X_train.shape,y_train.shape)
print('Test Set : ', X_test.shape,y_test.shape)
# ###### Classification
from sklearn.neighbors import KNeighborsClassifier as KNC
k = 38
neighbor = KNC(n_neighbors = k).fit(X_train,y_train)
yhat = neighbor.predict(X_test)
yhat[0:5]
from sklearn import metrics
print('Train accuracy :: ', metrics.accuracy_score(y_train,neighbor.predict(X_train)))
print('Test accuracy :: ', metrics.accuracy_score(y_test,yhat))
# +
Ks = 100
mean_acc = np.zeros((Ks-1))
std_acc = np.zeros((Ks-1))
ConfustionMx = [];
for n in range(1,Ks):
#Train Model and Predict
neighbor = KNC(n_neighbors = n).fit(X_train,y_train)
yhat=neighbor.predict(X_test)
mean_acc[n-1] = metrics.accuracy_score(y_test, yhat)
std_acc[n-1]=np.std(yhat==y_test)/np.sqrt(yhat.shape[0])
mean_acc
# -
plt.plot(range(1,Ks),mean_acc,'g')
plt.fill_between(range(1,Ks),mean_acc - 1 * std_acc,mean_acc + 1 * std_acc, alpha=0.10)
plt.legend(('Accuracy ', '+/- 1xstd'))
plt.ylabel('Accuracy ')
plt.xlabel('Number of Neighbors (K)')
plt.tight_layout()
plt.show()
print( "The best accuracy was with", mean_acc.max(), "with k=", mean_acc.argmax()+1)
| Classifier/KNN-Classifier.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ---
#
# _You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-text-mining/resources/d9pwm) course resource._
#
# ---
# # Assignment 1
#
# In this assignment, you'll be working with messy medical data and using regex to extract relevant information from the data.
#
# Each line of the `dates.txt` file corresponds to a medical note. Each note has a date that needs to be extracted, but each date is encoded in one of many formats.
#
# The goal of this assignment is to correctly identify all of the different date variants encoded in this dataset and to properly normalize and sort the dates.
#
# Here is a list of some of the variants you might encounter in this dataset:
# * 04/20/2009; 04/20/09; 4/20/09; 4/3/09
# * Mar-20-2009; Mar 20, 2009; March 20, 2009; Mar. 20, 2009; Mar 20 2009;
# * 20 Mar 2009; 20 March 2009; 20 Mar. 2009; 20 March, 2009
# * Mar 20th, 2009; Mar 21st, 2009; Mar 22nd, 2009
# * Feb 2009; Sep 2009; Oct 2010
# * 6/2008; 12/2009
# * 2009; 2010
#
# Once you have extracted these date patterns from the text, the next step is to sort them in ascending chronological order according to the following rules:
# * Assume all dates in xx/xx/xx format are mm/dd/yy
# * Assume all dates where year is encoded in only two digits are years from the 1900's (e.g. 1/5/89 is January 5th, 1989)
# * If the day is missing (e.g. 9/2009), assume it is the first day of the month (e.g. September 1, 2009).
# * If the month is missing (e.g. 2010), assume it is the first of January of that year (e.g. January 1, 2010).
# * Watch out for potential typos as this is a raw, real-life derived dataset.
#
# With these rules in mind, find the correct date in each note and return a pandas Series in chronological order of the original Series' indices.
#
# For example if the original series was this:
#
# 0 1999
# 1 2010
# 2 1978
# 3 2015
# 4 1985
#
# Your function should return this:
#
# 0 2
# 1 4
# 2 0
# 3 1
# 4 3
#
# Your score will be calculated using [Kendall's tau](https://en.wikipedia.org/wiki/Kendall_rank_correlation_coefficient), a correlation measure for ordinal data.
#
# *This function should return a Series of length 500 and dtype int.*
# +
import pandas as pd
doc = []
with open('dates.txt') as file:
for line in file:
doc.append(line)
df = pd.Series(doc)
# -
def date_sorter():
import re
from calendar import month_name
import dateutil.parser
from datetime import datetime
    # There are 3 formats for the dates: 1) dates in numbers 2) dates in text 3) dates without days, only month and year
# 1) Dates are in numbers
format_one = df.str.extract(r"((?:\d{1,2})(?:(?:\/|-)\d{1,2})(?:(?:\/|-)\d{2,4}))")
# 2) Dates are in text
format_two = df.str.extract(r"((?:\d{,2}\s)?(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*(?:-|\.|\s|,)\s?\d{,2}[a-z]*(?:-|,|\s)?\s?\d{2,4})")
# 3) Dates without days, only month and year
format_three = df.str.extract(r'((?:\d{1,2}(?:-|\/))?\d{4})')
dates = pd.to_datetime(format_one.fillna(format_two).fillna(format_three).replace('Decemeber','December',regex=True).replace('Janaury','January',regex=True))
return pd.Series(dates.sort_values().index)
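# A quick sanity check of the output (a sketch; it assumes `dates.txt` was loaded into `df` as above):
# +
order = date_sorter()
print(order.head())   # original indices of the five earliest notes
print(order.shape)    # should be (500,)
# -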
| Assignment/Assignment+1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Datetime - epoch is from Unix time 0 (midnight 1/1/1970)
# ## PS2 - What day of the week was 1/1/1970
import numpy as np
import pandas as pd
myArray = np.array([1,2,3,4,5,6,7])
mySeries = pd.Series(myArray)
mySeries.index = pd.date_range(start='1/1/1970', periods=7)
mySeries.index.dayofweek
import calendar
dayNumber = calendar.weekday(1970, 1, 1) ## First approach
days =["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]
dayNumber
days[dayNumber]
import datetime ## Second
today = datetime.date(2021, 3, 9) #Today's date (It is a Tuesday, today! )
past_date = datetime.date(1970, 1, 1) #Jan 1 1970
days[(today - past_date).days%7 - 2 ] # -1 because today is a Tuesday and another -1 because of zero indexing.
# ## PS2 - UFO Data - Handling Time, Day of Week etc
# ### What day of the week has the most sightings?
# ### On the day with most sightings plot a histogram the time of day the sightings occured
# ### Do the same for the day with the 2nd most sightings
# ### For extra credit - Are there any deductions or patterns you see in the data? Justify with plots or data.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
ufo = pd.read_csv('http://bit.ly/uforeports', parse_dates=['Time'])
ufo
dates=ufo.Time.map(lambda t: t.date()).values
times=ufo.Time.map(lambda t: t.time()).values
unique_dates=ufo.Time.map(lambda t: t.date()).unique()
# +
counts=[]
for date_unique in unique_dates:
Q=0
for date in dates:
if date == date_unique:
Q=Q+1
counts.append(Q)
sorted_idx=np.argsort(counts);
max_date=unique_dates[sorted_idx[-1]]
dayNumber_max = calendar.weekday(max_date.year, max_date.month, max_date.day)
max2_date=unique_dates[sorted_idx[-2]]
dayNumber_max2 = calendar.weekday(max2_date.year, max2_date.month, max2_date.day)
print('day with most sightings is:',max_date,' which is:',days[dayNumber_max])
print('day with second most sightings is:',max2_date,' which is:',days[dayNumber_max2])
# -
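# The loops above scan every date once per unique date; a more concise alternative (a sketch using pandas directly, giving the same top dates) would be:
# +
sightings_per_day = ufo.Time.dt.date.value_counts()
for d, n in sightings_per_day.head(2).items():
    print(d, 'was a', days[calendar.weekday(d.year, d.month, d.day)], 'with', n, 'sightings')
# -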
time_max=[];
for i in range(len(dates)):
if dates[i]==max_date:
time_max.append(times[i].hour+times[i].minute/60)
time_max2=[];
for i in range(len(dates)):
if dates[i]==max2_date:
time_max2.append(times[i].hour+times[i].minute/60)
plt.hist(time_max, bins='auto');
plt.title("times of the day for the day with most sightings ")
plt.hist(time_max2, bins='auto');
plt.title("times of the day for the day with second most sightings ")
| PS2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # How to store a light curve in FITS format?
#
# Once you have detrended or altered a lightcurve in some way, you may want to save it as a FITS file. This allows you to easily share the file with your collaborators or submit your lightcurves as a [MAST High Level Science Product](https://archive.stsci.edu/hlsp/hlsp_guidelines.html) (HLSP). Lightkurve provides a `to_fits()` method which will easily convert your `LightCurve` object into a fits file.
#
# Below is a brief demonstration showing how `to_fits()` works.
#
# Note: if you are considering contributing a HLSP you may want to read the [guidelines](https://archive.stsci.edu/hlsp/hlsp_guidelines_timeseries.html) for contributing fits files. These include which fits headers are required/suggested for your HLSP to be accepted.
# ## Example: editing and writing a lightcurve
#
# First we'll obtain a random Kepler lightcurve from MAST.
from lightkurve import KeplerLightCurveFile
lcf = KeplerLightCurveFile.from_archive(757076, quarter=3)
# Now we'll make some edits to the lightcurve. Below we use the PDCSAP flux from MAST, remove NaN values and clip out any outliers.
lc = lcf.PDCSAP_FLUX.remove_nans().remove_outliers()
lc.plot();
# Now we can use the `to_fits` method to save the lightcurve to a file called *output.fits*.
hdu = lc.to_fits(path='output.fits', overwrite=True)
# Let's take a look at the file and check that it behaved as we expect
from astropy.io import fits
hdu = fits.open('output.fits')
hdu
# `hdu` is a set of astropy.io.fits objects, which is what we would expect. Let's take a look at the header of the first extension.
hdu[0].header
# Looks like it has all the correct information about the target. What about the second extension?
hdu[1].header
# This extension has 4 columns, `TIME`, `FLUX`, `FLUX_ERR` and `CADENCENO`. This is great! What if we wanted to add new keywords to our fits file? HLSP products require some extra keywords. Let's add some keywords to explain who made the data, and what our HLSP is.
hdu = lc.to_fits(path='output.fits',
overwrite=True,
HLSPLEAD='Kepler/K2 GO office',
HLSPNAME='TUTORIAL',
CITATION='HEDGES2018')
hdu[0].header
# Now our new keywords are included in the primary header! What if we want to add more data columns to our fits file? We can add them in the same way. Let's add the data quality to our fits file.
hdu = lc.to_fits(path='output.fits',
overwrite=True,
HLSPLEAD='Kepler/K2 GO office',
HLSPNAME='TUTORIAL',
CITATION='HEDGES2018',
QUALITY=lc.quality)
hdu[1].header
# Now the quality from our lightcurve appears in the second extension. Once all your lightcurves are saved as fits files and you have a README file, you can consider submitting your data products to MAST.
| docs/source/tutorials/2.08-making-fits-files.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Lab 2: Kernelization
# Support Vector Machines are powerful methods, but they also require careful tuning. We'll explore SVM kernels and hyperparameters on an artificial dataset. We'll especially look at model underfitting and overfitting.
# + slideshow={"slide_type": "skip"}
# Auto-setup when running on Google Colab
if 'google.colab' in str(get_ipython()):
# !pip install openml
# General imports
# %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import openml as oml
from matplotlib import cm
# -
# ## Getting the data
# We fetch the Banana data from OpenML: https://www.openml.org/d/1460
bananas = oml.datasets.get_dataset(1460) # Banana data has OpenML ID 1460
X, y, _, _ = bananas.get_data(target=bananas.default_target_attribute, dataset_format='array');
# Quick look at the data:
plt.scatter(X[:,0], X[:,1], c=y,cmap=plt.cm.bwr, marker='.');
# +
# Plotting helpers. Based loosely on https://github.com/amueller/mglearn
def plot_svm_kernel(X, y, title, support_vectors, decision_function, dual_coef=None, show=True):
"""
Visualizes the SVM model given the various outputs. It plots:
    * All the data points, color coded by class: blue or red
    * The support vectors, indicated by circling the points with a black border.
    If the dual coefficients are known (only for kernel SVMs) it paints support vectors with high coefficients darker
* The decision function as a blue-to-red gradient. It is white where the decision function is near 0.
* The decision boundary as a full line, and the SVM margins (-1 and +1 values) as a dashed line
Attributes:
X -- The training data
y -- The correct labels
title -- The plot title
    support_vectors -- the list of the coordinates of the support vectors
decision_function - The decision function returned by the SVM
dual_coef -- The dual coefficients of all the support vectors (not relevant for LinearSVM)
show -- whether to plot the figure already or not
"""
# plot the line, the points, and the nearest vectors to the plane
#plt.figure(fignum, figsize=(5, 5))
plt.title(title)
plt.scatter(X[:, 0], X[:, 1], c=y, zorder=10, cmap=plt.cm.bwr, marker='.')
if dual_coef is not None:
plt.scatter(support_vectors[:, 0], support_vectors[:, 1], c=dual_coef[0, :],
s=70, edgecolors='k', zorder=10, marker='.', cmap=plt.cm.bwr)
else:
plt.scatter(support_vectors[:, 0], support_vectors[:, 1], facecolors='none',
s=70, edgecolors='k', zorder=10, marker='.', cmap=plt.cm.bwr)
plt.axis('tight')
x_min, x_max = -3.5, 3.5
y_min, y_max = -3.5, 3.5
XX, YY = np.mgrid[x_min:x_max:300j, y_min:y_max:300j]
Z = decision_function(np.c_[XX.ravel(), YY.ravel()])
# Put the result into a color plot
Z = Z.reshape(XX.shape)
plt.contour(XX, YY, Z, colors=['k', 'k', 'k'], linestyles=['--', '-', '--'],
levels=[-1, 0, 1])
plt.pcolormesh(XX, YY, Z, vmin=-1, vmax=1, cmap=plt.cm.bwr, alpha=0.1)
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
if show:
plt.show()
def heatmap(values, xlabel, ylabel, xticklabels, yticklabels, cmap=None,
vmin=None, vmax=None, ax=None, fmt="%0.2f"):
"""
Visualizes the results of a grid search with two hyperparameters as a heatmap.
Attributes:
values -- The test scores
xlabel -- The name of hyperparameter 1
ylabel -- The name of hyperparameter 2
xticklabels -- The values of hyperparameter 1
yticklabels -- The values of hyperparameter 2
cmap -- The matplotlib color map
vmin -- the minimum value
vmax -- the maximum value
ax -- The figure axes to plot on
fmt -- formatting of the score values
"""
if ax is None:
ax = plt.gca()
# plot the mean cross-validation scores
    img = ax.pcolor(values, cmap=cmap, vmin=vmin, vmax=vmax)
img.update_scalarmappable()
ax.set_xlabel(xlabel, fontsize=10)
ax.set_ylabel(ylabel, fontsize=10)
ax.set_xticks(np.arange(len(xticklabels)) + .5)
ax.set_yticks(np.arange(len(yticklabels)) + .5)
ax.set_xticklabels(xticklabels)
ax.set_yticklabels(yticklabels)
ax.set_aspect(1)
ax.tick_params(axis='y', labelsize=12)
ax.tick_params(axis='x', labelsize=12, labelrotation=90)
for p, color, value in zip(img.get_paths(), img.get_facecolors(), img.get_array()):
x, y = p.vertices[:-2, :].mean(0)
if np.mean(color[:3]) > 0.5:
c = 'k'
else:
c = 'w'
ax.text(x, y, fmt % value, color=c, ha="center", va="center", size=10)
return img
# -
# ## Exercise 1: Linear SVMs
# First, we'll look at linear SVMs and the different outputs they produce. Check the [documentation of LinearSVC](https://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html#sklearn.svm.LinearSVC)
#
# The most important inputs are:
# * C -- The C hyperparameter controls the misclassification cost and therefore the amount of regularization. Lower values correspond to more regularization
# * loss - The loss function, typically 'hinge' or 'squared_hinge'. Squared hinge is the default. Normal hinge is less strict.
# * dual -- Whether to solve the primal optimization problem or the dual (default). The primal is recommended if you have many more data points than features (although our dataset is quite small, so it won't matter much).
#
# The most important outputs are:
# * decision_function - The function used to classify any point. In the case of linear SVMs, this corresponds to the learned hyperplane, or $y = \mathbf{wX} + b$. It can be evaluated at every point; if the result is positive the point is classified as the positive class and vice versa.
# * coef_ - The model coefficients, i.e. the weights $\mathbf{w}$
# * intercept_ - the bias $b$
#
# From the decision function we can find which points are support vectors and which are not: the support vectors are all
# the points that fall inside the margin, i.e. have a decision value between -1 and 1, or that are misclassified. Also see the lecture slides.
# + [markdown] pycharm={"name": "#%% md\n"}
# ### Exercise 1.1: Linear SVMs
# Train a LinearSVC with C=0.001 and hinge loss. Then, use the plotting function `plot_svm_kernel` to plot the results. For this you need to extract the support vectors from the decision function. There is a hint below should you get stuck.
# Interpret the plot as detailed as you can. Afterwards you can also try some different settings. You can also try using the primal instead of the dual optimization problem (in that case, use squared hinge loss).
# +
from sklearn.svm import LinearSVC
svc = LinearSVC(C=0.001, loss='hinge').fit(X, y)
support_vector_indices = np.where((2 * y - 1) * svc.decision_function(X) <= 1)[0]
support_vectors = X[support_vector_indices]
# Pass the decision function itself (a callable), not its evaluated values, so the plot helper can evaluate it on a grid
plot_svm_kernel(X, y, support_vectors=support_vectors, decision_function=svc.decision_function, title="LinearSVC (C=0.001, hinge loss)")
# Hint: how to compute the support vectors from the decision function (ignore if you want to solve this yourself)
# support_vectors = X[support_vector_indices]
# Note that we can also calculate the decision function manually with the formula y = w*X
# decision_function = np.dot(X, svc.coef_[0]) + svc.intercept_[0]
# -
# ## Exercise 2: Kernelized SVMs
#
# Check the [documentation of SVC](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html)
#
# It has a few more inputs. The most important:
# * kernel - It must be either ‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, or your custom defined kernel.
# * gamma - The kernel width of the `rbf` (Gaussian) kernel. Smaller values mean wider kernels.
# Only relevant when selecting the rbf kernel.
# * degree - The degree of the polynomial kernel. Only relevant when selecting the poly kernel.
#
# There are also more outputs that make our lives easier:
# * support_vectors_ - The array of support vectors
# * n_support_ - The number of support vectors per class
# * dual_coef_ - The coefficients of the support vectors (the dual coefficients)
# ### Exercise 2.1
#
# Evaluate different kernels, with their default hyperparameter settings.
# Outputs should be the 5-fold cross validated accuracy scores for the linear kernel (lin_scores), polynomial kernel (poly_scores) and RBF kernel (rbf_scores). Print the mean and variance of the scores and give an initial interpretation of the performance of each kernel.
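# One possible sketch of such a comparison (not an official solution; it uses the default `SVC` settings for each kernel):
# +
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

lin_scores = cross_val_score(SVC(kernel='linear'), X, y, cv=5)
poly_scores = cross_val_score(SVC(kernel='poly'), X, y, cv=5)
rbf_scores = cross_val_score(SVC(kernel='rbf'), X, y, cv=5)

for name, scores in [('linear', lin_scores), ('poly', poly_scores), ('rbf', rbf_scores)]:
    print("{}: mean accuracy {:.3f}, variance {:.5f}".format(name, scores.mean(), scores.var()))
# -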
# ## Exercise 2: Visualizing the fit
# To better understand what the different kernels are doing, let's visualize their predictions.
# ### Exercise 2.1
# Call and fit SVM with linear, polynomial and RBF kernels with default parameter values. For RBF kernel, use kernel coefficient value (gamma) of 2.0. Plot the results for each kernel with "plot_svm_kernel" function. The plots show the predictions made for the different kernels. The background color shows the prediction (blue or red). The full line shows the decision boundary, and the dashed line the margin. The encircled points are the support vectors.
# + pycharm={"name": "#%%\n"}
from sklearn.svm import SVC
a = SVC(kernel="linear").fit(X, y)
b = SVC(kernel="poly").fit(X, y)
c = SVC(kernel="rbf", gamma=2.0).fit(X, y)
plot_svm_kernel(X, y, "linear", a.support_vectors_, a.decision_function)
plot_svm_kernel(X, y, "poly", b.support_vectors_, b.decision_function)
plot_svm_kernel(X, y, "rbf", c.support_vectors_, c.decision_function)
# -
# ### Exercise 2.2
# Interpret the plots for each kernel. Think of ways to improve the results.
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Exercise 3: Visualizing the RBF models and hyperparameter space
# Select the RBF kernel and optimize the two most important hyperparameters (the $C$ parameter and the kernel width $\gamma$ ).
#
# Hint: values for $C$ and $\gamma$ are typically in [$2^{-15}..2^{15}$] on a log scale.
# -
# ### Exercise 3.1
# First try 3 very different values for $C$ and $\gamma$ (for instance [1e-3,1,1e3]). For each of the 9 combinations, create the same RBF plot as before to understand what the model is doing. Also create a standard train-test split and report the train and test score. Explain the performance results. When are you over/underfitting? Can you see this in the predictions?
# + pycharm={"name": "#%%\n"}
options = [0.001, 1, 1000]
combinations = []
for i in options:
for j in options:
        current = SVC(kernel="rbf", gamma=i, C=j).fit(X, y)
        plot_svm_kernel(X, y, f"RBF (gamma={i}, C={j})", current.support_vectors_, current.decision_function)
# -
# ### Exercise 3.2
# Optimize the hyperparameters using a grid search, trying every possible combination of C and gamma. Show a heatmap of the results and report the optimal hyperparameter values. Use at least 10 values for $C$ and $\gamma$ in [$2^{-15}..2^{15}$] on a log scale. Report accuracy under 3-fold CV. We recommend to use sklearn's [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) and the `heatmap` function defined above. Check their documentation.
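# A possible sketch (not an official solution) using `GridSearchCV` together with the `heatmap` helper defined above:
# +
from sklearn.model_selection import GridSearchCV

param_range = np.logspace(-15, 15, 11, base=2)
param_grid = {'C': param_range, 'gamma': param_range}
grid = GridSearchCV(SVC(kernel='rbf'), param_grid=param_grid, cv=3)
grid.fit(X, y)
print("Best parameters:", grid.best_params_, "- best 3-fold CV accuracy: {:.3f}".format(grid.best_score_))

# cv_results_ lists candidates with C varying slowest and gamma fastest, so reshape to (C, gamma)
scores = grid.cv_results_['mean_test_score'].reshape(len(param_range), len(param_range))
heatmap(scores, xlabel='gamma (log2)', ylabel='C (log2)',
        xticklabels=np.round(np.log2(param_range), 1),
        yticklabels=np.round(np.log2(param_range), 1), cmap='viridis');
# -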
| labs/Lab 2 - Kernelization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # CompEngine dataset analysis
# ## Analysis #4: Get TS-Pymfe meta-features accuracy
#
# **Project URL:** https://www.comp-engine.org/
#
# **Get data in:** https://www.comp-engine.org/#!browse
#
# **Date:** May 31 2020
#
# ### Objectives:
# 1. Extract the meta-features using the ts-pymfe from train and test data
# 2. Drop metafeatures with NaN.
# 3. Apply PCA in the train meta-dataset.
# 4. Use a simple machine learning model to predict the test set.
#
# ### Results (please check the analysis date):
# 1. All metafeatures from all methods combined with all summary functions in pymfe were extracted from both train and test data, for a total of 5165 candidate meta-features.
# 2. Meta-features with infinities or NaN were dropped. Those are mostly related to seasonality, since it seems that only roughly 30% of the used time-series present seasonal behaviour. 1352 of 5165 meta-features were dropped (26.18% of the total), and therefore 3813 meta-features remain.
# 3. The next step is to apply PCA retaining 95% of variance explained by the original meta-features. Before applying PCA we need to choose a normalization strategy. Two methods were considered:
# 1. (Pipeline A) Standard Scaler (traditional standardization): 251 of 3813 dimensions were kept. This corresponds to a dimension reduction of 93.42%.
# 2. (Pipeline B) Robust Sigmoid Scaler (see reference [1]): 223 of 3813 dimensions were kept. This corresponds to a dimension reduction of 94.15%.
# 4. Now it is time for some predictions. I'm using a sklearn RandomForestClassifier model with default hyper-parameters with a fixed random seed.
# 1. The expected accuracy of random guessing is 2.17%.
#     2. (Pipeline A1) An accuracy score of 62.50% was obtained.
#     3. (Pipeline B1) An accuracy score of 62.50% was obtained.
#     4. Even though both pipelines achieved the same accuracy score, Pipeline B is still preferable since it needs fewer dimensions.
# 5. Repeating the last two steps, but with PCA retaining only 75% of variance explained:
# 1. (Pipeline A2) Standard Scaler: 50 of 3813 dimensions were kept. This corresponds to a dimension reduction of 98.69%.
# 2. (Pipeline B2) Robust Sigmoid Scaler: 30 of 3813 dimensions were kept. This corresponds to a dimension reduction of 99.21%.
# 6. Accuracy (PCA 75%):
#     1. (Pipeline A2) An accuracy score of 62.50% was obtained.
#     2. (Pipeline B2) An accuracy score of 67.39% was obtained.
#
#
# ## references:
#
# .. [1] Fulcher, <NAME>. and <NAME>. and <NAME>., "Highly comparative time-series analysis: the empirical structure of time series and their methods" (Supplemental material #1, page 11), Journal of The Royal Society Interface, 2013, doi: 10.1098/rsif.2013.0048, https://royalsocietypublishing.org/doi/abs/10.1098/rsif.2013.0048.
# +
# %matplotlib inline
import typing
import warnings
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import sklearn.decomposition
import sklearn.pipeline
import sklearn.preprocessing
import sklearn.ensemble
import sklearn.metrics
import robust_sigmoid
import pymfe.tsmfe
# +
# Note: using only groups that has at least one meta-feature that can be extracted
# from a unsupervised dataset
groups = "all"
summary = "all"
extractor = pymfe.tsmfe.TSMFE(features="all",
summary=summary,
groups=groups)
# -
data_train = pd.read_csv("../2_exploring_subsample/subsample_train.csv", header=0, index_col="timeseries_id")
data_test = pd.read_csv("../2_exploring_subsample/subsample_test.csv", header=0, index_col="timeseries_id")
# +
assert data_train.shape[0] > data_test.shape[0]
data_train.head()
# +
# Note: using at most the last 1024 observations of each time-series
size_threshold = 1024
# Number of iterations until to save results to .csv
to_csv_it_num = 8
# Note: using dummy data to get the metafeature names
with warnings.catch_warnings():
warnings.filterwarnings("ignore")
mtf_names = extractor.fit(np.random.randn(128),
ts_period=1,
suppress_warnings=True).extract(suppress_warnings=True)[0]
# Note: filepath to store the results
filename_train = "metafeatures_tspymfe_train.csv"
filename_test = "metafeatures_tspymfe_test.csv"
def recover_data(filepath: str,
index: typing.Collection[str],
def_shape: typing.Tuple[int, int]) -> typing.Tuple[pd.DataFrame, int]:
"""Recover data from the previous experiment run."""
filled_len = 0
try:
results = pd.read_csv(filepath, index_col=0)
assert results.shape == def_shape
# Note: find the index where the previous run was interrupted
while filled_len < results.shape[0] and not results.iloc[filled_len, :].isnull().all():
filled_len += 1
except (AssertionError, FileNotFoundError):
results = pd.DataFrame(index=index, columns=mtf_names)
return results, filled_len
results_train, start_ind_train = recover_data(filepath=filename_train,
index=data_train.index,
def_shape=(data_train.shape[0], len(mtf_names)))
results_test, start_ind_test = recover_data(filepath=filename_test,
index=data_test.index,
def_shape=(data_test.shape[0], len(mtf_names)))
# +
assert results_train.shape == (data_train.shape[0], len(mtf_names))
assert results_test.shape == (data_test.shape[0], len(mtf_names))
print("Train start index:", start_ind_train)
print("Test start index:", start_ind_test)
# -
print("Number of candidate meta-features per dataset:", len(mtf_names))
def extract_metafeatures(data: pd.DataFrame, results: pd.DataFrame, start_ind: int, output_file: str) -> None:
print(f"Starting extraction from index {start_ind}...")
for i, (cls, _, vals) in enumerate(data.iloc[start_ind:, :].values, start_ind):
ts = np.asarray(vals.split(",")[-size_threshold:], dtype=float)
extractor.fit(ts, verbose=1, suppress_warnings=True)
with warnings.catch_warnings():
warnings.filterwarnings("ignore")
res = extractor.extract(verbose=1, suppress_warnings=True)
results.iloc[i, :] = res[1]
if i % to_csv_it_num == 0:
results.to_csv(output_file)
print(f"Saved results at index {i} in file {output_file}.")
results.to_csv(output_file)
# +
extract_metafeatures(data=data_train,
results=results_train,
start_ind=start_ind_train,
output_file=filename_train)
extract_metafeatures(data=data_test,
results=results_test,
start_ind=start_ind_test,
output_file=filename_test)
# -
# Note: analysing the NaN count.
results_train.replace([np.inf, -np.inf], np.nan, inplace=True)
nan_count = results_train.isnull().sum()
pd_nan_count = nan_count.iloc[nan_count.to_numpy().nonzero()].value_counts()
pd_nan_count = pd.concat([pd_nan_count, pd_nan_count / results_train.shape[1]], axis=1)
pd_nan_count = pd_nan_count.rename(columns={0: "Number of meta-features", 1: "Proportion of meta-features"})
pd_nan_count.index = map("{} (missing on {:.2f}% of all train time-series)".format, pd_nan_count.index, 100. * pd_nan_count.index / results_train.shape[0])
pd_nan_count.index.name = "Missing values count"
pd_nan_count
# +
# Note: some meta-features with high rates of missing values. Which are those?
ind = (nan_count >= 0.7 * data_train.shape[0]).to_numpy().nonzero()
print(list(results_train.columns[ind]))
# Post note: all meta-features with high rates of missing values seems to be
# related to seasonality or cyclicity. This says something about the used subset:
# roughly 70% of the time-series are not seasonal.
# +
results_train.dropna(axis=1, inplace=True)
print("Train shape after dropping NaN column:", results_train.shape)
print(f"Dropped {len(mtf_names) - results_train.shape[1]} of {len(mtf_names)} meta-features "
f"({100 * (1 - results_train.shape[1] / len(mtf_names)):.2f}% from the total).")
results_test = results_test.loc[:, results_train.columns]
# Note: remove NaN values in the test set with the mean value, since we
# can't drop its columns as in the train data
results_test.fillna(results_test.mean(), inplace=True)
# Note: sanity check if the columns where dropped correctly
assert np.all(results_train.columns == results_test.columns)
# -
def get_accuracy(pipeline: sklearn.pipeline.Pipeline,
X_train: np.ndarray,
X_test: np.ndarray,
y_train: np.ndarray,
y_test:np.ndarray) -> float:
    pipeline.fit(X_train)
X_subset_train = pipeline.transform(X_train)
X_subset_test = pipeline.transform(X_test)
assert X_subset_train.shape[1] == X_subset_test.shape[1]
# Note: sanity check if train project is zero-centered
assert np.allclose(X_subset_train.mean(axis=0), 0.0)
print("Train shape after PCA:", X_subset_train.shape)
print("Test shape after PCA :", X_subset_test.shape)
print(f"Total of {X_subset_train.shape[1]} of {X_train.shape[1]} "
f"dimensions kept for {100. * var_explained:.2f}% variance explained "
f"(reduction of {100. * (1 - X_subset_train.shape[1] / X_train.shape[1]):.2f}%).")
rf = sklearn.ensemble.RandomForestClassifier(random_state=16)
rf.fit(X=X_subset_train, y=y_train)
y_pred = rf.predict(X_subset_test)
# Note: since the test set is balanced, we can use the traditional accuracy
test_acc = sklearn.metrics.accuracy_score(y_test, y_pred)
return test_acc
# +
var_explained = 0.95
pipeline_a1 = sklearn.pipeline.Pipeline((
("zscore", sklearn.preprocessing.StandardScaler()),
("pca95", sklearn.decomposition.PCA(n_components=var_explained, random_state=16))
))
test_acc_a1 = get_accuracy(pipeline=pipeline_a1,
X_train=results_train.values,
X_test=results_test.values,
y_train=data_train.category.values,
y_test=data_test.category.values)
# This is equivalent of guessing only the majority class, which can be any class
# in this case since the dataset is perfectly balanced
print(f"Expected accuracy by random guessing: {1 / data_test.category.unique().size:.4f}")
print(f"Test accuracy (pipeline A1 - StandardScaler (VE 95%)): {test_acc_a1:.4f}")
# +
var_explained = 0.75
pipeline_a2 = sklearn.pipeline.Pipeline((
("zscore", sklearn.preprocessing.StandardScaler()),
("pca75", sklearn.decomposition.PCA(n_components=var_explained, random_state=16))
))
test_acc_a2 = get_accuracy(pipeline=pipeline_a2,
X_train=results_train.values,
X_test=results_test.values,
y_train=data_train.category.values,
y_test=data_test.category.values)
# This is equivalent of guessing only the majority class, which can be any class
# in this case since the dataset is perfectly balanced
print(f"Expected accuracy by random guessing: {1 / data_test.category.unique().size:.4f}")
print(f"Test accuracy (pipeline A2 - StandardScaler (VE 75%)): {test_acc_a2:.4f}")
# +
var_explained = 0.95
pipeline_b1 = sklearn.pipeline.Pipeline((
("robsigmoid", robust_sigmoid.RobustSigmoid()),
("pca95", sklearn.decomposition.PCA(n_components=var_explained, random_state=16))
))
test_acc_b1 = get_accuracy(pipeline=pipeline_b1,
X_train=results_train.values,
X_test=results_test.values,
y_train=data_train.category.values,
y_test=data_test.category.values)
# This is equivalent of guessing only the majority class, which can be any class
# in this case since the dataset is perfectly balanced
print(f"Expected accuracy by random guessing: {1 / data_test.category.unique().size:.4f}")
print(f"Test accuracy (pipeline B1 - RobustSigmoid (VE 95%)) : {test_acc_b1:.4f}")
# +
var_explained = 0.75
pipeline_b2 = sklearn.pipeline.Pipeline((
("robsigmoid", robust_sigmoid.RobustSigmoid()),
("pca75", sklearn.decomposition.PCA(n_components=var_explained, random_state=16))
))
test_acc_b2 = get_accuracy(pipeline=pipeline_b2,
X_train=results_train.values,
X_test=results_test.values,
y_train=data_train.category.values,
y_test=data_test.category.values)
# This is equivalent of guessing only the majority class, which can be any class
# in this case since the dataset is perfectly balanced
print(f"Expected accuracy by random guessing: {1 / data_test.category.unique().size:.4f}")
print(f"Test accuracy (pipeline B2 - RobustSigmoid (VE 75%)) : {test_acc_b2:.4f}")
| 4_extracting_mtf_tsmfe/tspymfe_mtf_extraction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="k_5UhNA8gI5N"
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM, Bidirectional, Conv2D, Reshape, Dropout
from tensorflow.keras.callbacks import EarlyStopping, LearningRateScheduler
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.backend import clear_session
from tensorflow import math
import matplotlib.pyplot as plt
import numpy as np
from utils import load_dataset, train
# + id="NL2JkcK0gQaR"
X_train, y_train, X_test, y_test = load_dataset('m2')
X_train = np.expand_dims(X_train, axis=-1)
X_test = np.expand_dims(X_test, axis=-1)
# -
def schedule(epoch, lr) -> float:
if epoch >= 200 and epoch % 25 == 0:
lr = lr * math.exp(-0.1)
return lr
# + id="CDZonLvgUg2R"
clear_session()
conv_size = 8
model = Sequential([
Conv2D(conv_size,
kernel_size=4,
activation='relu',
input_shape=(4, 1000, 1)
),
Reshape((997, conv_size)), # squeeze
LSTM(4, dropout=0.3),
Dense(64, activation='relu'),
Dropout(0.2),
Dense(1, activation='relu')
])
# model.summary()
# + id="-18cJKWvVagq"
scheduler = LearningRateScheduler(schedule)
es = EarlyStopping(monitor='loss', patience=15, verbose=1)
callbacks = [scheduler, es]
optimizer = Adam(lr=1e-3)
epochs = 1500
validation_freq = 5
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 8184059, "status": "ok", "timestamp": 1606688892036, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhVOO2f8bSU9D2WCOuhm9D6m40pljIpR5gmOLVhCA=s64", "userId": "15273504440532916403"}, "user_tz": -120} id="l7zuT5LpXDKx" outputId="95109dee-e554-4c6c-8e8c-9d5bfd6939bd"
model = train(dataset=(X_train, y_train, X_test, y_test),
model=model,
epochs=epochs,
verbose=1,
validation_freq=validation_freq,
optimizer=optimizer,
callbacks=callbacks)
# -
| src/conv-lstm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: tf_gpu
# language: python
# name: tf_gpu
# ---
# + [markdown] id="x1MC8MkgV2gW" colab_type="text"
#
# + [markdown] id="4suMUv4UKzGU" colab_type="text"
# [](https://github.com/ym001/Manteia/blob/master/notebook/notebook_Manteia_classification_visualisation.ipynb)
#
# Run this notebook with a GPU: Edit -> Notebook settings -> Hardware accelerator -> GPU
# + id="nXesee8RWngy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="88e96085-98bb-4464-c700-8634e5a75327"
pip install manteia
# + id="IGPczm5WV1Gg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000, "referenced_widgets": ["b49388e8595e4ff9abe6e8a737d982a1", "009cd66663124764b66c149f32943500", "6bef63fcf039400fbac3e03a4a4e96e0", "04923d79ab1f4299a0baafffad307949", "2d74d3006ce24412bec6df97f4adc7ad", "<KEY>", "<KEY>", "2f517f0d78794362a189d744fd6161f6", "<KEY>", "d30779b02e6c434c8f0282fa314fcaa5", "25d8bf953dad46aa85435ec2c62a445d", "ad6802f270db461f86e5594a846c2175", "a6105d9e5c4540e5a9b6e000848be40d", "<KEY>", "efd7f9ca917c41dd9a82001aac70a919", "<KEY>", "<KEY>", "17f802c4b18e484daed66ec1414713e5", "<KEY>", "<KEY>", "<KEY>", "8e83f7c48ce949aa8a99551c7d57af51", "5165d6955d6645809d5e3ed560017a60", "94ab32ed12104a11ad4f7ba8a4f4d986"]} outputId="7c0aefa9-fdbb-4706-a4c7-cb898bd02996"
from Manteia.Classification import Classification
from Manteia.Model import Model
from Manteia.Dataset import Dataset
from Manteia.Statistic import Statistic
from Manteia.Visualisation import Visualisation
ds=Dataset('SST-5',path='path to the dataset dir')
model = Model(model_name ='bert',num_labels=5,epochs=20)
cl=Classification(model,documents_train=ds.documents_train[:2000],labels_train=ds.labels_train[:2000],process_classif=True)
visu = Visualisation()
visu.plot_train(model.history['loss'],model.history['accuracy'],granularity=5)
# + id="5ekaykpJV1Gl" colab_type="code" colab={}
# + id="daUZAFFeV1Gp" colab_type="code" colab={}
| notebook/notebook_Manteia_classification_visualisation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: pytorchenv
# language: python
# name: pytorchenv
# ---
# # Numpy Tutorial
# > A tutorial on Numpy
#
# - toc: true
# - badges: true
# - comments: true
# - categories: [jupyter]
# - image: images/Numpy/logo.png
# ### Numpy Tutorial
## import numpy
import numpy as np
my_lst = [1, 2, 3]
my_lst
np.array(my_lst)
my_matrix = [[1, 2, 3],[4, 5, 6],[7, 8, 9]]
my_matrix
np.array(my_matrix)
# ### Built-ins
np.arange(0,11)
np.arange(0,11,2)
# ### Zeros and ones
np.zeros(3)
np.zeros((3, 3))
np.ones(3)
np.ones((3, 3))
# ### linspace
np.linspace(0, 10, 3)
np.linspace(0,5,20)
# ### eye
np.eye(4)
# ### Random
# ##### rand
np.random.rand(2)
np.random.rand(5, 5)
# ##### randn
np.random.randn(2)
np.random.randn(5, 5)
# ##### randint
np.random.randint(1,100)
np.random.randint(1,100,10)
# ### seed
np.random.seed(42)
np.random.rand(4)
np.random.seed(42)
np.random.rand(4)
# ### Array Attributes and Methods
arr = np.arange(25)
ranarr = np.random.randint(0,50,10)
ranarr
arr
# ### Reshape
arr.reshape(5,5)
# ### max, min, argmax, argmin
ranarr
ranarr.max()
ranarr.min()
ranarr.argmax()
ranarr.argmin()
# ### Shape
arr.shape
# Notice the two sets of brackets
arr.reshape(1,25)
arr.reshape(1,25).shape
arr.reshape(25,1)
arr.reshape(25,1).shape
# ### dtype
arr.dtype
arr2 = np.array([1.2, 3.4, 5.6])
arr2.dtype
# ### Bracket Indexing and Selection
#Get a value at an index
arr[8]
#Get values in a range
arr[1:5]
#Get values in a range
arr[0:5]
# ### Broadcasting
# +
#Setting a value with index range (Broadcasting)
arr[0:5]=100
#Show
arr
# +
# Reset array, we'll see why I had to reset in a moment
arr = np.arange(0,11)
#Show
arr
# +
#Important notes on Slices
slice_of_arr = arr[0:6]
#Show slice
slice_of_arr
# +
#Change Slice
slice_of_arr[:]=99
#Show Slice again
slice_of_arr
# -
arr
# +
#To get a copy, need to be explicit
arr_copy = arr.copy()
arr_copy
# -
# ### Indexing a 2D array (matrices)
# +
arr_2d = np.array(([5,10,15],[20,25,30],[35,40,45]))
#Show
arr_2d
# -
#Indexing row
arr_2d[1]
#Indexing 2nd column
arr_2d[:,1]
# +
# Format is arr_2d[row][col] or arr_2d[row,col]
# Getting individual element value
arr_2d[1][0]
# +
# 2D array slicing
#Shape (2,2) from top right corner
arr_2d[:2,1:]
# -
#Shape bottom row
arr_2d[2]
#Shape bottom row
arr_2d[2,:]
# ### Conditional Selection
arr = np.arange(1,11)
arr
arr > 4
bool_arr = arr>4
bool_arr
arr[bool_arr]
arr[arr>2]
# ### Arithmetic
arr = np.arange(0, 11)
arr
arr + arr
arr * arr
arr - arr
# This will raise a Warning on division by zero, but not an error!
# It just fills the spot with nan
arr/arr
# Also a warning (but not an error) relating to infinity
1/arr
arr**3
# ### Universal Array Functions
# Taking Square Roots
np.sqrt(arr)
# Calculating exponential (e^)
np.exp(arr)
# Trigonometric Functions like sine
np.sin(arr)
# Taking the Natural Logarithm
np.log(arr)
# ### Axis Logic
arr_2d = np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12]])
arr_2d
arr_2d.sum(axis=0) #(columnwise)
arr_2d.sum(axis=1) #(rowwise)
| _notebooks/2021-04-30-Numpy Tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# ## Analyze A/B Test Results
#
# This project will assure you have mastered the subjects covered in the statistics lessons. The hope is to have this project be as comprehensive of these topics as possible. Good luck!
#
# ## Table of Contents
# - [Introduction](#intro)
# - [Part I - Probability](#probability)
# - [Part II - A/B Test](#ab_test)
# - [Part III - Regression](#regression)
#
#
# <a id='intro'></a>
# ### Introduction
#
# A/B tests are very commonly performed by data analysts and data scientists. It is important that you get some practice working with the difficulties of these tests.
#
# For this project, you will be working to understand the results of an A/B test run by an e-commerce website. Your goal is to work through this notebook to help the company understand if they should implement the new page, keep the old page, or perhaps run the experiment longer to make their decision. Datasets used in this project can be found [here](https://d17h27t6h515a5.cloudfront.net/topher/2017/December/5a32c9db_analyzeabtestresults-2/analyzeabtestresults-2.zip)<br>
# **As you work through this notebook, follow along in the classroom and answer the corresponding quiz questions associated with each question.** The labels for each classroom concept are provided for each question. This will assure you are on the right track as you work through the project, and you can feel more confident in your final submission meeting the criteria. As a final check, assure you meet all the criteria on the [RUBRIC](https://review.udacity.com/#!/projects/37e27304-ad47-4eb0-a1ab-8c12f60e43d0/rubric).
#
#
# <a id='probability'></a>
# ### Part I - Probability
#
# To get started, let's import our libraries.
# +
import pandas as pd
import numpy as np
import random
import matplotlib.pyplot as plt
# %matplotlib inline
#We are setting the seed to assure you get the same answers on quizzes as we set up
random.seed(42)
# -
# `1.` Now, read in the `ab_data.csv` data. Store it in `df`. **Use your dataframe to answer the questions in Quiz 1 of the classroom.**
#
# a. Read in the dataset and take a look at the top few rows here:
df = pd.read_csv('ab_data.csv')
df.head()
# b. Use the below cell to find the number of rows in the dataset.
print('No. of rows in Dataset:',df.shape[0])
# c. The number of unique users in the dataset.
print('No. of unique users in Dataset:',df.user_id.nunique())
# d. The proportion of users converted.
print('Proportion of converted users:',format(100*df.converted.mean(),'.3f'),'%')
# e. The number of times the `new_page` and `treatment` don't line up.
df.query("(group == 'treatment' and landing_page == 'old_page') or (group == 'control' and landing_page == 'new_page')" ).shape[0]
# f. Do any of the rows have missing values?
df.info()
# There are no rows with missing values (NaNs).
# `2.` For the rows where **treatment** is not aligned with **new_page** or **control** is not aligned with **old_page**, we cannot be sure if this row truly received the new or old page. Use **Quiz 2** in the classroom to provide how we should handle these rows.
#
# a. Now use the answer to the quiz to create a new dataset that meets the specifications from the quiz. Store your new dataframe in **df2**.
idx = df.index[((df['group'] == 'treatment') & (df['landing_page'] == 'old_page')) | ((df['group'] == 'control') & (df['landing_page'] == 'new_page'))]
idx ## Store the index of the mismatched rows
df2 = df.drop(idx)
df2.info()
# Double Check all of the correct rows were removed - this should be 0
df2[((df2['group'] == 'treatment') == (df2['landing_page'] == 'new_page')) == False].shape[0]
# `3.` Use **df2** and the cells below to answer questions for **Quiz3** in the classroom.
# a. How many unique **user_id**s are in **df2**?
print('No. of unique user_ids in df2 Dataset:',df2['user_id'].nunique())
# b. There is one **user_id** repeated in **df2**. What is it?
df2[df2.duplicated('user_id')]
# c. What is the row information for the repeat **user_id**?
# >The index number for this duplicate *user_id* is **2893**, with:
# * **user_ID**: 773192
# * **group**: treatment
# * **landing_page**: new_page
# d. Remove **one** of the rows with a duplicate **user_id**, but keep your dataframe as **df2**.
df2 = df2.drop_duplicates(['user_id'], keep = 'first')
df2.info()
# `4.` Use **df2** in the below cells to answer the quiz questions related to **Quiz 4** in the classroom.
#
# a. What is the probability of an individual converting regardless of the page they receive?
print('Probability of an individual converting:',df2['converted'].mean())
# b. Given that an individual was in the `control` group, what is the probability they converted?
df2.groupby(['group'])['converted'].mean()
# >Probability of an individual converting while being in the *control* group = **0.1203**
# c. Given that an individual was in the `treatment` group, what is the probability they converted?
# >Probability of an individual converting while being in the *treatment* group = **0.1188**
# d. What is the probability that an individual received the new page?
print('Probability that an individual received the new page:',(df2['landing_page'] == 'new_page').mean())
# e. Consider your results from a. through d. above, and explain below whether you think there is sufficient evidence to say that the new treatment page leads to more conversions.
# >Given that the conversion probability in the treatment group (**0.1188**) is essentially the same as (in fact slightly lower than) the conversion probability in the control group (**0.1203**), and that the pages were split roughly evenly between groups (**0.500** probability of receiving the new page), there is no evidence here that the new treatment page leads to more conversions.
# <a id='ab_test'></a>
# ### Part II - A/B Test
#
# Notice that because of the time stamp associated with each event, you could technically run a hypothesis test continuously as each observation was observed.
#
# However, then the hard question is do you stop as soon as one page is considered significantly better than another or does it need to happen consistently for a certain amount of time? How long do you run to render a decision that neither page is better than another?
#
# These questions are the difficult parts associated with A/B tests in general.
#
#
# `1.` For now, consider you need to make the decision just based on all the data provided. If you want to assume that the old page is better unless the new page proves to be definitely better at a Type I error rate of 5%, what should your null and alternative hypotheses be? You can state your hypothesis in terms of words or in terms of **$p_{old}$** and **$p_{new}$**, which are the converted rates for the old and new pages.
# The NULL and Alternative Hypothesis can be framed as follows:
# $$ H_0: p_{new} \leq p_{old} $$
# $$ H_1: p_{new} > p_{old} $$
# `2.` Assume under the null hypothesis, $p_{new}$ and $p_{old}$ both have "true" success rates equal to the **converted** success rate regardless of page - that is $p_{new}$ and $p_{old}$ are equal. Furthermore, assume they are equal to the **converted** rate in **ab_data.csv** regardless of the page. <br><br>
#
# Use a sample size for each page equal to the ones in **ab_data.csv**. <br><br>
#
# Perform the sampling distribution for the difference in **converted** between the two pages over 10,000 iterations of calculating an estimate from the null. <br><br>
#
# Use the cells below to provide the necessary parts of this simulation. If this doesn't make complete sense right now, don't worry - you are going to work through the problems below to complete this problem. You can use **Quiz 5** in the classroom to make sure you are on the right track.<br><br>
# a. What is the **convert rate** for $p_{new}$ under the null?
p_new = df2['converted'].mean()
p_new
# b. What is the **convert rate** for $p_{old}$ under the null? <br><br>
p_old = df2['converted'].mean()
p_old
# c. What is $n_{new}$?
n_new = df2[df2['group']== 'treatment'].shape[0]
n_new
# d. What is $n_{old}$?
n_old = df2[df2['group']== 'control'].shape[0]
n_old
# e. Simulate $n_{new}$ transactions with a convert rate of $p_{new}$ under the null. Store these $n_{new}$ 1's and 0's in **new_page_converted**.
# Note: np.random.binomial(n_new, p_new) returns the total count of simulated conversions (a single number)
# rather than an array of n_new 0/1 draws; dividing by n_new below yields the same simulated conversion proportion.
new_page_converted = np.random.binomial(n_new, p_new)
# f. Simulate $n_{old}$ transactions with a convert rate of $p_{old}$ under the null. Store these $n_{old}$ 1's and 0's in **old_page_converted**.
old_page_converted = np.random.binomial(n_old,p_old)
# g. Find $p_{new}$ - $p_{old}$ for your simulated values from part (e) and (f).
(new_page_converted/n_new) - (old_page_converted/n_old)
# h. Simulate 10,000 $p_{new}$ - $p_{old}$ values using this same process similarly to the one you calculated in parts **a. through g.** above. Store all 10,000 values in a numpy array called **p_diffs**.
p_diffs = []
for _ in range(10000):
    new_page_converted = np.random.binomial(n_new, p_new)
    old_page_converted = np.random.binomial(n_old, p_old)
    diffs = new_page_converted/n_new - old_page_converted/n_old
    p_diffs.append(diffs)
p_diffs = np.array(p_diffs)  # store as a numpy array so the vectorized comparison below works
# i. Plot a histogram of the **p_diffs**. Does this plot look like what you expected? Use the matching problem in the classroom to assure you fully understand what was computed here.
plt.hist(p_diffs)
# j. What proportion of the **p_diffs** are greater than the actual difference observed in **ab_data.csv**?
act_diff = df2[df2['group'] == 'treatment']['converted'].mean() - df2[df2['group'] == 'control']['converted'].mean()
(act_diff < p_diffs).mean()
# k. In words, explain what you just computed in part **j.** What is this value called in scientific studies? What does this value mean in terms of whether or not there is a difference between the new and old pages?
# >**This is a *p*-value.** A *p*-value is the probability of observing our statistic, or one more extreme in favor of the alternative, if the null hypothesis is actually true. Because our *p*-value is large, a difference at least as extreme as the one observed is quite likely under the null. We therefore fail to reject the null ($H_0$) in favor of the alternative ($H_1$) that the conversion rate of the new page is higher than that of the old page.
# l. We could also use a built-in to achieve similar results. Though using the built-in might be easier to code, the above portions are a walkthrough of the ideas that are critical to correctly thinking about statistical significance. Fill in the below to calculate the number of conversions for each page, as well as the number of individuals who received each page. Let `n_old` and `n_new` refer the the number of rows associated with the old page and new pages, respectively.
import statsmodels.api as sm
convert_old = df2[(df2["landing_page"] == "old_page") & (df2["converted"] == 1)]["user_id"].count()
convert_new = df2[(df2["landing_page"] == "new_page") & (df2["converted"] == 1)]["user_id"].count()
n_old = df2[df2['landing_page']== 'old_page'].shape[0]
n_new = df2[df2['landing_page']== 'new_page'].shape[0]
convert_old, convert_new, n_old, n_new
# m. Now use `stats.proportions_ztest` to compute your test statistic and p-value. [Here](http://knowledgetack.com/python/statsmodels/proportions_ztest/) is a helpful link on using the built in.
z_score, p_value = sm.stats.proportions_ztest([convert_new, convert_old], [n_new, n_old], alternative = 'larger')
print('z-Score:',z_score,'\np-Value:', p_value)
# n. What do the z-score and p-value you computed in the previous question mean for the conversion rates of the old and new pages? Do they agree with the findings in parts **j.** and **k.**?
from scipy.stats import norm
norm.cdf(z_score) # Tells us how significant our z-score is
norm.ppf(1-(0.05/2)) # Tells us what our critical value at 95% confidence is
# >The p-value of **0.905** from *sm.stats.proportions_ztest* matches the p-value we computed manually in 2j.<br><br>
# The z-score is a test statistic that helps us decide whether or not we can reject the null ($H_0$) hypothesis. Since the z-score of -1.3109 does not exceed the one-tailed critical value of 1.645 (nor the two-tailed value of 1.96 computed above), we fail to reject the null ($H_0$) hypothesis.<br><br>
# In this A/B test we are asking whether the new page leads to a higher conversion rate. If we wanted to detect whether the new page is either *better or worse* than the old page, we would use a two-tailed test; since we only care whether the new page is *better*, a one-tailed test is used here.
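# For comparison, the same built-in can also be run with a two-sided alternative; a quick sketch reusing the counts computed above:
# two-sided version: H1 is p_new != p_old (shown only for comparison with the one-tailed test)
z_score_2s, p_value_2s = sm.stats.proportions_ztest([convert_new, convert_old], [n_new, n_old],
                                                    alternative='two-sided')
print('two-sided z-Score:', z_score_2s, '\ntwo-sided p-Value:', p_value_2s)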
# <a id='regression'></a>
# ### Part III - A regression approach
#
# `1.` In this final part, you will see that the result you achieved in the previous A/B test can also be achieved by performing regression.<br><br>
#
# a. Since each row is either a conversion or no conversion, what type of regression should you be performing in this case?
# >*We will do Logistic Regression because of the binary values (True or False) of the **converted** column.*
# b. The goal is to use **statsmodels** to fit the regression model you specified in part **a.** to see if there is a significant difference in conversion based on which page a customer receives. However, you first need to create a column for the intercept, and create a dummy variable column for which page each user received. Add an **intercept** column, as well as an **ab_page** column, which is 1 when an individual receives the **treatment** and 0 if **control**.
df2['intercept'] = 1
df2['ab_page'] = pd.get_dummies(df2['group'])['treatment']  # 1 when treatment, 0 when control; keeps the intercept column equal to 1
df2.head()
# c. Use **statsmodels** to import your regression model. Instantiate the model, and fit the model using the two columns you created in part **b.** to predict whether or not an individual converts.
import statsmodels.api as sm
logit = sm.Logit(df2['converted'],df2[['intercept','ab_page']])
results = logit.fit()
# d. Provide the summary of your model below, and use it as necessary to answer the following questions.
results.summary()
# e. What is the p-value associated with **ab_page**? Why does it differ from the value you found in **Part II**?<br><br> **Hint**: What are the null and alternative hypotheses associated with your regression model, and how do they compare to the null and alternative hypotheses in the **Part II**?
# >The p-value associated with **ab_page** is large (well above the 0.05 threshold), which suggests that the page a user receives is not a statistically significant predictor of conversion; there is no meaningful difference in conversion rate between the new and old pages. It differs from the value found in **Part II** because the regression tests a two-sided alternative, whereas Part II used a one-sided test:
# $$ H_0: p_{new} = p_{old} $$
# $$ H_1: p_{new} \neq p_{old} $$
# f. Now, you are considering other things that might influence whether or not an individual converts. Discuss why it is a good idea to consider other factors to add into your regression model. Are there any disadvantages to adding additional terms into your regression model?
# >_Adding more than one predictor variable (quantitative or categorical) to the regression is a good way to check whether the outcome is influenced by factors other than the page itself. Here we could consider adding the **timestamp** (for example, binned into time-of-day or day-of-week categories) to see whether when a user visits has any influence on the conversion rate._
# <br><br>
# **Disadvantages or problems encountered when adding more terms to a regression:**<br>
# The model may become harder to fit and interpret because of inherent problems such as (a quick collinearity check is sketched after this list):
# * Non-linearity of the response-predictor relationships
# * Correlation of error terms
# * Non-constant Variance and Normally Distributed Errors
# * Outliers/ High leverage points
# * Collinearity
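# As a minimal sketch (assuming the `df2` dataframe with the `intercept` and `ab_page` columns created above), collinearity among candidate predictors can be screened with variance inflation factors; the check becomes more informative as more predictors (e.g., the country dummies below) are added:
from statsmodels.stats.outliers_influence import variance_inflation_factor
X_check = df2[['intercept', 'ab_page']]
vifs = [variance_inflation_factor(X_check.values, i) for i in range(X_check.shape[1])]
print(dict(zip(X_check.columns, vifs)))  # values far above ~5-10 would flag problematic collinearity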
# g. Now along with testing if the conversion rate changes for different pages, also add an effect based on which country a user lives in. You will need to read in the **countries.csv** dataset and merge together your datasets on the appropriate rows. [Here](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.join.html) are the docs for joining tables.
#
# Does it appear that country had an impact on conversion? Don't forget to create dummy variables for these country columns - **Hint: You will need two columns for the three dummy variables.** Provide the statistical output as well as a written response to answer this question.
countries_df = pd.read_csv('./countries.csv')
df_new = countries_df.set_index('user_id').join(df2.set_index('user_id'), how='inner')
df_new.head()
df_new['country'].value_counts()
### Create the necessary dummy variables
df_new[['CA','US']] = pd.get_dummies(df_new['country'])[["CA","US"]]
# h. Though you have now looked at the individual factors of country and page on conversion, we would now like to look at an interaction between page and country to see if there are significant effects on conversion. Create the necessary additional columns, and fit the new model.
#
# Provide the summary results, and your conclusions based on the results.
### Fit Your Linear Model And Obtain the Results
logit = sm.Logit(df_new['converted'],df_new[['intercept','US','CA']])
results = logit.fit()
results.summary()
np.exp(results.params)
1/np.exp(results.params)
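# The model above includes only the country dummies. A sketch of the page-by-country interaction model the question asks for (assuming the `ab_page`, `US` and `CA` columns created above; the interaction column names are illustrative) might look like this:
df_new['US_ab_page'] = df_new['US'] * df_new['ab_page']
df_new['CA_ab_page'] = df_new['CA'] * df_new['ab_page']
logit_interaction = sm.Logit(df_new['converted'],
                             df_new[['intercept', 'ab_page', 'US', 'CA', 'US_ab_page', 'CA_ab_page']])
results_interaction = logit_interaction.fit()
results_interaction.summary()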
# <a id='conclusions'></a>
# ### Conclusions
#
#
# Though users from the US have a marginally higher conversion rate, the difference is too small to be practically significant or to support realistic conclusions. Note also that this dataset contains far more users from the US than from the UK or Canada, so the country-level estimates are not equally precise.
| Project 4 - Analyze A-B Test Results/Project 4 - Analyze AB Test Results.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # FloPy $-$ MODFLOW 6
#
# ## Plotting Model Arrays and Results
#
# This notebook demonstrates the simple array and results plotting capabilities of flopy when working with MODFLOW 6 simulations. It does so by loading an existing model and then showing how the `.plot()` method can be used to make simple plots of the model data and model results.
# +
import os
import sys
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from IPython.display import Image
# run installed version of flopy or add local path
try:
import flopy
except:
fpth = os.path.abspath(os.path.join('..', '..'))
sys.path.append(fpth)
import flopy
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__))
# -
# load the Freyberg model into mf6-flopy
sim_name = 'mfsim.nam'
sim_path = "../data/mf6-freyberg"
sim = flopy.mf6.MFSimulation.load(sim_name=sim_name, version='mf6', exe_name='mf6',
sim_ws=sim_path)
# Multiple models can be attached to a mf6 simulation.
#
# Model names can be obtained by using the .model_names attribute of the MFSimulation object
sim.model_names
# A single model can be returned by using the .get_model() method
ml = sim.get_model('gwf_1')
ml
# ## Plotting Model Data
#
# Once a simulation or model object is created, MODFLOW 6 package data can be plotted using the `.plot()` method.
#
# The model grid can be plotted simply by calling the `.plot()` method
ml.modelgrid.plot()
# Two-dimensional data (for example, the model bottom) can be plotted by calling the `.plot()` method for each data array
ml.dis.botm.plot()
# As you can see, the `.plot()` method returns a matplotlib axes object, which can be used to add additional data to the figure. Below we will add black contours to the axes returned in the first line.
ax = ml.dis.botm.plot()
ml.dis.botm.plot(axes=ax, contour=True, pcolor=False)
# You will notice that we passed several keywords in the second line. There are a number of keywords that can be passed to the `.plot()` method to control plotting. Available keywords are:
#
# 1. `axes` - if you already have plot axes you can pass them to the method
# 2. `pcolor` - turns pcolor on if `pcolor=True` or off if `pcolor=False`, default is `pcolor=True`
# 3. `colorbar` - turns on colorbar if `colorbar=True` or off if `colorbar=False`, default is `colorbar=False` and is only used if `pcolor=True`
# 4. `inactive` - turns on a black inactive cell overlay if inactive=True or turns off the inactive cell overlay if `inactive=False`, default is `inactive=True`
# 5. `contour` - turns on contours if `contour=True` or off if `contour=False`, default is `contour=False`
# 6. `clabel` - turns on contour labels if `clabel=True` or off if `clabel=False`, default is `clabel=False` and is only used if `contour=True`
# 7. `grid` - turns on model grid if `grid=True` or off if `grid=False`, default is `grid=False`
# 8. `masked_values` - list with unique values to be excluded from the plot (for example, HNOFLO)
# 9. `mflay` - for three-dimensional data (for example layer bottoms or simulated heads) mflay can be used to plot data for a single layer - note mflay is zero-based
# 10. `kper` - for transient two-dimensional data (for example recharge package data) kper can be used to plot data for a single stress period - note kper is zero-based
# 11. `filename_base` - a base file name that will be used to automatically generate file names for two-dimensional, three-dimensional, and transient two-dimensional data, default is filename_base=None
# 12. `file_extension` - valid matplotlib file extension, default is png and is only used if filename_base is specified
# 13. `matplotlib.pyplot` keywords are also accepted
#
# The previous code block is recreated in a single line using keywords in the code block below.
ml.dis.botm.plot(contour=True)
# We can save the same image to a file
fname = os.path.join(sim_path, "freyberg")
ml.dis.botm.plot(contour=True, filename_base=fname)
# The image file that was just created is shown below
fname = os.path.join(sim_path, "freyberg_b_Layer1.png")
Image(filename=fname)
# ## Plotting package data
#
# Single layer and three-dimensional data can be plotted using the `.plot()` method. Users do not actually need to know whether the data are two- or three-dimensional. The `.plot()` method is attached to the two- and three-dimensional data objects, so it knows how to process the model data. Examples of three-dimensional data are horizontal hydraulic conductivity (hk), layer bottoms (botm), specific storage (ss), etc.
#
# Here we plot the horizontal conductivity for all layers in the model. Since Freyberg is a one-layer model, it only returns data for layer 1. If our model had multiple layers, the same function call would return a matplotlib axes object for each layer.
ml.npf.k.plot(masked_values=[0.], colorbar=True)
# ### Plotting data for a single layer
#
# If the `mflay=` keyword is provided to the `.plot()` method, then data for an individual layer are plotted. Remember `mflay` is zero-based.
#
# If we want to plot a single layer, for example layer 1 (`mflay=0`):
ml.npf.k.plot(mflay=0, masked_values=[0.], colorbar=True)
# ## Plotting transient two-dimensional data
#
# Transient two-dimensional data can be plotted using the `.plot()` method. Users do not actually need to know whether the data are two- or three-dimensional. The `.plot()` method is attached to the data type; therefore, transient data types already know how to process the model data.
#
# Examples of transient data are recharge rates (`rch.recharge`) and pumpage from wells (`wel.q`).
#
# Here we plot recharge rates for the single stress period in the Freyberg model. If our model had multiple transient stress periods, the keyword `kper='all'` could be used to plot data from all stress periods
ml.rch.recharge.plot(kper='all', masked_values=[0.],
colorbar=True)
# We can also save the image to a file by providing the `filename_base=` keyword with an appropriate base file name
fbase = os.path.join(sim_path, "freyberg_rch")
ml.rch.recharge.plot(kper=0, masked_values=[0.], colorbar=True,
filename_base=fbase)
# If the kper keyword is not provided, images are saved for each stress period in the model
#
# The image file that was just created of recharge rates for stress period 1 is shown below
fname = os.path.join(sim_path, 'freyberg_rch_00001.png')
Image(filename=fname)
# ## Plotting simulated model results
#
# Simulated model results can be plotted using the `.plot()` method.
#
# First we create an instance of the HeadFile class with the simulated head file (freyberg.hds) and extract the simulation times available in the binary head file using the `.get_times()` method. Here we plot the last simulated heads in the binary heads file (`totim=times[-1]`). We are also masking cells having the HDRY (1e+30) value and adding a colorbar.
hds_name = os.path.join(sim_path, 'freyberg.hds')
headobj = flopy.utils.HeadFile(hds_name, model=ml)
times = headobj.get_times()
head = headobj.plot(totim=times[-1],
masked_values=[1e30],
colorbar=True)
times
# We can also save the plots of head results for a single layer (or every layer) to a file by providing the `filename_base` keyword with an appropriate base file name
# +
fbase = os.path.join(sim_path, "freyberg_head")
head = headobj.plot(totim=times[-1],
masked_values=[1e30],
colorbar=True,
contour=True, colors='black',
filename_base=fbase,
mflay=0)
# -
fname = os.path.join(sim_path, 'freyberg_head_Layer1.png')
Image(filename=fname)
# ## Passing other `matplotlib.pyplot` keywords to `.plot()` methods
#
# We can also pass `matplotlib.pyplot` keywords to `.plot()` methods attached to the model input data arrays. For example you can pass a `matplotlib` colormap (`cmap=`) keyword to the `.plot()` method to plot contours of simulated heads over a color flood of hk. We can also use the `norm=LogNorm()` keyword to use a log color scale when plotting hydraulic conductivity.
#
# Available `matplotlib` colormaps can be found at http://matplotlib.org/examples/color/colormaps_reference.html
from matplotlib.colors import LogNorm
ax = ml.npf.k.plot(mflay=0, cmap='GnBu', norm=LogNorm(), colorbar=True)
t = headobj.plot(axes=ax, mflay=0, masked_values=[1e30],
pcolor=False, contour=True,
colors='black')
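# For finer control than the `.plot()` convenience methods provide, the head array can also be extracted from the binary file and drawn with flopy's `PlotMapView` (a minimal sketch, reusing the `ml`, `headobj`, and `times` objects created above):
# get the last head array from the binary file and plot it on the model grid
head_array = headobj.get_data(totim=times[-1])
pmv = flopy.plot.PlotMapView(model=ml, layer=0)
quadmesh = pmv.plot_array(head_array, masked_values=[1e30])
pmv.plot_grid(lw=0.5, color='0.5')
plt.colorbar(quadmesh)
plt.show()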
# ## Plotting data for a package, a model, or a simulation
#
# The input data for a model or an individual package can also be plotted using the `.plot()` method. The `.plot()` methods attached to a model or an individual package are meant to provide a way to quickly evaluate model or package input. As a result, there is limited ability to customize the plots. Examples of using the `.plot()` method with a model and with individual packages are demonstrated below.
#
# ### Plot all data for a package
#
# All input data for a package can be plotted using the `.plot()` method. Below all of the data for the dis package is plotted.
ml.dis.plot()
# ### Plot package input data for a specified layer
#
# Package input data for a specified layer can be plotted by passing the `mflay` keyword to the package `.plot()` method. Below, npf package input data for layer 1 (`mflay=0`) are plotted.
ml.npf.plot(mflay=0)
# ### Plot all input data for a model
#
# All of the input data for a model can be plotted using the `.plot()` method. Alternatively a user can pass the `mflay` keyword to plot all model input data for a single layer
ml.plot()
# ### Plot all input data for a simulation
#
# All of the input data for a simulation can be plotted using the `.plot()` method. Alternatively a user can pass the `mflay` keyword to plot all model input data for a single layer
sim.plot()
# # Summary
#
# This notebook demonstrates some of the simple plotting functionality available with flopy when working with MODFLOW 6 simulations and models. Although not described here, the plotting functionality tries to be general by passing keyword arguments given to the `plot()` and `plot_data()` methods down into the `matplotlib.pyplot` routines that do the actual plotting. For those looking to customize these plots, it may be necessary to search for the available keywords. The `plot()` method returns the matplotlib axes objects that are created (or passed). These axes objects can be used to plot additional data (except when plots are saved as image files).
#
# Hope this gets you started!
| examples/Notebooks/flopy3_mf6_plotting_freyberg.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import math
import numpy as np
from torch import nn
import torch
from tltorch.functional import factorized_linear
from tltorch.factorized_tensors import TensorizedTensor
from tltorch.factorized_layers.factorized_linear import FactorizedLinear
import tensorly as tl
# -
# For a **linear layer**
# $\mathbf{y = Wx + b}$
# where
# $\mathbf{y} \in \mathbb{R}^M$
# $\mathbf{W} \in \mathbb{R}^{M \times N}$
# $\mathbf{x} \in \mathbb{R}^N$
# $\mathbf{b} \in \mathbb{R}^M$
#
#
# its **tensorized linear layer** is
# $\mathcal{Y = WX + B}$
# where
# $\mathcal{Y} \in \mathbb{R}^{m_1 \times m_2 \times \cdots m_{d_M}}$
# $\mathcal{W} \in \mathbb{R}^{m_1 \times m_2 \times \cdots m_{d_M} \times n_1 \times n_2 \times \cdots n_{d_N}}$
# $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times \cdots n_{d_N}}$
# $\mathcal{B} \in \mathbb{R}^{m_1 \times m_2 \times \cdots m_{d_M}}$
# $M = \prod_{k=1}^{d_M} m_k$
# $N = \prod_{k=1}^{d_N} n_k$
fl = FactorizedLinear(in_tensorized_features=(3, 2, 2, 4), out_tensorized_features=(2, 2, 4), bias=True, factorization='cp', rank=10, n_layers=1, factorized_forward=False)
# In this example
# `out_features` $M=16$ and `out_tensorized_features` $(m_1, m_2, m_3) = (2, 2, 4)$
# `in_features` $N=48$ and `in_tensorized_features` $(n_1, n_2, n_3, n_4) = (3, 2, 2, 4)$
#
#
# Therefore
# $\mathbf{W}$ has the `weight_shape` of $(M, N) = (16, 48)$
# $\mathcal{W}$ has the `tensorized_shape` of $(m_1, m_2, m_3, n_1, n_2, n_3, n_4) = (2, 2, 4, 3, 2, 2, 4)$ with an `order` of 7
print('out_features: {}'.format(fl.out_features))
print('in_features: {}'.format(fl.in_features))
print('out_tensorized_features: {}'.format(fl.out_tensorized_features))
print('in_tensorized_features: {}'.format(fl.in_tensorized_features))
print('weight_shape: {}'.format(fl.weight_shape))
print('tensorized_shape: {}'.format(fl.tensorized_shape))
print('order: {}'.format(fl.weight.order))
# $\mathcal{W}$ is **factorized** into `rank` $R=10$ `factors` = [`out_factors`, `in_factors`] using **CP-decomposition**
#
# This can be expressed as
# $\mathcal{W} = \sum_{r=1}^{R} \mathbf{gm}_1[:,r] \otimes \mathbf{gm}_2[:,r] \otimes \mathbf{gm}_3[:,r] \otimes \mathbf{gn}_1[:,r] \otimes \mathbf{gn}_2[:,r] \otimes \mathbf{gn}_3[:,r] \otimes \mathbf{gn}_4[:,r]$
# where
# `out_factors` $\mathbf{gm}_k \in \mathbb{R}^{m_k \times R}\ \forall k \in [1,2,...,d_M]$
# `in_factors` $\mathbf{gn}_k \in \mathbb{R}^{n_k \times R}\ \forall k \in [1,2,...,d_N]$
print('decomposition: {}'.format(fl.weight.name))
print('rank: {}'.format(fl.rank))
print('factors: {}'.format(fl.weight.factors))
out_factors = fl.weight.factors[:len(fl.out_tensorized_features)]
print("out_factors: {}".format(out_factors))
in_factors = fl.weight.factors[len(fl.out_tensorized_features):]
print("in_factors: {}".format(in_factors))
# The original **tltorch** `FactorizedLinear` implementation reconstructs $\mathbf{W}$ from the `factors` and uses a regular `Linear` layer during the forward propagation
# + tags=[]
vector_input = torch.rand(size=(fl.in_features,))
regular_forward_output = fl.forward(vector_input)
print('Regular Forward Propagation Output:\n{}'.format(regular_forward_output))
# + [markdown] tags=[]
# In order to do **tensorized forward propagation** and **factorized forward propagation**, we need to tensorize $\mathbf{x}$ into $\mathcal{X}$ and $\mathbf{b}$ into $\mathcal{B}$ using the same **bijective mapping functions** that tensorize $\mathbf{W}$ into $\mathcal{W}$
# -
# `out_index_to_tensorized_out_index` $\mathbf{f}_i: \mathbb{Z}_+ \rightarrow \mathbb{Z}_+^{d_M}$ is a function that transforms `out_index` $p \in \{1,2,...,M\}$ into `tensorized_out_index` $\mathbf{f}_i(p)=[i_1(p),i_2(p),\ldots,i_{d_M}(p)]$
# +
out_indices = torch.arange(fl.out_features)
tensorized_out_indices = tl.base.vec_to_tensor(vec=out_indices, shape=fl.out_tensorized_features)
def out_index_to_tensorized_out_index(out_index):
tensorized_out_index = (tensorized_out_indices == out_index).nonzero().squeeze().tolist()
tensorized_out_index = tuple(tensorized_out_index)
return tensorized_out_index
out_index = torch.randint(low=0, high=fl.out_features, size=(1,)).item()
tensorized_out_index = out_index_to_tensorized_out_index(out_index=out_index)
print("Tensorized Out Index for {}: {}".format(out_index, tensorized_out_index))
out_index_check = tensorized_out_indices[tensorized_out_index]
print("Out Index for {}: {}".format(tensorized_out_index, out_index_check))
# -
# `in_index_to_tensorized_in_index` $\mathbf{f}_j: \mathbb{Z}_+ \rightarrow \mathbb{Z}_+^{d_N}$ is a function that transforms `in_index` $q \in \{1,2,...,N\}$ into `tensorized_in_index` $\mathbf{f}_j(q)=[j_1(q),j_2(q),\ldots,j_{d_N}(q)]$
# +
in_indices = torch.arange(fl.in_features)
tensorized_in_indices = tl.base.vec_to_tensor(vec=in_indices, shape=fl.in_tensorized_features)
def in_index_to_tensorized_in_index(in_index):
tensorized_in_index = (tensorized_in_indices == in_index).nonzero().squeeze().tolist()
tensorized_in_index = tuple(tensorized_in_index)
return tensorized_in_index
in_index = torch.randint(low=0, high=fl.in_features, size=(1,)).item()
tensorized_in_index = in_index_to_tensorized_in_index(in_index=in_index)
print("Tensorized In Index for {}: {}".format(in_index, tensorized_in_index))
in_index_check = tensorized_in_indices[tensorized_in_index]
print("In Index for {}: {}".format(tensorized_in_index, in_index_check))
# -
# A reality check that $\mathbf{W}$ and $\mathcal{W}$ have equal elements
matrix_weight = fl.weight.to_matrix()
tensor_weight = fl.weight.to_tensor()
print('Matrix form element sum: {}'.format(matrix_weight.sum().item()))
print('Tensor form element sum: {}'.format(tensor_weight.sum().item()))
# Another reality check that $\mathbf{W}(p,q)$ equals $\mathcal{W}(\mathbf{f}_i(p),\mathbf{f}_j(q))$
matrix_index = (out_index, in_index)
print('Matrix form element at (out_index, in_index): {}'.format(matrix_weight[matrix_index].item()))
tensorized_index = tensorized_out_index + tensorized_in_index
print('Tensor form element at (tensorized_out_index, tensorized_in_index): {}'.format(tensor_weight[tensorized_index].item()))
# Final reality check that $\mathbf{x}(q)$ equals $\mathcal{X}(\mathbf{f}_j(q))$ and $\mathbf{b}(p)$ equals $\mathcal{B}(\mathbf{f}_i(p))$
tensorized_input = tl.base.vec_to_tensor(vec=vector_input, shape=fl.in_tensorized_features)
vector_bias = fl.bias
tensorized_bias = tl.base.vec_to_tensor(vec=vector_bias, shape=fl.out_tensorized_features)
print('Vector form input element at (in_index): {}'.format(vector_input[in_index]))
print('Tensor form input element at (tensorized_in_index): {}'.format(tensorized_input[tensorized_in_index]))
print('Vector form bias element at (out_index): {}'.format(vector_bias[out_index]))
print('Tensor form bias element at (tensorized_out_index): {}'.format(tensorized_bias[tensorized_out_index]))
# **Tensorized forward propagation** is
# $\mathcal{Y}(\mathbf{f}_i(p)) = \sum_{q=1}^N \mathcal{W}(\mathbf{f}_i(p),\mathbf{f}_j(q)) \mathcal{X}(\mathbf{f}_j(q)) + \mathcal{B}(\mathbf{f}_i(p))$
dims = torch.arange(fl.weight.order)
in_dims = tuple(dims[len(fl.out_tensorized_features):].tolist())
tensorized_forward_output = (tensor_weight * tensorized_input).sum(dim=in_dims) + tensorized_bias
print('Tensorized Forward Propagation Output:\n{}'.format(tensorized_forward_output))
tensorized_regular_forward_output = tl.base.vec_to_tensor(vec=regular_forward_output, shape=fl.out_tensorized_features)
print('Tensorized Regular Forward Output:\n{}'.format(tensorized_regular_forward_output))
# We can reduce the use of mapping functions with the following equation
#
# $\mathbf{y}(p) = \sum_{q=1}^N \mathcal{W}(\mathbf{f}_i(p),\mathbf{f}_j(q)) \mathbf{x}(q) + \mathbf{b}(p)$
# This can be re-written in **factorized forward propagation** as
# $\mathbf{y}(p) = \sum_{q=1}^N \left( \sum_{r=1}^R \left( \prod_{k=1}^{d_M} \mathbf{gm}_{k,r}(i_k(p)) \prod_{k=1}^{d_N} \mathbf{gn}_{k,r}(j_k(q)) \right) \right) \mathbf{x}(q) + \mathbf{b}(p)$
factorized_forward_output = torch.zeros(size=(fl.out_features,))
for out_index in range(fl.out_features):
for in_index in range(fl.in_features):
out = 1
tensorized_out_index = out_index_to_tensorized_out_index(out_index=out_index)
for (factor, index) in zip(out_factors, tensorized_out_index):
out *= factor[index]
tensorized_in_index = in_index_to_tensorized_in_index(in_index=in_index)
for (factor, index) in zip(in_factors, tensorized_in_index):
out *= factor[index]
out = out.sum()
out *= vector_input[in_index]
factorized_forward_output[out_index] += out
factorized_forward_output[out_index] += vector_bias[out_index]
print('Regular Forward Output:\n{}'.format(regular_forward_output))
print('Factorized Forward Output:\n{}'.format(factorized_forward_output))
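# As a final sanity check (a small sketch), we can confirm numerically that the regular, tensorized, and factorized outputs all agree:
# flatten the tensorized output back into a vector and compare all three results
tensorized_as_vector = tl.base.tensor_to_vec(tensorized_forward_output)
print('regular vs factorized match:', torch.allclose(regular_forward_output, factorized_forward_output, atol=1e-5))
print('regular vs tensorized match:', torch.allclose(regular_forward_output, tensorized_as_vector, atol=1e-5))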
| notebooks/cp_factorized_forward.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# +
from __future__ import print_function
import numpy as np
import pandas as pd
import cv2 as cv
import os
import h5py
import matplotlib.pyplot as plt
import scipy.misc
import scipy.ndimage
from tqdm import tqdm
from copy import deepcopy
from sklearn.preprocessing import StandardScaler
from keras.models import Sequential, Model
from keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D, ZeroPadding2D, Convolution2D, Deconvolution2D, merge
from keras.layers.core import Activation, Dropout, Flatten, Lambda
from keras.layers.normalization import BatchNormalization
from keras.optimizers import SGD, Adam, Nadam
from keras.utils import np_utils
from keras.callbacks import TensorBoard
from keras import objectives, layers
from keras.applications import vgg16
from keras.applications.vgg16 import preprocess_input
from keras import backend as K
import cv2
from PIL import Image
from scipy.misc import imresize
# +
np.random.seed(1337) # for reproducibility
# -
base_model = vgg16.VGG16(weights='imagenet', include_top=False)
vgg = Model(input=base_model.input, output=base_model.get_layer('block2_conv2').output)
# +
# def load_file_names(path):
# return os.listdir(path)
# -
def imshow(x, gray=False):
plt.imshow(x, cmap='gray' if gray else None)
plt.show()
def get_features(Y):
Z = deepcopy(Y)
Z = preprocess_vgg(Z)
features = vgg.predict(Z, batch_size = 5, verbose = 0)
return features
def preprocess_vgg(x, data_format=None):
if data_format is None:
data_format = K.image_data_format()
assert data_format in {'channels_last', 'channels_first'}
x = 255. * x
if data_format == 'channels_first':
# 'RGB'->'BGR'
x = x[:, ::-1, :, :]
# Zero-center by mean pixel
x[:, 0, :, :] = x[:, 0, :, :] - 103.939
x[:, 1, :, :] = x[:, 1, :, :] - 116.779
x[:, 2, :, :] = x[:, 2, :, :] - 123.68
else:
# 'RGB'->'BGR'
x = x[:, :, :, ::-1]
# Zero-center by mean pixel
x[:, :, :, 0] = x[:, :, :, 0] - 103.939
x[:, :, :, 1] = x[:, :, :, 1] - 116.779
x[:, :, :, 2] = x[:, :, :, 2] - 123.68
return x
# +
def feature_loss(y_true, y_pred):
return K.sqrt(K.mean(K.square(y_true - y_pred)))
def pixel_loss(y_true, y_pred):
return K.sqrt(K.mean(K.square(y_true - y_pred))) + 0.00001*total_variation_loss(y_pred)
def adv_loss(y_true, y_pred):
return K.mean(K.binary_crossentropy(y_pred, y_true), axis=-1)
def total_variation_loss(y_pred):
if K.image_data_format() == 'channels_first':
a = K.square(y_pred[:, :, :m - 1, :n - 1] - y_pred[:, :, 1:, :n - 1])
b = K.square(y_pred[:, :, :m - 1, :n - 1] - y_pred[:, :, :m - 1, 1:])
else:
a = K.square(y_pred[:, :m - 1, :n - 1, :] - y_pred[:, 1:, :n - 1, :])
b = K.square(y_pred[:, :m - 1, :n - 1, :] - y_pred[:, :m - 1, 1:, :])
return K.sum(K.pow(a + b, 1.25))
# -
def preprocess_VGG(x, dim_ordering='default'):
if dim_ordering == 'default':
dim_ordering = K.image_dim_ordering()
assert dim_ordering in {'tf', 'th'}
# x has pixels intensities between 0 and 1
x = 255. * x
norm_vec = K.variable([103.939, 116.779, 123.68])
if dim_ordering == 'th':
norm_vec = K.reshape(norm_vec, (1,3,1,1))
x = x - norm_vec
# 'RGB'->'BGR'
x = x[:, ::-1, :, :]
else:
norm_vec = K.reshape(norm_vec, (1,1,1,3))
x = x - norm_vec
# 'RGB'->'BGR'
x = x[:, :, :, ::-1]
return x
def generator_model(input_img):
# Encoder
x = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)
x = Conv2D(32, (2, 2), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block2_conv1')(x)
x = Conv2D(64, (2, 2), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block3_conv1')(x)
x = Conv2D(128, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block4_conv1')(x)
res = Conv2D(256, (3, 3), activation='relu', padding='same')(x)
x = layers.add([x, res])
res = Conv2D(256, (3, 3), activation='relu', padding='same')(x)
encoded = layers.add([x, res])
# Decoder
res = Conv2D(256, (3, 3), activation='relu', padding='same', name='block5_conv1')(encoded)
x = layers.add([encoded, res])
res = Conv2D(256, (3, 3), activation='relu', padding='same')(x)
x = layers.add([x, res])
res = Conv2D(256, (3, 3), activation='relu', padding='same')(x)
x = layers.add([x, res])
x = Conv2D(128, (2, 2), activation='relu', padding='same', name='block6_conv1')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block7_conv1')(x)
res = Conv2D(128, (3, 3), activation='relu', padding='same')(x)
x = layers.add([x, res])
res = Conv2D(128, (3, 3), activation='relu', padding='same')(x)
x = layers.add([x, res])
x = Conv2D(64, (2, 2), activation='relu', padding='same', name='block8_conv1')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block9_conv1')(x)
res = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
x = layers.add([x, res])
res = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
x = layers.add([x, res])
x = Conv2D(32, (2, 2), activation='relu', padding='same', name='block10_conv1')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same', name='block11_conv1')(x)
res = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = layers.add([x, res])
decoded = Conv2D(3, (3, 3), activation='sigmoid', padding='same')(x)
return decoded
def feat_model(img_input):
# extract vgg feature
vgg_16 = vgg16.VGG16(include_top=False, weights='imagenet', input_tensor=None)
# freeze VGG_16 when training
for layer in vgg_16.layers:
layer.trainable = False
vgg_first2 = Model(input=vgg_16.input, output=vgg_16.get_layer('block2_conv2').output)
Norm_layer = Lambda(preprocess_VGG)
x_VGG = Norm_layer(img_input)
feat = vgg_first2(x_VGG)
return feat
def full_model():
input_img = Input(shape=(m, n, 1))
generator = generator_model(input_img)
feat = feat_model(generator)
model = Model(input=input_img, output=[generator, feat], name='architect')
return model
def compute_vgg():
base_model = vgg16.VGG16(weights='imagenet', include_top=False)
model = Model(input=base_model.input, output=base_model.get_layer('block2_conv2').output)
    # NOTE: num_images, batch_size and get_batch are assumed to be defined elsewhere in the project
    num_batches = num_images // batch_size
    for batch in range(num_batches):
        _, Y = get_batch(batch, X=False)
Y = preprocess_vgg(Y)
features = model.predict(Y, verbose = 1)
f = h5py.File('features/feat_%d' % batch, "w")
dset = f.create_dataset("features", data=features)
m = 200
n = 200
sketch_dim = (m,n)
img_dim = (m, n,3)
model = full_model()
optim = Adam(lr=1e-4,beta_1=0.9, beta_2=0.999, epsilon=1e-8)
model.compile(loss=[pixel_loss, feature_loss], loss_weights=[1, 1], optimizer=optim)
model.load_weights('../newWeights/weights_77')
def predictAndPlot(input_path, label_path):
m = 200
n = 200
sketch_dim = (m,n)
img_dim = (m, n,3)
sketch = cv.imread(input_path, 0)
sketch = imresize(sketch, sketch_dim)
sketch = sketch / 255.
sketch = sketch.reshape(1,m,n,1)
actual = cv.imread(label_path)
actual = imresize(actual, img_dim)
result, _ = model.predict(sketch)
#### Plotting ####
fig = plt.figure()
a = fig.add_subplot(1,3,1)
imgplot = plt.imshow(sketch[0].reshape(m,n), cmap='gray')
a.set_title('Sketch')
plt.axis("off")
a = fig.add_subplot(1,3,2)
imgplot = plt.imshow(result[0])
a.set_title('Prediction')
plt.axis("off")
a = fig.add_subplot(1,3,3)
plt.imshow(cv2.cvtColor(actual, cv2.COLOR_BGR2RGB))
a.set_title('label')
plt.axis("off")
plt.show()
# +
#predictAndPlot('rsketch/f1-001-01-sz1.jpg','rphoto/f1-001-01.jpg')
# +
def predictAndPlot2(input_path, label_path, num_images, trunc = 4):
count = 0;
m = 200
n = 200
sketch_dim = (m,n)
img_dim = (m, n,3)
for file in os.listdir(input_path):
print(file)
sketch = cv.imread(str(input_path + '/' + file), 0)
print(sketch.shape)
sketch = imresize(sketch, sketch_dim)
sketch = sketch / 255.
sketch = sketch.reshape(1,m,n,1)
actual = cv.imread(str(label_path + '/' + file[:-trunc] + '.jpg'))
print(str(label_path + '/' + file[:-trunc]))
actual = imresize(actual, img_dim)
result, _ = model.predict(sketch)
fig = plt.figure()
a = fig.add_subplot(1,3,1)
imgplot = plt.imshow(sketch[0].reshape(m,n), cmap='gray')
a.set_title('Sketch')
plt.axis("off")
a = fig.add_subplot(1,3,2)
imgplot = plt.imshow(result[0])
# write_path1 = str('../images/prediction/' + file )
# plt.imsave(write_path1, result[0])
# write_path = str('../images/qp/' + file )
a.set_title('Prediction')
plt.axis("off")
a = fig.add_subplot(1,3,3)
act2 = cv2.cvtColor(actual, cv2.COLOR_BGR2RGB)
# plt.imsave(write_path, act2)
plt.imshow(cv2.cvtColor(actual, cv2.COLOR_BGR2RGB))
a.set_title('label')
plt.axis("off")
plt.show()
count += 1
if(count == num_images):
break
# -
predictAndPlot2('../sdata', '../qdata',12)
predictAndPlot2('../sdata3', '../pdata3',4)
| Sketch/sketchMe.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + input_collapsed=false
import numpy as np
from IPython.display import display
from bqplot import *
# -
size = 100
x_data = range(size)
np.random.seed(0)
y_data = np.cumsum(np.random.randn(size) * 100.0)
y_data_2 = np.cumsum(np.random.randn(size))
y_data_3 = np.cumsum(np.random.randn(size) * 100.)
# ## Multiple marks in a single figure
# +
sc_ord = OrdinalScale()
sc_y = LinearScale()
sc_y_2 = LinearScale()
ord_ax = Axis(label='Test X', scale=sc_ord, tick_format='0.0f', grid_lines='none')
y_ax = Axis(label='Test Y', scale=sc_y,
orientation='vertical', tick_format='0.2f',
grid_lines='solid')
y_ax_2 = Axis(label='Test Y 2', scale=sc_y_2,
orientation='vertical', side='right',
tick_format='0.0f', grid_lines='solid')
# +
line_chart = Lines(x=x_data[:10], y = [y_data[:10], y_data_2[:10] * 100, y_data_3[:10]],
scales={'x': sc_ord, 'y': sc_y},
labels=['Line1', 'Line2', 'Line3'],
display_legend=True)
bar_chart = Bars(x=x_data[:10],
y=[y_data[:10], y_data_2[:10] * 100, y_data_3[:10]],
scales={'x': sc_ord, 'y': sc_y_2},
labels=['Bar1', 'Bar2', 'Bar3'],
display_legend=True)
fig = Figure(axes=[ord_ax, y_ax], marks=[bar_chart, line_chart], legend_location = 'bottom-left')
# the line does not have a Y value set. So only the bars will be displayed
display(fig)
# -
# ## Sample Histogram with mid-points set
# +
x_scale = LinearScale()
y_scale = LinearScale()
hist = Hist(sample=y_data,
colors=['orange'],
scales={'sample': x_scale, 'count': y_scale},
labels=['Test Histogram'],
display_legend=True)
x_ax = Axis(label='Test X', scale=x_scale,
tick_format='0.2f', grid_lines='none')
y_ax_2 = Axis(label='Test Y', scale=y_scale, orientation='vertical', tick_format='0.2f', grid_lines='none')
fig = Figure(axes=[x_ax, y_ax_2], marks=[hist])
# -
display(fig)
# Setting the tick values to be the mid points of the bins
x_ax.tick_values = hist.midpoints
# ## Line Chart Log Scale
dates_all = np.arange('2005-02', '2005-03', dtype='datetime64[D]')
size = len(dates_all)
final_prices = 100 + 5 * np.cumsum(np.random.randn(size))
# +
## Log scale for the Y axis
dt_scale = DateScale()
lsc = LogScale()
ax_x = Axis(label='Date', scale=dt_scale, grid_lines='dashed')
lax_y = Axis(label='Log Price', scale=lsc, orientation='vertical', tick_format='0.1f', grid_lines='solid')
logline = Lines(x=dates_all, y=final_prices, scales={'x': dt_scale, 'y': lsc}, colors=['hotpink', 'orange'])
logfig = Figure(axes=[ax_x, lax_y], marks=[logline], fig_margin = dict(left=100, right=100, top=100, bottom=70))
display(logfig)
# -
# ## Multiple Marks with Ordinal Scale
# +
# Plotting a bar and line chart on the same figure with the x axis being an ordinal scale.
ord_scale = OrdinalScale()
lin_scale = LinearScale()
bar_chart = Bars(x=x_data[:10],
y=np.abs(y_data_3[:10]),
scales={'x': ord_scale, 'y': lin_scale},
colors=['hotpink', 'orange', 'limegreen'],
padding=0.5)
line_chart = Lines(x=x_data[:10],
y=np.abs(y_data[:10]),
scales={'x': ord_scale, 'y': lin_scale},
colors=['hotpink', 'orange', 'limegreen'])
bar_x = Axis(scale=ord_scale, orientation='horizontal', grid_lines='none',
set_ticks=True)
bar_y = Axis(scale=lin_scale, orientation='vertical', grid_lines='none')
fig_2 = Figure(axes=[bar_x, bar_y], marks=[bar_chart, line_chart])
display(fig_2)
# -
# ## Setting min and max along an axis for plots
# +
sc_x = LinearScale(min=10, max=50)
sc_y = LinearScale()
x_data = np.arange(100)
line_1 = Lines(x=x_data, y=y_data_2, scales={'x': sc_x, 'y': sc_y}, colors=['orange'])
ax_x = Axis(scale=sc_x)
ax_y = Axis(scale=sc_y, orientation='vertical', tick_format='.2f')
fig = Figure(marks=[line_1], axes=[ax_x, ax_y])
display(fig)
# -
## changing the min/max
sc_x.min=-10
sc_x.max=110
# ## Marks which do not affect the domain along an axis
# +
sc_x = LinearScale()
sc_y = LinearScale()
x_data = np.arange(50)
line_1 = Lines(x=x_data, y=y_data, scales={'x': sc_x, 'y': sc_y}, colors=['blue'])
line_2 = Lines(x=x_data, y=y_data * 2, scales={'x': sc_x, 'y': sc_y}, colors=['orangered'], preserve_domain={'y': True})
ax_x = Axis(scale=sc_x)
ax_y = Axis(scale=sc_y, orientation='vertical')
fig = Figure(marks=[line_1, line_2], axes=[ax_x, ax_y])
display(fig)
# -
with line_2.hold_sync():
line_2.preserve_domain={}
# ## Preserve domain for color scale
# +
sc_x = LinearScale()
sc_y = LinearScale()
sc_col = ColorScale(colors=['red', 'white', 'green'], mid=0.0)
x_data = np.arange(50)
scatt_1 = Scatter(x=x_data, y=y_data[:50], color=y_data[:50], scales={'x': sc_x, 'y': sc_y, 'color': sc_col},marker='circle')
scatt_2 = Scatter(x=x_data, y=y_data[:50] * 2, color=y_data[:50] * 2, scales={'x': sc_x, 'y': sc_y, 'color': sc_col},
preserve_domain={'color': True}, marker='cross')
ax_x = Axis(scale=sc_x)
ax_y = Axis(scale=sc_y, orientation='vertical')
fig = Figure(marks=[scatt_1, scatt_2], axes=[ax_x, ax_y])
display(fig)
# -
# ## Reversing a scale
# +
sc_x = LinearScale(reverse=True)
sc_y = LinearScale()
x_data = np.arange(50)
line_1 = Lines(x=x_data, y=y_data, scales={'x': sc_x, 'y': sc_y}, colors=['orange'])
ax_x = Axis(scale=sc_x)
ax_y = Axis(scale=sc_y, orientation='vertical')
fig = Figure(marks=[line_1], axes=[ax_x, ax_y])
display(fig)
# -
# ## Fixing the domain of an ordinal scale
# +
ord_scale = OrdinalScale(domain=list(range(20)))
y_scale = LinearScale()
bar_chart = Bars(x=x_data[:10],
y=[(y_data[:10]), y_data_2[:10] * 100, y_data_3[:10]],
scales={'x': ord_scale, 'y': y_scale},
colors=['hotpink', 'orange', 'limegreen'],
labels=['Component 1', 'Component 2', 'Component 3'],
display_legend=True)
bar_x = Axis(scale=ord_scale, orientation='horizontal', set_ticks=True, grid_lines='none')
bar_y = Axis(scale=y_scale, orientation='vertical', grid_lines='none')
fig_2 = Figure(axes=[bar_x, bar_y], marks=[bar_chart])
display(fig_2)
# -
# ## Applying clip to marks
# +
sc_x = LinearScale(min=10, max=90)
sc_y = LinearScale()
x_data = np.arange(100)
line_1 = Lines(x=x_data, y=y_data_3, scales={'x': sc_x, 'y': sc_y}, colors=['orange'], labels=['Clipped Line'],
display_legend=True)
line_2 = Lines(x=x_data, y=y_data, scales={'x': sc_x, 'y': sc_y}, apply_clip=False, colors=['orangered'],
labels=['Non clipped line'], display_legend=True)
ax_x = Axis(scale=sc_x)
ax_y = Axis(scale=sc_y, orientation='vertical')
fig = Figure(marks=[line_1, line_2], axes=[ax_x, ax_y])
display(fig)
| examples/Advanced Plotting.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ismaelgarcia64/daa_2021_1/blob/master/28septiembre.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="J7eufMeECe2c"
# + [markdown] id="Ved1BTBHCyPx"
# Section 1
# + [markdown] id="i9InOJP3C6rH"
# In this file we will learn to program in Python with the Google Colab tool, and we will also learn how to save changes to our GitHub repository
# + [markdown] id="JqsYaMR0DxEx"
# ## Example code
# **bold**
# _italic_
# `edad = 10
# print (edad)
# `
# + id="o9i84HQ2GfW4" outputId="56d3089a-b178-4fa6-cf0e-c604759b3d05" colab={"base_uri": "https://localhost:8080/", "height": 35}
frutas = []
frutas.append('Manzana')
frutas.append('Piña')
frutas.append('Kiwi')
print(frutas)
# + id="zAgzHwqwHrrm"
archivo= open('prueba_daa.txt','wt')
archivo.write("Hola mundo Jupyter")
archivo.close()
| 28septiembre.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Homework 01. Simple text processing.
# %load_ext autotime
# %load_ext autoreload
# %autoreload 2
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
import pandas as pd
from IPython import display
# ### Toxic or not
# Your main goal in this assignment is to classify whether the comments are toxic or not, and to practice with both classical approaches and PyTorch in the process.
#
# *Credits: This homework is inspired by YSDA NLP_course.*
# +
# In colab uncomment this cell
# # ! wget https://raw.githubusercontent.com/neychev/made_nlp_course/master/homeworks/homework01/utils.py -nc
# -
try:
data = pd.read_csv('../../datasets/comments_small_dataset/comments.tsv', sep='\t')
except FileNotFoundError:
# ! wget https://raw.githubusercontent.com/neychev/made_nlp_course/master/datasets/comments_small_dataset/comments.tsv -nc
data = pd.read_csv("comments.tsv", sep='\t')
data.shape
texts = data['comment_text'].values
target = data['should_ban'].values
data[50::200]
from sklearn.model_selection import train_test_split
texts_train, texts_test, y_train, y_test = train_test_split(texts, target, test_size=0.5, random_state=42)
# __Note:__ it is generally a good idea to split data into train/test before anything is done to them.
#
# It guards you against possible data leakage in the preprocessing stage. For example, should you decide to select words present in obscene tweets as features, you should only count those words over the training set. Otherwise your algorithm can cheat during evaluation.
# ### Preprocessing and tokenization
#
# Comments contain raw text with punctuation, upper/lowercase letters and even newline symbols.
#
# To simplify all further steps, we'll split text into space-separated tokens using one of nltk tokenizers.
#
# Generally, the `nltk` library [link](https://www.nltk.org) is widely used in NLP. It is not strictly necessary here, but is mentioned to introduce it to you.
# +
from nltk.tokenize import TweetTokenizer
tokenizer = TweetTokenizer()
preprocess = lambda text: ' '.join(tokenizer.tokenize(text.lower()))
text = 'How to be a grown-up at work: replace "I don\'t want to do that" with "Ok, great!".'
print("before:", text,)
print("after:", preprocess(text),)
# +
# task: preprocess each comment in train and test
texts_train = list(map(preprocess, texts_train))
texts_test = list(map(preprocess, texts_test))
# -
# Small check that everything is done properly
assert texts_train[5] == 'who cares anymore . they attack with impunity .'
assert texts_test[89] == 'hey todds ! quick q ? why are you so gay'
assert len(texts_test) == len(y_test)
# ### Step 1: bag of words
#
# One traditional approach to such problem is to use bag of words features:
# 1. build a vocabulary of frequent words (use train data only)
# 2. for each training sample, count the number of times a word occurs in it (for each word in vocabulary).
# 3. consider this count a feature for some classifier
#
# __Note:__ in practice, you can compute such features using sklearn. __Please don't do that in the current assignment, though.__
# * `from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer`
from collections import Counter, defaultdict
k=15
list(map(lambda pair: pair[0], Counter(' '.join(texts_train).split()).most_common()[:k]))
# +
# task: find up to k most frequent tokens in texts_train,
# sort them by number of occurences (highest first)
k = min(10000, len(set(' '.join(texts_train).split())))
token_counter = Counter(' '.join(texts_train).split())
bow_vocabulary = list(map(lambda pair: pair[0], token_counter.most_common(k)))
print('example features:', sorted(bow_vocabulary)[::100])
# -
# +
idx2token = dict(enumerate(bow_vocabulary))
token2idx = dict(map(reversed, idx2token.items()))
def text_to_bow(text):
""" convert text string to an array of token counts. Use bow_vocabulary. """
bow_vector = np.zeros(len(idx2token))
for idx in map(token2idx.get, text.split()):
if idx is None:
continue
bow_vector[idx] += 1
return np.array(bow_vector, 'float32')
# -
X_train_bow = np.stack(list(map(text_to_bow, texts_train)))
X_test_bow = np.stack(list(map(text_to_bow, texts_test)))
# Small check that everything is done properly
k_max = len(set(' '.join(texts_train).split()))
assert X_train_bow.shape == (len(texts_train), min(k, k_max))
assert X_test_bow.shape == (len(texts_test), min(k, k_max))
assert np.all(X_train_bow[5:10].sum(-1) == np.array([len(s.split()) for s in texts_train[5:10]]))
assert len(bow_vocabulary) <= min(k, k_max)
assert X_train_bow[6, bow_vocabulary.index('.')] == texts_train[6].split().count('.')
# Now let's do the trick with `sklearn` logistic regression implementation:
from sklearn.linear_model import LogisticRegression
bow_model = LogisticRegression().fit(X_train_bow, y_train)
# +
from sklearn.metrics import roc_auc_score, roc_curve
for name, X, y, model in [
('train', X_train_bow, y_train, bow_model),
('test ', X_test_bow, y_test, bow_model)
]:
proba = model.predict_proba(X)[:, 1]
auc = roc_auc_score(y, proba)
plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))
plt.plot([0, 1], [0, 1], '--', color='black',)
plt.legend(fontsize='large')
plt.grid()
# -
# Seems alright. Now let's create the simple logistic regression using PyTorch. Just like in the classwork.
# +
import torch
from torch import nn
from torch.nn import functional as F
from torch.optim.lr_scheduler import StepLR, ReduceLROnPlateau
from sklearn.metrics import accuracy_score
# -
from utils import plot_train_process
# +
model = nn.Sequential()
model.add_module('l1', nn.Linear(len(bow_vocabulary), 2))
# -
# Remember what we discussed about loss functions! `nn.CrossEntropyLoss` combines both log-softmax and `NLLLoss`.
#
# __Be careful with it! The criterion `nn.CrossEntropyLoss` will still run if you feed it log-softmax output, but it won't allow you to converge to the optimum.__ A small demonstration follows:
# loss_function = nn.NLLLoss()
loss_function = nn.CrossEntropyLoss()
opt = torch.optim.AdamW(model.parameters(), lr=3e-3, weight_decay=0.05)
lr_scheduler = ReduceLROnPlateau(opt, mode='min', factor=0.5, patience=4, min_lr=3e-5)
# +
X_train_bow_torch = torch.tensor(X_train_bow, dtype=torch.float32)
X_test_bow_torch = torch.tensor(X_test_bow, dtype=torch.float32)
y_train_torch = torch.tensor(y_train, dtype=torch.long)
y_test_torch = torch.tensor(y_test, dtype=torch.long)
# -
# Let's test that everything is fine
# example loss
loss = loss_function(model(X_train_bow_torch[:3]), y_train_torch[:3])
assert type(loss.item()) == float
# Here comes a small function to train the model. In the future we will move it into a separate file, but for this homework it's fine to implement it here.
def train_model(
model,
opt,
lr_scheduler,
X_train_torch,
y_train_torch,
X_val_torch,
y_val_torch,
n_iterations=500,
batch_size=32,
warm_start=False,
show_plots=True,
eval_every=10
):
if not warm_start:
for name, module in model.named_children():
print('resetting ', name)
try:
module.reset_parameters()
except AttributeError as e:
print('Cannot reset {} module parameters: {}'.format(name, e))
train_loss_history = []
train_acc_history = []
val_loss_history = []
val_acc_history = []
local_train_loss_history = []
local_train_acc_history = []
for i in range(n_iterations):
        # sample batch_size random observations
ix = np.random.randint(0, len(X_train_torch), batch_size)
x_batch = X_train_torch[ix]
y_batch = y_train_torch[ix]
# predict log-probabilities or logits
        y_predicted = model(x_batch)
# compute loss, just like before
        loss = loss_function(y_predicted, y_batch)
# compute gradients
loss.backward()
# Adam step
opt.step()
# clear gradients
opt.zero_grad()
local_train_loss_history.append(loss.data.numpy())
local_train_acc_history.append(
accuracy_score(
y_batch.to('cpu').detach().numpy(),
y_predicted.to('cpu').detach().numpy().argmax(axis=1)
)
)
if i % eval_every == 0:
train_loss_history.append(np.mean(local_train_loss_history))
train_acc_history.append(np.mean(local_train_acc_history))
local_train_loss_history, local_train_acc_history = [], []
predictions_val = model(X_val_torch)
val_loss_history.append(loss_function(predictions_val, y_val_torch).to('cpu').detach().item())
acc_score_val = accuracy_score(y_val_torch.cpu().numpy(), predictions_val.to('cpu').detach().numpy().argmax(axis=1))
val_acc_history.append(acc_score_val)
lr_scheduler.step(train_loss_history[-1])
if show_plots:
display.clear_output(wait=True)
plot_train_process(train_loss_history, val_loss_history, train_acc_history, val_acc_history)
return model
# Let's run it on the data. Note that here we use the `test` part of the data for validation. That's not a good idea in general, but in this task our main goal is practice.
train_model(model, opt, lr_scheduler, X_train_bow_torch, y_train_torch, X_test_bow_torch, y_test_torch)
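# For a cleaner protocol one could hold out part of the training data as a validation set and keep the test set untouched until the final evaluation. A hedged sketch (not used in this homework):
# +
from sklearn.model_selection import train_test_split

# split indices so the torch tensors can be sliced directly
idx_tr, idx_val = train_test_split(
    np.arange(len(X_train_bow_torch)), test_size=0.2,
    random_state=42, stratify=y_train_torch.numpy(),
)
# train_model(model, opt, lr_scheduler,
#             X_train_bow_torch[idx_tr], y_train_torch[idx_tr],
#             X_train_bow_torch[idx_val], y_train_torch[idx_val])
# -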
# +
from sklearn.metrics import roc_auc_score, roc_curve
for name, X, y, model in [
('train', X_train_bow_torch, y_train, model),
('test ', X_test_bow_torch, y_test, model)
]:
proba = model(X).detach().cpu().numpy()[:, 1]
auc = roc_auc_score(y, proba)
plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))
plt.plot([0, 1], [0, 1], '--', color='black',)
plt.legend(fontsize='large')
plt.grid()
# -
# Try to vary the number of tokens `k` and check how the model performance changes. Show it on a plot.
# +
import plotly.express as px
import plotly.graph_objects as go
from matplotlib import pyplot as plt
# %matplotlib inline
# +
k_values = [50, 100, 200, 300, 500, 700, 1000, 2000, 3000, 5000, 7000, 10000]
mode2aucs = defaultdict(list)
for k in k_values:
# build vocab
token_counter = Counter(' '.join(texts_train).split())
bow_vocabulary = list(map(lambda pair: pair[0], token_counter.most_common(k)))
idx2token = dict(enumerate(bow_vocabulary))
token2idx = dict(map(reversed, idx2token.items()))
# feature extraction
X_train_bow_torch = torch.tensor(np.stack(list(map(text_to_bow, texts_train))), dtype=torch.float32)
X_test_bow_torch = torch.tensor(np.stack(list(map(text_to_bow, texts_test))), dtype=torch.float32)
# build model
model = nn.Sequential()
model.add_module('l1', nn.Linear(len(bow_vocabulary), 2))
opt = torch.optim.AdamW(model.parameters(), lr=3e-3, weight_decay=0.05)
lr_scheduler = ReduceLROnPlateau(opt, mode='min', factor=0.5, patience=4, min_lr=3e-5)
train_model(model, opt, lr_scheduler, X_train_bow_torch, y_train_torch, X_test_bow_torch, y_test_torch)
for name, X, y in [
('train', X_train_bow_torch, y_train),
('test ', X_test_bow_torch, y_test)
]:
proba = model(X).detach().cpu().numpy()[:, 1]
mode2aucs[name].append(roc_auc_score(y, proba))
# +
fig = go.Figure()
for mode in ['train', 'test ']:
fig.add_trace(go.Scatter(
x=k_values, y=mode2aucs[mode],
mode='lines+markers', name=mode,
))
fig.update_layout(xaxis_title='k', yaxis_title='auc')
fig.show()
# +
plt.figure(figsize=(10, 6))
for mode in ['train', 'test ']:
plt.plot(k_values, mode2aucs[mode], label=mode)
plt.xlabel('k')
plt.ylabel('auc')
plt.legend()
plt.show()
# -
# A vocabulary of size 1000 is quite enough to reach good quality (test ROC AUC = 0.849). However, the model does overfit.
# ### Step 2: implement TF-IDF features
#
# Not all words are equally useful. One can prioritize rare words and downscale words like "and"/"or" by using __tf-idf features__. This abbreviation stands for __term frequency / inverse document frequency__ and means exactly that:
#
# $$ feature_i = Count(word_i \in x) \times \log \frac{N}{Count(word_i \in D) + \alpha}, $$
#
#
# where x is a single text, D is your dataset (a collection of texts), N is the total number of documents, and $\alpha$ is a smoothing hyperparameter (typically 1).
# And $Count(word_i \in D)$ is the number of documents where $word_i$ appears.
#
# It may also be a good idea to normalize each data sample after computing tf-idf features.
#
# __Your task:__ implement tf-idf features, train a model and evaluate ROC curve. Compare it with basic BagOfWords model from above.
#
# __Please don't use sklearn/nltk builtin tf-idf vectorizers in your solution :)__ You can still use 'em for debugging though.
# Blog post about implementing the TF-IDF features from scratch: https://triton.ml/blog/tf-idf-from-scratch
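# For debugging only, a hedged sanity check against sklearn is sketched below. Note that sklearn's tf-idf differs slightly from the formula above (smoothed idf and l2 row normalization by default), so compare shapes and feature rankings rather than exact values.
# +
from sklearn.feature_extraction.text import TfidfVectorizer

sk_vec = TfidfVectorizer(max_features=2000, smooth_idf=True, norm='l2')
X_sk = sk_vec.fit_transform(texts_train)      # sparse matrix, shape (n_texts, <=2000)
print(X_sk.shape)
print(sk_vec.get_feature_names_out()[:10])    # requires sklearn >= 1.0
# -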
# +
from typing import List, Dict, Iterable
class TfIdf:
def __init__(self, use_idf: bool = True):
self.use_idf = use_idf
self.idx2token: Dict[int, str] = None
self.token2idx: Dict[str, int] = None
self.idf: List[float] = None
def fit(self, texts: List[str], k: int = 2000, alpha: float = 1e-5):
token_counter = Counter(' '.join(texts).split())
vocabulary = list(map(lambda pair: pair[0], token_counter.most_common(k)))
self.idx2token = dict(enumerate(vocabulary))
self.token2idx = dict(map(reversed, self.idx2token.items()))
if self.use_idf:
bow = self._bow(texts)
n_texts_with_token = np.sum(bow > 0, axis=0, dtype=np.float32)
self.idf = np.log(len(texts) / (n_texts_with_token + alpha))
return self
def _bow(self, texts: List[str]) -> Iterable[Iterable[float]]:
bow = np.zeros((len(texts), len(self.idx2token)))
for i, text in enumerate(texts):
for idx in map(self.token2idx.get, text.split()):
if idx is None:
continue
bow[i, idx] += 1
return bow
def transform(self, texts: List[str]):
X = self._bow(texts)
if self.use_idf:
X *= self.idf
return X
# -
# The TF term is usually normalized by the number of words in the text, but here I followed the formula above as given (a sketch of the normalized variant follows).
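# If one wanted that length-normalized variant, it could be sketched as a subclass of the class above (hedged, not used below):
# +
class TfIdfNormalized(TfIdf):
    def transform(self, texts: List[str]):
        X = self._bow(texts)
        lengths = np.maximum(np.array([len(t.split()) for t in texts], dtype=float), 1.0)
        X = X / lengths[:, None]  # term frequency = count / number of tokens in the text
        if self.use_idf:
            X *= self.idf
        return X
# -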
vectorizer = TfIdf()
vectorizer.fit(texts=texts_train, k=2000)
X_train_bow_torch = torch.tensor(vectorizer.transform(texts_train), dtype=torch.float32)
X_test_bow_torch = torch.tensor(vectorizer.transform(texts_test), dtype=torch.float32)
# +
k_values = [50, 100, 200, 300, 500, 700, 1000, 2000, 3000, 5000, 7000, 10000]
mode2aucs = defaultdict(list)
for k in k_values:
vectorizer = TfIdf()
vectorizer.fit(texts=texts_train, k=k)
X_train_bow_torch = torch.tensor(vectorizer.transform(texts_train), dtype=torch.float32)
X_test_bow_torch = torch.tensor(vectorizer.transform(texts_test), dtype=torch.float32)
# build model
model = nn.Sequential()
model.add_module('l1', nn.Linear(len(vectorizer.idf), 2))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.05)
lr_scheduler = ReduceLROnPlateau(opt, mode='min', factor=0.5, patience=4, min_lr=3e-5)
train_model(model, opt, lr_scheduler, X_train_bow_torch, y_train_torch, X_test_bow_torch, y_test_torch)
for name, X, y in [
('train', X_train_bow_torch, y_train),
('test ', X_test_bow_torch, y_test)
]:
proba = model(X).detach().cpu().numpy()[:, 1]
mode2aucs[name].append(roc_auc_score(y, proba))
# +
fig = go.Figure()
for mode in ['train', 'test ']:
fig.add_trace(go.Scatter(
x=k_values, y=mode2aucs[mode],
mode='lines+markers', name=mode,
))
fig.update_layout(xaxis_title='k', yaxis_title='auc')
fig.show()
# +
plt.figure(figsize=(10, 6))
for mode in ['train', 'test ']:
plt.plot(k_values, mode2aucs[mode], label=mode)
plt.xlabel('k')
plt.ylabel('auc')
plt.legend()
plt.show()
# -
# The results are roughly the same. A vocabulary of size 1000 is enough to reach good quality (test ROC AUC = 0.843), though the model still overfits.
#
# Below we fix k=1000 for convenience.
k = 1000
# ### Step 3: Comparing it with Naive Bayes
#
# A Naive Bayes classifier is a good choice for such small problems. Try to tune it for both BOW and TF-IDF features. Compare the results with Logistic Regression.
from sklearn.naive_bayes import GaussianNB
for vec_type in ['bow', 'tfidf']:
vectorizer = TfIdf(use_idf=(vec_type == 'tfidf'))
vectorizer.fit(texts=texts_train, k=k)
X_train = vectorizer.transform(texts_train)
X_test = vectorizer.transform(texts_test)
for model_type, model in [
('LogReg', LogisticRegression()),
('NB', GaussianNB())
]:
model = model.fit(X_train, y_train)
for name, X, y in [
('train', X_train, y_train),
('test ', X_test, y_test)
]:
proba = model.predict_proba(X)[:, 1]
auc = roc_auc_score(y, proba)
plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))
plt.plot([0, 1], [0, 1], '--', color='black',)
plt.title(f"{vec_type}, {model_type}")
plt.legend(fontsize='large')
plt.grid()
plt.show()
# _Your beautiful thoughts here_
# Oddly, Naive Bayes outputs probabilities of only 0 or 1: it is far too confident. Because of that its ROC AUC is much lower, so there is little point in comparing these models head to head. It is clear, though, that adding IDF barely changes the final result.
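# One hedged way to actually tune Naive Bayes (not run here) is a small grid over the smoothing parameter of `MultinomialNB`, which usually suits count-like features better than `GaussianNB`:
# +
from sklearn.model_selection import GridSearchCV
from sklearn.naive_bayes import MultinomialNB

nb_grid = GridSearchCV(
    MultinomialNB(),
    param_grid={'alpha': [0.01, 0.1, 0.5, 1.0, 5.0]},
    scoring='roc_auc', cv=5,
)
# MultinomialNB requires non-negative features: raw BOW counts are fine,
# while the tf-idf values above may need clipping at zero first.
# nb_grid.fit(X_train, y_train)
# print(nb_grid.best_params_, nb_grid.best_score_)
# -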
# ### Step 4: Using external knowledge
#
# Use the `gensim` word2vec pretrained model to translate words into vectors. Use several models with this new encoding technique. Compare the results, share your thoughts.
# +
from gensim.models import KeyedVectors
wv_from_bin = KeyedVectors.load_word2vec_format(
'data/word2vec_twitter_tokens.bin', binary=True, unicode_errors='ignore',
)
# -
wv_from_bin.wv.get_vector('word').shape
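# If the local `word2vec_twitter_tokens.bin` file is not at hand, a similar pretrained model can be fetched through gensim's downloader. This is only a hedged alternative (not used below): the vectors are GloVe, 200-dimensional rather than 400, and the first call triggers a large download.
# +
import gensim.downloader as api

wv_alt = api.load('glove-twitter-200')  # first call downloads the model (several hundred MB)
wv_alt.get_vector('word').shape         # (200,)
# -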
# Let's check how many of our tokens are actually present in the word2vec vocabulary
# +
good_tokens = []
bad_tokens = []
for token in set(' '.join(texts_train).split()):
    try:
        wv_from_bin.wv.get_vector(token)
        good_tokens.append(token)
    except KeyError:  # token missing from the word2vec vocabulary
        bad_tokens.append(token)
len(good_tokens), len(bad_tokens)
# -
# Almost all tokens are covered, so we won't worry about the missing ones.
# We will turn each text into a vector by averaging the vectors of its tokens.
# +
vec_size = 400
def text2vec(text: str):
    """Average the word2vec vectors of all in-vocabulary tokens of the text."""
    vectors = []
    for token in text.split():
        try:
            vectors.append(wv_from_bin.wv.get_vector(token))
        except KeyError:  # skip tokens missing from the word2vec vocabulary
            continue
    if len(vectors) == 0:
        return np.zeros(vec_size)
    return np.mean(np.array(vectors), axis=0)
# -
X_train = torch.tensor(np.stack(list(map(text2vec, texts_train))), dtype=torch.float32)
X_test = torch.tensor(np.stack(list(map(text2vec, texts_test))), dtype=torch.float32)
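# A possible refinement, not explored here, is to weight each token's vector by its idf before averaging, so that frequent filler words contribute less. A hedged sketch, assuming a `TfIdf` vectorizer fitted with `use_idf=True` (like the one from Step 3) is still in scope:
# +
def text2vec_idf(text: str):
    """Idf-weighted average of word2vec vectors (hypothetical helper, not used below)."""
    vectors, weights = [], []
    for token in text.split():
        idx = vectorizer.token2idx.get(token)
        if idx is None:
            continue
        try:
            vectors.append(wv_from_bin.wv.get_vector(token))
            weights.append(vectorizer.idf[idx])
        except KeyError:
            continue
    if not vectors:
        return np.zeros(vec_size)
    return np.average(np.array(vectors), axis=0, weights=np.array(weights))
# -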
# ### lr=3e-3, weight_decay=0.05, no lr_scheduler
# +
model = nn.Sequential()
model.add_module('l1', nn.Linear(vec_size, 2))
opt = torch.optim.AdamW(model.parameters(), lr=3e-3, weight_decay=0.05)
lr_scheduler = ReduceLROnPlateau(opt, mode='min', factor=0.5, patience=50000, min_lr=3e-5)
train_model(
model, opt, lr_scheduler,
X_train, y_train_torch,
X_test, y_test_torch,
n_iterations=500,
)
for name, X, y in [
('train', X_train, y_train),
('test ', X_test, y_test)
]:
proba = model(X).detach().cpu().numpy()[:, 1]
print(f"{name}: roc auc = {roc_auc_score(y, proba)}")
# -
# Noticeably better. This makes sense, since this word2vec model was trained on tweets, which are very similar to our comments.
# It is clearly not the limit yet (and there is almost no overfitting), so let's increase the number of iterations.
# ### lr=3e-3, weight_decay=0.05, no lr_scheduler + more iterations
# +
model = nn.Sequential()
model.add_module('l1', nn.Linear(vec_size, 2))
opt = torch.optim.AdamW(model.parameters(), lr=3e-3, weight_decay=0.05)
lr_scheduler = ReduceLROnPlateau(opt, mode='min', factor=0.5, patience=50000, min_lr=3e-5)
train_model(
model, opt, lr_scheduler,
X_train, y_train_torch,
X_test, y_test_torch,
n_iterations=3000,
)
for name, X, y in [
('train', X_train, y_train),
('test ', X_test, y_test)
]:
proba = model(X).detach().cpu().numpy()[:, 1]
print(f"{name}: roc auc = {roc_auc_score(y, proba)}")
# -
# The quality grew slightly. Around iteration 1000 the train and test losses started to diverge (overfitting). Let's increase the weight penalty.
# ### lr=3e-3, weight_decay=0.2, no lr_scheduler + more iterations
# +
model = nn.Sequential()
model.add_module('l1', nn.Linear(vec_size, 2))
opt = torch.optim.AdamW(model.parameters(), lr=3e-3, weight_decay=0.2)
lr_scheduler = ReduceLROnPlateau(opt, mode='min', factor=0.5, patience=50000, min_lr=3e-5)
train_model(
model, opt, lr_scheduler,
X_train, y_train_torch,
X_test, y_test_torch,
n_iterations=3000,
)
for name, X, y in [
('train', X_train, y_train),
('test ', X_test, y_test)
]:
proba = model(X).detach().cpu().numpy()[:, 1]
print(f"{name}: roc auc = {roc_auc_score(y, proba)}")
# -
# A little better again. The loss now reaches a plateau, so let's add the scheduler and even more iterations.
# ### lr=3e-3, weight_decay=0.2, with lr_scheduler + even more iterations
# +
model = nn.Sequential()
model.add_module('l1', nn.Linear(vec_size, 2))
opt = torch.optim.AdamW(model.parameters(), lr=3e-3, weight_decay=0.2)
lr_scheduler = ReduceLROnPlateau(opt, mode='min', factor=0.8, patience=10, min_lr=1e-4)
train_model(
model, opt, lr_scheduler,
X_train, y_train_torch,
X_test, y_test_torch,
n_iterations=4000,
)
for name, X, y in [
('train', X_train, y_train),
('test ', X_test, y_test)
]:
proba = model(X).detach().cpu().numpy()[:, 1]
print(f"{name}: roc auc = {roc_auc_score(y, proba)}")
# -
# **Conclusion:**
#
# Transfer learning is the clear winner here. Parameter tuning squeezes out a bit more quality: ROC AUC went up from 0.921 to 0.93.
# Source notebook: homeworks/homework01/homework01_texts.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import json
import pandas as pd
import numpy as np
import feather
import _pickle as pickle
with open('../ref_data/yelp_api_json.json', 'r') as f:
ls_dct = f.readlines()
ls_dct = [json.loads(line) for line in ls_dct]
with open('../ref_data/mid_gps_id_4_yelp.pkl', 'rb') as f:
mid2gps = pickle.load(f)
# -
ls_dct[0]['0.0']['businesses'][0].keys()
# +
output = []
categories_unique = []
for row in ls_dct:
loc_id = list(row.keys())[0]
json_bizs = row[loc_id]['businesses']
    # Flatten each Yelp API response into one row per business
    # (loop variable named `biz` to avoid shadowing the json module).
    for biz in json_bizs:
        output_row = {}
        output_row['loc_id'] = loc_id
        output_row['yelp_biz_id'] = biz['id']
        output_row['name'] = biz['name']
        output_row['stars'] = biz['rating']
        output_row['review_ct'] = biz['review_count']
        output_row['category_alias'] = [v['alias'] for v in biz['categories']]
        for v in biz['categories']:
            output_row[v['alias']] = 1
        output_row['dollar_signs'] = biz.get('price')  # Yelp price level as '$'..'$$$$', may be missing
        output.append(output_row)
# -
yelp_df = pd.DataFrame(output)
yelp_df['loc_id'] = yelp_df['loc_id'].astype(float)
yelp_df = yelp_df.merge(mid2gps, how='left', left_on = 'loc_id', right_on = 'id')
yelp_df['category_alias'] = yelp_df['category_alias'].map(lambda x : '|'.join(x))
yelp_df['dollar_signs'] = yelp_df['dollar_signs'].fillna('')
yelp_df['dollar_signs'] = yelp_df['dollar_signs'].map(lambda x: x.count('$'))
imp_columns = ['loc_id', 'yelp_biz_id','id','name','stars','review_ct','category_alias','dollar_signs','lat_mid','lng_mid']
yelp_cats = [col for col in yelp_df if col not in imp_columns]
all_columns = imp_columns + yelp_cats
yelp_df = yelp_df[all_columns].copy()
yelp_df.fillna(0,inplace=True)
yelp_df.head(10)
yelp_df.to_feather('../ref_data/yelp_df_to_join_to_tt.feather')
# yelp_df = pd.read_feather('../ref_data/yelp_df_to_join_to_tt.feather')
# +
# Per-location pivots of review counts by star level: max, mean and number of businesses.
max_ydf = yelp_df.groupby(['loc_id','stars'])[['review_ct']].max().reset_index()
max_ydf = max_ydf.pivot(index='loc_id', columns='stars', values='review_ct').reset_index()
max_ydf.fillna(0, inplace=True)
prefix = 'max_rev_star_lvl'
max_ydf.columns = ['loc_id'] + [prefix + str(i/2.) for i in range(2,11)]
mean_ydf = yelp_df.groupby(['loc_id','stars'])[['review_ct']].mean().reset_index()
mean_ydf = mean_ydf.pivot(index='loc_id', columns='stars', values='review_ct').reset_index()
mean_ydf.fillna(0, inplace=True)
prefix = 'mean_rev_star_lvl'
mean_ydf.columns = ['loc_id'] + [prefix + str(i/2.) for i in range(2,11)]
ct_ydf = yelp_df.groupby(['loc_id','stars'])[['review_ct']].count().reset_index()
ct_ydf = ct_ydf.pivot(index='loc_id', columns='stars', values='review_ct').reset_index()
ct_ydf.fillna(0, inplace=True)
prefix = 'biz_ct_star_lvl'
ct_ydf.columns = ['loc_id'] + [prefix + str(i/2.) for i in range(2,11)]
# Bucket businesses by review volume and count how many fall into each bucket per location.
breaks = [10,100,250,500,1000,2000]
labels = ['less_10','10-100','100-250','250-500','500-1000','1000-2000','2000+']
yelp_df['review_cat'] = np.digitize(yelp_df['review_ct'],breaks)
yelp_df['review_cat'] = yelp_df['review_cat'].map(lambda x : labels[x])
rev_cat_df = yelp_df.groupby(['loc_id','review_cat'])['review_ct'].count().reset_index()
rev_cat_df = rev_cat_df.pivot(index='loc_id', columns='review_cat', values='review_ct').reset_index()
rev_cat_df
# -
yelp_cats_df = yelp_df.groupby('loc_id')[yelp_cats].sum().reset_index()
yelp_cats_df.columns = ['loc_id']+['ycat_'+ col for col in yelp_cats_df.columns if col not in ['loc_id']]
yelp_cats_df.head()
master_yelp_summary = pd.DataFrame(mid2gps)
master_yelp_summary['loc_id'] = master_yelp_summary['id']
master_yelp_summary = master_yelp_summary.merge(max_ydf, how='left', on='loc_id')\
.merge(mean_ydf, how='left', on='loc_id')\
.merge(ct_ydf, how='left', on='loc_id')\
.merge(yelp_cats_df, how='left', on='loc_id')\
.merge(rev_cat_df, how='left', on='loc_id')
master_yelp_summary.head(10)
master_yelp_summary.to_feather('../ref_data/yelp_summary_stats_df_by_location.feather')
# Source notebook: eda/12-yelp-api-json-to-dataframe.ipynb