code | repo_path
---|---
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Ex - GroupBy
# ### Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv).
# ### Step 3. Assign it to a variable called drinks.
# ### Step 4. Which continent drinks more beer on average?
# ### Step 5. For each continent print the statistics for wine consumption.
# + tags=[]
# -
# ### Step 6. Print the mean alcohol consumption per continent for every column
# + tags=[]
# -
# ### Step 7. Print the median alcohol consumption per continent for every column
# ### Step 8. Print the mean, min and max values for spirit consumption.
# #### This time output a DataFrame
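# #### The cell below is one possible solution sketch (not part of the original exercise). It assumes pandas is installed, the CSV address above is reachable, and that the file has the usual `beer_servings`, `spirit_servings`, `wine_servings` and `continent` columns.
# + tags=[]
import pandas as pd

# Steps 2-3: import the dataset and assign it to drinks
drinks = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')

# Step 4: continent with the highest average beer consumption
print(drinks.groupby('continent')['beer_servings'].mean().idxmax())

# Step 5: wine consumption statistics per continent
print(drinks.groupby('continent')['wine_servings'].describe())

# Step 6: mean alcohol consumption per continent for every numeric column
print(drinks.groupby('continent').mean(numeric_only=True))

# Step 7: median alcohol consumption per continent for every numeric column
print(drinks.groupby('continent').median(numeric_only=True))

# Step 8: mean, min and max spirit consumption, returned as a DataFrame
print(drinks.groupby('continent')['spirit_servings'].agg(['mean', 'min', 'max']))
# -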
| 2-EDA/2-Pandas/Practica/04_Grouping/Alcohol_Consumption/Alcohol_Consumption aula.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Tutorial - Ground Truth Estimation
# > Derive reference segmentations from segmentations of multiple experts.
#
# 
# 
# 
#
# [](https://colab.research.google.com/github/matjesg/deepflash2/blob/master/deepflash2_GUI.ipynb)
# ## 1 - Expert Segmentations
#
# **Required Steps:**
# 1. *Select parent folder* containing sub-folders with segmentation masks, one folder per expert
# 1. Click *Load Data*
#
# <video src="https://user-images.githubusercontent.com/13711052/139746674-4dbd2df4-d780-4ce3-8f77-f7f79a0b4eb9.mov" controls width="100%"></video>
# **Input Details**: *deepflash2* fuses
#
# - binary segmentations of an image, that is, there must be a single foreground value that represents positively classified pixels
#   - Segmentation pixel values: background-class: 0; foreground-class: 1 or 255
# - instance segmentations of an image (instances represent positively classified pixels)
#   - Segmentation pixel values: background-class: 0; foreground-instances: 1,2,...,I
#
# Example input folder structure:
#
# ```
# expert_segmentations   -> one parent folder
# │
# │───expert1            -> one folder per expert
# │   │   mask1.png      -> segmentation masks
# │   │   mask2.png
# │
# └───expert2
#     │   mask1.png
#     │   mask2.png
# ```
#
# All common image formats (tif, png, etc.) are supported. See [imageio docs](https://imageio.readthedocs.io/en/stable/formats/index.html).
# ## 2 - Ground Truth Estimation
#
# **Required Steps:**
# 1. Click *Run* for STAPLE or Majority Voting
#
# <video src="https://user-images.githubusercontent.com/13711052/139746719-6cabfc99-fbbe-4fd3-a495-3984c75507d2.mov" controls width="100%"></video>
#
# - **Simultaneous truth and performance level estimation (STAPLE).** The STAPLE algorithm considers a collection of segmentations and computes a probabilistic estimate of the true segmentation and a measure of the performance level represented by each segmentation. _Source: Warfield, <NAME>., <NAME>, and <NAME>. "Simultaneous truth and performance level estimation (STAPLE): an algorithm for the validation of image segmentation." IEEE transactions on medical imaging 23.7 (2004): 903-921_
# - **Majority Voting.** Use majority voting to obtain the reference segmentation. Note that this filter does not resolve ties. In case of ties it will assign the indicated *MV undecided* label to the result. A minimal voting sketch follows below.
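#
# The cell below is that sketch: it is not part of the *deepflash2* GUI workflow, just a plain majority vote with an undecided label written with `numpy` and `imageio`; the folder layout and the undecided value are assumptions.
# +
import glob
import imageio
import numpy as np

# load the corresponding mask from every expert sub-folder (layout as assumed above)
masks = [imageio.imread(p) > 0 for p in sorted(glob.glob('expert_segmentations/*/mask1.png'))]
votes = np.sum(masks, axis=0)  # number of experts voting "foreground" at each pixel
n_experts = len(masks)

undecided_label = 2  # the "MV undecided" label assigned to tied pixels
fused = np.where(votes > n_experts / 2, 1, 0)
fused = np.where(votes * 2 == n_experts, undecided_label, fused)  # ties only occur for an even number of experts
# -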
# ## 3 - Expert Performance Scores
#
# **Required Steps:**
# 1. Results Table: Click *Open* and *Update*
# - Filter and sort the results
# - Download the results
#
# <video src="https://user-images.githubusercontent.com/13711052/139746788-3df4f730-da4a-4117-9633-18e7832a24d2.mov" controls width="100%"></video>
| nbs/tutorial_gt.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PySpark (Local)
# language: python
# name: pyspark_local
# ---
# * Linear Regression
rentalsDF = spark.createDataFrame([
("Monday",1.5,358),
("Saturday",1.0,272),
("Saturday",0.5,390),
("Monday",3.0,120),
("Saturday",0.3,439),
("Monday",0.9,509),
("Saturday",1.9,102),
("Saturday",2.7,43),
("Monday",0.6,597),
],["weekDay","distanceCenter","rentals"])
rentalsTestDF = spark.createDataFrame([
("Monday",0.1,641),
("Saturday",2.1,129),
("Saturday",1.5,199),
("Monday",2.0,231),
("Sunday",0.5,393)
],["weekDay","distanceCenter","rentals"])
# +
from pyspark.ml.feature import StringIndexer
from pyspark.ml.feature import OneHotEncoderEstimator
from pyspark.ml.feature import VectorAssembler
indexer = StringIndexer(inputCol="weekDay",
outputCol="weekDayIndex", handleInvalid="keep")
indexerModel = indexer.fit(rentalsDF)
indexedDF=indexerModel.transform(rentalsDF)
va=VectorAssembler(inputCols=["weekDayIndex","distanceCenter"],
outputCol="features")
assembledDF=va.transform(indexedDF)
# -
from pyspark.ml.regression import LinearRegression
lr = LinearRegression(labelCol="rentals",featuresCol="features",maxIter=10)
# Fit the model
lrModel = lr.fit(assembledDF)
# Create the predictions
predictionDF=lrModel.transform(assembledDF)
predictionDF.show()
# Print the coefficients and intercept for linear regression
print("Coefficients: %s" % str(lrModel.coefficients))
print("Intercept: %s" % str(lrModel.intercept))
# Summarize the model over the training set and print out some metrics
trainingSummary = lrModel.summary
print("RMSE: %f" % trainingSummary.rootMeanSquaredError)
trainingSummary.residuals.show()
from pyspark.ml.evaluation import RegressionEvaluator
indexedTestDF=indexerModel.transform(rentalsTestDF)
assembledTestDF=va.transform(indexedTestDF)
predictionTestDF=lrModel.transform(assembledTestDF)
predictionTestDF.show()
# compute test error
evaluator = RegressionEvaluator(
labelCol="rentals", predictionCol="prediction", metricName="rmse")
rmse = evaluator.evaluate(predictionTestDF)
print("Root Mean Squared Error (RMSE) on test data = %g" % rmse)
# * Decision tree regression
from pyspark.ml.regression import DecisionTreeRegressor
from pyspark.ml.evaluation import RegressionEvaluator
# Train a DecisionTree model.
dt = DecisionTreeRegressor(labelCol="rentals",featuresCol="features",maxDepth=4)
# Fit the model
dtModel = dt.fit(assembledDF)
# Predict output
predictionDF=dtModel.transform(assembledDF)
predictionDF.show()
# Compute training error
evaluator = RegressionEvaluator(
labelCol="rentals", predictionCol="prediction", metricName="rmse")
rmse = evaluator.evaluate(predictionDF)
print("Root Mean Squared Error (RMSE) on training data = %g" % rmse)
predictionTestDF=dtModel.transform(assembledTestDF)
# compute test error
evaluator = RegressionEvaluator(
labelCol="rentals", predictionCol="prediction", metricName="rmse")
rmse = evaluator.evaluate(predictionTestDF)
print("Root Mean Squared Error (RMSE) on test data = %g" % rmse)
# * **Unsupervised learning: clustering**
data = spark.createDataFrame([
(15000,1000,"Paolo"),
(0,5000,"Luca"),
(20000,800,"Martino"),
(6000,1300,"Mike"),
(50000,2500,"Francesca"),
(2000,1100,"Steve"),
(700,1500,"Maria"),
(75000,0,"Guido"),
(4000,500,"Roberta"),
(7000,3000,"Idilio"),
(3000,900,"Marco"),
(6000,1200,"Dena"),
],["Savings","Income","User"])
dataNewDF = spark.createDataFrame([
(10000,1860,"MARIANA"),
(4500,1100,"Nicola"),
(27000,1000,"Davide"),
],["Savings","Income","User"])
from pyspark.ml.feature import StandardScaler
from pyspark.ml.feature import VectorAssembler
va=VectorAssembler(inputCols=["Savings","Income"],
outputCol="features")
assembledDF=va.transform(data)
scaler = StandardScaler(inputCol="features",
outputCol="scaledFeatures", withStd=True, withMean=True)
scalerModel = scaler.fit(assembledDF)
scaledDF=scalerModel.transform(assembledDF)
scaledDF.show()
# * K-means clustering algorithm
from pyspark.ml.clustering import KMeans
# Trains a k-means model.
kmeans = KMeans(k=3,featuresCol="scaledFeatures",initMode="k-means||")
model = kmeans.fit(scaledDF)
# Make predictions
predictionsDF = model.transform(scaledDF)
predictionsDF.show()
from pyspark.ml.evaluation import ClusteringEvaluator
# Shows the result.
centers = model.clusterCenters()
print("Cluster Centers: ")
for center in centers:
    print(center)
print("Size of the clusters: ", model.summary.clusterSizes)
# Evaluate clustering by computing Silhouette score
evaluator = ClusteringEvaluator()
silhouette = evaluator.evaluate(predictionsDF)
print("Silhouette with squared euclidean distance = " + str(silhouette))
print("SSE: ",model.computeCost(predictionsDF))
assembledNewDF=va.transform(dataNewDF)
scaledNewDF=scalerModel.transform(assembledNewDF)
# Make predictions
predictionsNewDF = model.transform(scaledNewDF)
predictionsNewDF.show()
# * Gaussian mixture model
from pyspark.ml.clustering import GaussianMixture
# Trains a GMM model.
gmm = GaussianMixture(k=3,featuresCol="scaledFeatures")
model = gmm.fit(scaledDF)
# Make predictions
predictionsDF = model.transform(scaledDF)
predictionsDF.show(truncate=False)
# +
print("Gaussians weights shown as a DataFrame: ")
model.gaussiansDF.show(truncate=False)
print("Size of the clusters: ", model.summary.clusterSizes)
from pyspark.ml.evaluation import ClusteringEvaluator
# Evaluate clustering by computing Silhouette score
evaluator = ClusteringEvaluator()
silhouette = evaluator.evaluate(predictionsDF)
print("Silhouette with squared euclidean distance = " + str(silhouette))
# -
assembledNewDF=va.transform(dataNewDF)
scaledNewDF=scalerModel.transform(assembledNewDF)
# Make predictions
predictionsNewDF = model.transform(scaledNewDF)
predictionsNewDF.show()
| class example/16RegressionClustering.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
The raw code for this IPython notebook is by default hidden for easier reading.
To toggle on/off the raw code, click <a href="javascript:code_toggle()">here</a>.''')
# +
# %%html
<style>
.output_wrapper button.btn.btn-default,
.output_wrapper .ui-dialog-titlebar {
display: none;
}
</style>
# -
# <h1 align='center'>Introducting Interactivity into Jupyter Notebooks</h1>
#
# <h4 align='center'><NAME>$\mid$ SciProg $\mid$ Simon Fraser University</h4>
#
# <h2 align='center'>Jupyter Magics</h2>
#
# What if we want to embed HTML, Javascript or another language within our Jupyter notebook? Furthermore, what if we want to allow the user to interact with plots by clicking on them?
#
# It turns out we can do that via "magics". In this notebook we will focus on the %%html, %%latex and %matplotlib notebook magics, although the reader is welcome to explore the full list of magics here: https://ipython.readthedocs.io/en/stable/interactive/magics.html#
#
# To call a magic, we will follow the format
#
# # %%magic_name for a cell magic, or %magic_name for a line magic
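# For instance (an illustrative cell that is not part of the original notebook), the %%latex cell magic renders its contents as LaTeX:
# + language="latex"
# \begin{align}
# e^{i\pi} + 1 &= 0
# \end{align}
# -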
# <h3 align='center'>Interactive Plots with the %matplotlib notebook magic</h3>
#
# We will begin with a simple example that allows the user to create images by building line segments wherever they click. We have implemented a button that lets the user restart the drawing when pressed.
# +
# # %matplotlib notebook
# import numpy as np
# import matplotlib.pyplot as plt
# fig, ax = plt.subplots()
# ax.plot(np.random.rand(10))
# def onclick(event):
# print('%s click: button=%d, x=%d, y=%d, xdata=%f, ydata=%f' %
# ('double' if event.dblclick else 'single', event.button,
# event.x, event.y, event.xdata, event.ydata))
# cid = fig.canvas.mpl_connect('button_press_event', onclick)
# +
# %matplotlib notebook
from matplotlib import pyplot as plt
from ipywidgets import widgets,Layout
from IPython.display import Javascript
def run_cells(ev):
    display(Javascript('IPython.notebook.execute_cell_range(IPython.notebook.get_selected_index(),IPython.notebook.get_selected_index()+1)'))
mean_exercise_button = widgets.Button( button_style='info',description="Restart", layout=Layout(width='20%', height='30px') )
# On button click, execute the next cell
mean_exercise_button.on_click( run_cells )
class LineBuilder:
    def __init__(self, line):
        self.line = line
        self.xs = list(line.get_xdata())
        self.ys = list(line.get_ydata())
        self.cid = line.figure.canvas.mpl_connect('button_press_event', self)
    def __call__(self, event):
        print('click', event)
        if event.inaxes!=self.line.axes: return
        self.xs.append(event.xdata)
        self.ys.append(event.ydata)
        self.line.set_data(self.xs, self.ys)
        self.line.figure.canvas.draw()
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_title('click to build line segments')
line, = ax.plot([0], [0]) # empty line
linebuilder = LineBuilder(line)
plt.show()
# Display widgets
display(mean_exercise_button)
mean_exercise_button.on_click( run_cells )
# -
# #### Exercise
#
# 1. Go to the top of this notebook and click on the word "here"
#
# The raw code for this IPython notebook is by default hidden for easier reading. To toggle on/off the raw code, click here.
#
# 2. Find all cells containing the magic %matplotlib notebook and replace them with the magic %matplotlib inline.
#
# 3. Restart and run the notebook.
#
# 4. What is different this time?
#
# As it turns out the %matplotlib notebook magic is crucial for allowing user interaction within matplotlib plots.
#
#
# <h3 align='center'>HTML (and Javascript) Magic</h3>
#
# Suppose we want to create a timeline that the user can navigate. It is probably more difficult to achieve that with Python than it is with HTML. We will call the %%html magic to embed HTML code into our notebook.
#
# For example, the magic used below was implemented to make the code for the plot above easier to read.
#
# As an exercise, remove that cell and press Restart and Run all. What is different this time?
#
#
# +
# %%html
<style>
.output_wrapper button.btn.btn-default,
.output_wrapper .ui-dialog-titlebar {
display: none;
}
</style>
# -
# We can use the %%html magic to make our notebook interactive using Javascript. See the example below, which implements a simple timeline.
# + language="html"
#
# <style>
#
# * {box-sizing:border-box}
#
# /* Slideshow container */
# .slideshow-container {
# max-width: 1000px;
# position: relative;
# margin: auto;
# }
#
# /* Hide the images by default */
# .mySlides {
# display: none;
# }
#
# /* Next & previous buttons */
# .prev, .next {
# cursor: pointer;
# position: absolute;
# top: 50%;
# width: auto;
# margin-top: -22px;
# padding: 16px;
# color: black;
# font-weight: bold;
# font-size: 18px;
# transition: 0.6s ease;
# border-radius: 0 3px 3px 0;
# }
#
# /* Position the "next button" to the right */
# .next {
# right: 0;
# border-radius: 3px 0 0 3px;
# }
#
# /* On hover, add a black background color with a little bit see-through */
# .prev:hover, .next:hover {
# background-color: rgba(0,0,0,0.8);
# }
#
# /* Caption text */
# .text {
# color: #000000;
# font-size: 15px;
# padding: 8px 12px;
# position: absolute;
# bottom: 8px;
# width: 100%;
# text-align: right;
# }
#
# /* Number text (1/3 etc) */
# .numbertext {
# color: #f2f2f2;
# font-size: 12px;
# padding: 8px 12px;
# position: absolute;
# top: 0;
# }
#
# /* The dots/bullets/indicators */
# .dot {
# cursor: pointer;
# height: 15px;
# width: 15px;
# margin: 0 2px;
# background-color: black;
# border-radius: 50%;
# display: inline-block;
# transition: background-color 0.6s ease;
# }
#
# .active, .dot:hover {
# background-color: #717171;
# }
#
# /* Fading animation */
# .fade {
# -webkit-animation-name: fade;
# -webkit-animation-duration: 1.5s;
# animation-name: fade;
# animation-duration: 1.5s;
# }
#
# @-webkit-keyframes fade {
# from {opacity: .4}
# to {opacity: 1}
# }
#
# @keyframes fade {
# from {opacity: .4}
# to {opacity: 1}
# }
#
# </style>
#
# <script>
#
# var slideIndex = 1;
# showSlides(slideIndex);
#
# // Next/previous controls
# function plusSlides(n) {
# showSlides(slideIndex += n);
# }
#
# // Thumbnail image controls
# function currentSlide(n) {
# showSlides(slideIndex = n);
# }
#
# function showSlides(n) {
# var i;
# var slides = document.getElementsByClassName("mySlides");
# var dots = document.getElementsByClassName("dot");
# if (n > slides.length) {slideIndex = 1}
# if (n < 1) {slideIndex = slides.length}
# for (i = 0; i < slides.length; i++) {
# slides[i].style.display = "none";
# }
# for (i = 0; i < dots.length; i++) {
# dots[i].className = dots[i].className.replace(" active", "");
# }
# slides[slideIndex-1].style.display = "block";
# dots[slideIndex-1].className += " active";
# }
#
# </script>
#
# <body>
# <div class="slideshow-container">
#
# <!-- Full-width images with number and caption text -->
#
# <div class="mySlides">
# <div class="numbertext">1 / 3</div>
# <img src="./images/tree3.png" style="width:15%;height:150px">
# </div>
#
# <div class="mySlides">
# <div class="numbertext">2 / 3</div>
# <img src="./images/tree4.png" style="width:25%;height:250px">
# </div>
#
# <div class="mySlides">
# <div class="numbertext">3 / 3</div>
# <img src="./images/tree5.png" style="width:35%;height:350px">
# </div>
#
#
#
# <!-- Next and previous buttons -->
# <a class="prev" onclick="plusSlides(-1)">❮</a>
# <a class="next" onclick="plusSlides(1)">❯</a>
# </div>
# <br>
#
# <!-- The dots/circles -->
# <div style="text-align:center">
# <span class="dot" onclick="currentSlide(1)"></span>
# <span class="dot" onclick="currentSlide(2)"></span>
# <span class="dot" onclick="currentSlide(3)"></span>
#
# </div>
#
#
# </body>
# -
# <h2 align='center'>Summary</h2>
#
# In this notebook we learned about the use of magics and how they can help us introduce interactivity into our notebooks.
#
# To see more elaborate examples of what can be done with html and Javascript, we leave the reader to explore D3 https://d3js.org/
#
# This concludes this workshop.
| Notebooks/Jupyter_Magics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
from sklearn import datasets # provides scikit-learn's built-in datasets
import pandas as pd
data=datasets.load_iris()
#load the iris dataset
print(data)
print(type(data))
df = pd.DataFrame(data.data, columns=data.feature_names)
df.head()
#convert the imported sklearn dataset into a pandas data frame
print(type(df))
df.count()
df.describe() #summary statistics
print(df.median())
print(df.var())
print(df.std()) #calculate the different summary stats individually
| section4/Lecture22.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Sanderson's Wonderland
# ### Angewandte Systemwissenschaften I
# #### Python - Wonderland
# + [markdown] slideshow={"slide_type": "slide"}
# # Imports
# + slideshow={"slide_type": "-"}
from math import exp # Some forgot to import 'exp' explicitly.
# If you prefer 'import math', you have to write "math.exp", because that only adds
# 'math' to the namespace, and 'exp' lives inside the 'math' namespace.
import matplotlib.pyplot as plt
# %matplotlib inline
# + [markdown] slideshow={"slide_type": "slide"}
# # Functions
# (without the environment)
# + slideshow={"slide_type": "-"}
def technologie(p): return p*(1.0-CHI)
def wirtschaft(y,z): return y*(1.0+GAMMA-(GAMMA+ETA)*(1.0-z)**LAMBDA)
def population(x,y,z): return x*(1.0+geburtenrate(y,z)-sterberate(y,z))
def geburtenrate(y,z): e = y_strich(BETA,y,z); return BETA1 * (BETA2 - e/(1.0+e))
def sterberate(y,z): e = y_strich(ALPHA,y,z); return DELTA1 * (DELTA2 - e/(1.0+e)) * (1.0 + DELTA3 * (1.0-z)**THETA)
def y_strich(CONST,y,z): return exp(CONST * (y - umweltschutz(y,z)))
def fluss_emissionen(x,y,z,p): c_strich = exp(EPSILON * umweltschutz(y,z) * x); return x*y*p - KAPPA * (c_strich/(1.0+c_strich) - 0.5)
def umweltschutz(y,z): return PHI * (1.0-z)**MY * y
# + [markdown] slideshow={"slide_type": "slide"}
# # Parameters
# (```CHI = 0.01``` $\rightarrow$ Environmentalist's Nightmare)
# + slideshow={"slide_type": "-"}
# Birth rate
BETA1 = 0.04; BETA2 = 1.375; BETA = 0.16
# Death rate
ALPHA = 0.18; DELTA1 = 0.01; DELTA2 = 2.5; DELTA3 = 4.0; THETA = 15.0
# Economy
GAMMA = 0.02; ETA = 0.1; LAMBDA = 2.0
# Environment
KAPPA = 2.0; EPSILON = 0.02; DELTA = 1.0; RHO = 2.0; OMEGA = 0.1; NY = 1.0
# Environmental protection
PHI = 0.5; MY = 2.0
# Technology
CHI = 0.01
# + [markdown] slideshow={"slide_type": "-"}
# # $\chi = 0.01$
# + [markdown] slideshow={"slide_type": "slide"}
# # Environment function
# + slideshow={"slide_type": "-"}
def umwelt(x, y, z, p):
    g = exp(DELTA*z**RHO - OMEGA*fluss_emissionen(x,y,z,p))
    return z + NY * (z-z**2) * (g-1)
# -
# # Simulation
x_0 = y_0 = p_0 = 1.0
z_0 = 0.98
zustand_0 = (x_0, y_0, z_0, p_0)
def simulation(years=300):
    res = [zustand_0]
    for year in range(1, years, 1):  # 300 years by default
        # unpack the state at time t
        x_t, y_t, z_t, p_t = res[-1]
        # compute the state at t+1
        x_neu = population(x_t, y_t, z_t)
        y_neu = wirtschaft(y_t, z_t)
        z_neu = umwelt(x_t, y_t, z_t, p_t)
        p_neu = technologie(p_t)
        # build the state tuple for t+1
        zustand_neu = (x_neu, y_neu, z_neu, p_neu)
        # store the new state
        res.append(zustand_neu)
    return res
plt.plot([(x,z,p) for x,y,z,p in simulation()])
| notebooks/1_2_Wonderland-short.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.10 ('conda-forge')
# language: python
# name: python3
# ---
# +
import heapq
def lilysHomework(arr):
    '''
    My first idea is to follow what I did in largest-permutation.
    '''
    # create the index dictionary
    ind_dict = {a: i for i, a in enumerate(arr)}
    # create the min heap
    min_heap = arr.copy()
    heapq.heapify(min_heap)
    k = 0
    for i in range(len(arr)):
        min_num = heapq.heappop(min_heap)
        min_idx = ind_dict[min_num]
        if min_idx != i:
            ind_dict[arr[i]] = min_idx
            arr[i], arr[min_idx] = arr[min_idx], arr[i]
            k += 1
    return k
# -
print(lilysHomework([7, 15, 12, 3]))
print(lilysHomework([2, 5, 3, 1]))
print(lilysHomework([3, 4, 2, 5, 1]))
# Oops. This first try does not work out.
# I also have to check the number of swaps needed to sort the array in decreasing order!
# +
def count_swaps(arr):
    # create the index dictionary
    ind_dict = {a: i for i, a in enumerate(arr)}
    # create the min heap
    min_heap = arr.copy()
    heapq.heapify(min_heap)
    k = 0
    for i in range(len(arr)):
        min_num = heapq.heappop(min_heap)
        min_idx = ind_dict[min_num]
        if min_idx != i:
            ind_dict[arr[i]] = min_idx
            arr[i], arr[min_idx] = arr[min_idx], arr[i]
            k += 1
    return k

def lilysHomework2(arr):
    '''
    Second try: maintain a decreasing list as well
    '''
    return min(count_swaps(arr.copy()), count_swaps(arr[::-1].copy()))
# -
print(lilysHomework2([7, 15, 12, 3]))
print(lilysHomework2([2, 5, 3, 1]))
print(lilysHomework2([3, 4, 2, 5, 1]))
| hacker-rank/lilys-homework.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from model import *
from data import *
base_path = "/home/koosk/data/images/renard/unet/HCT116_Intro_Replicate1"
# -
# ## Train your Unet with membrane data
# membrane data is in folder membrane/, it is a binary classification task.
#
# The input shapes of image and mask are the same: (batch_size, rows, cols, channel=1)
# ### Train with data generator
data_gen_args = dict(rotation_range=0.2,
width_shift_range=0.05,
height_shift_range=0.05,
shear_range=0.05,
zoom_range=0.05,
horizontal_flip=True,
fill_mode='nearest')
myGene = trainGenerator(2,base_path+'/train','image','label',data_gen_args,save_to_dir = None)
model = unet()
model_checkpoint = ModelCheckpoint(base_path+'/unet.hdf5', monitor='loss',verbose=1, save_best_only=True)
model.fit_generator(myGene,steps_per_epoch=4,epochs=5,callbacks=[model_checkpoint])
# ### Train with npy file
# +
# imgs_train,imgs_mask_train = geneTrainNpy("data/HCT116_Intro_Replicate1/train/aug/","data/HCT116_Intro_Replicate1/train/aug/")
# model.fit(imgs_train, imgs_mask_train, batch_size=2, nb_epoch=10, verbose=1,validation_split=0.2, shuffle=True, callbacks=[model_checkpoint])
# -
# ### test your model and save predicted results
testGene = testGenerator(base_path+"/test",4)
model = unet()
model.load_weights(base_path+"/unet.hdf5")
results = model.predict(testGene,4,verbose=1)
saveResult(base_path+"/test",results)
for i in range(4):
    r = results[i]
    print(f"{np.min(r)} {np.max(r)} {np.max(r)*255}")
| trainUnet.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # <center>Python Basics<center/>
# <img height="60" width="120" src="https://www.python.org/static/img/python-logo-large.png?1414305901"></img>
# # Table of contents
# <br/>
# <a href = "#9.-Python-Output">09. Python Output</a><br/>
# <a href = "#10.-Output-Formatting">10. Output Formatting</a><br/>
# <a href = "#11.-Python-Input">11. Python Input</a>
# # Python I/O functions
# # 9. Python Output
# To see the output on the screen (standard output device), use the <b>print()</b> function
print("The output will be displayed on the screen")
myInt = 400
print("The value of myInt is", myInt)
print("The value of myInt is " + str(myInt)) # If you wish to print the number in the string form
# # 10. Output Formatting
# To make the output look more attractive,<br/>
# we use the <b>str.format()</b> method.<br/>
# This method is available on any string object.
# +
myInt1 = 20
myInt2 = 40
print("myInt1: {} is the half of myInt2: {}".format(myInt1, myInt2)) # Considers the Python provided default Type
# +
myInt1 = 20
myInt2 = 40
print("myInt2: {1} is double of myInt1: {0}".format(myInt1, myInt2)) # Considers position of arguments
# -
# Use of keyword arguments to format the string
print("Welcome {name} to the world of {ds}".format(name="Suchit", ds="Data Science"))
# A combination of positional and keyword arguments can also be used
print('Welcome {0}, to the world of {1}. We are learning {Subject}.'.format('Suchit', 'Data Science', Subject='Python'))
# Refer documentation for more examples: https://docs.python.org/3.4/library/string.html#format-examples
print('{:+f}; {:+f}'.format(3.14, -3.14)) # show the signs
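# A couple more format-specification examples (added for illustration): field width, precision and alignment
print("{:10.3f}".format(3.14159)) # right-aligned in a field of width 10, with 3 decimal places
print("{:>8}|{:<8}".format("right", "left")) # explicit alignment inside fixed-width fields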
# # 11. Python Input
# User inputs can be taken in Python by using the input() function.
userInput = input("Please enter some data: ")
print("You typed in: ",userInput)
userInput = input("Enter class strength: ")
print("You typed in: ",userInput)
print("userInput type is : ",type(userInput))
classStrength = int(userInput)
print("classStrength value is: ",classStrength)
print("classStrength type is : ",type(classStrength))
| Python Primer/PythonBasics_3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data Wrangling: Join, Combine, and Reshape
import numpy as np
import pandas as pd
pd.options.display.max_rows = 20
np.random.seed(12345)
import matplotlib.pyplot as plt
plt.rc('figure', figsize=(10, 6))
np.set_printoptions(precision=4, suppress=True)
# ## 8.1 Hierarchical Indexing
# Let's look at an example right away.
#
# Nine numbers, grouped under three 'a', two 'b', two 'c' and two 'd' outer labels
#
# 'a' has three entries, numbered 1, 2, 3
#
# 'b' has two entries, numbered 1 and 3
#
# and so on.
data = pd.Series(np.random.randn(9),
index=[['a', 'a', 'a', 'b', 'b', 'c', 'c', 'd', 'd'],
[1, 2, 3, 1, 3, 1, 2, 2, 3]])
data
# Inspect the index of the data; it returns a MultiIndex,
#
# which records, for each row, the label at every level
data.index
# Selecting subsets of the data
data['b']
data['b':'c']
data.loc[['b', 'd']]
# all rows whose inner-level label is 2
data.loc[:, 2]
# unstack into a DataFrame
data.unstack(fill_value = 0)
# the inverse operation
data.unstack().stack()
# In a DataFrame, either axis can have a hierarchical index
frame = pd.DataFrame(np.arange(12).reshape((4, 3)),
index=[['a', 'a', 'b', 'b'], [1, 2, 1, 2]],
columns=[['Ohio', 'Ohio', 'Colorado'],
['Green', 'Red', 'Green']])
frame
# name the levels of the row and column indexes
frame.index.names = ['key1', 'key2']
frame.columns.names = ['state', 'color']
frame
# select a group of columns
frame['Ohio']
# ### 8.1.1 Reordering and Sorting Levels
# swaplevel takes two level numbers or level names and returns a new object with the levels interchanged
#
# In plain words, the order of the grouping levels is swapped:
#
# before it was key1 first, then key2;
#
# now it is key2 first, then key1
frame.swaplevel('key1', 'key2')
# sort_index sorts the data using only the values in a single level
frame.sort_index(level=0) # sort by key1
# frame.sort_index(level=1) # sort by key2
# swap the levels and sort by level 1 of the swapped result
frame.swaplevel(0, 1).sort_index(level=1)
# ### 8.1.2 Summary Statistics by Level
# aggregate on a particular level of a particular axis
frame.sum(level='key2')
# choose which column level to keep; the columns have two levels, state and color
frame.sum(level='color', axis=1)
frame.sum(level='state', axis=1)
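# Note (added): in recent pandas versions the level argument of sum() is deprecated;
# the equivalent groupby form of the row-level aggregation above is
frame.groupby(level='key2').sum()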
# ### 8.1.3 Indexing with a DataFrame's columns
# Consider this example
frame = pd.DataFrame({'a': range(7), 'b': range(7, 0, -1),
'c': ['one', 'one', 'one', 'two', 'two',
'two', 'two'],
'd': [0, 1, 2, 0, 1, 2, 3]})
frame
# use columns c and d as the index
#
# set_index returns a new DataFrame
frame2 = frame.set_index(['c', 'd'])
frame2
# By default, the columns are moved into the index and removed from the data.
#
# To use them as the index while also keeping them as regular columns, pass drop=False
frame.set_index(['c', 'd'], drop=False)
# reset_index() is the inverse of set_index()
frame2.reset_index()
# ## 8.2 Combining and Merging Datasets
# Various ways of combining data
import numpy as np
import pandas as pd
pd.options.display.max_rows = 20
np.random.seed(12345)
import matplotlib.pyplot as plt
plt.rc('figure', figsize=(10, 6))
np.set_printoptions(precision=4, suppress=True)
# ### 8.2.1 Database-Style DataFrame Joins
# #### merge
# Create a DataFrame
#
# key: a, b, c
#
# data column: data1
df1 = pd.DataFrame({'key': ['b', 'b', 'a', 'c', 'a', 'a', 'b'],
'data1': range(7)})
df1
# Create a second one; to illustrate merge's default behaviour it is modified slightly
#
# key: a, b, d, with b appearing twice
#
# data column: data2
df2 = pd.DataFrame({'key': ['a', 'b', 'd','b'],
'data2': range(4)})
df2
# #### merge case 1: no join key specified; by default merge joins on the overlapping column name, here key
#
# Look at the result
pd.merge(df1, df2)
# Of the key values, a and b are kept; the non-overlapping keys c and d are dropped.
#
# For b,
#
# data1 has three rows with values 0, 1, 6
#
# and data2 has two rows with values 1, 3,
#
# so the merged table contains 3 x 2 = 6 rows for b.
#
# The same logic applies to a.
#
#
# #### merge case 2: explicitly join on the key column
#
pd.merge(df1, df2, on='key')
# #### merge case 3: specify the join column separately for the left and right tables
#
# Another example is discussed in this video (at 10:37):
#
# https://www.youtube.com/watch?v=vv4rYbqq7k0&list=PL8xPPUJdubH4rAr3gw8q8-zYRIsA-QaBl&index=27
df3 = pd.DataFrame({'lkey': ['b', 'b', 'a', 'c', 'a', 'a', 'b'],
'data1': range(7)})
df3
df4 = pd.DataFrame({'rkey': ['a', 'b', 'd'],
'data2': range(3)})
df4
pd.merge(df3, df4, left_on='lkey', right_on='rkey')
# By default c and d are dropped; to keep them, set the how parameter
pd.merge(df1, df2, how='outer')
# ##### The how parameter
df1 = pd.DataFrame({'key': ['b', 'b', 'a', 'c', 'a', 'b'],
'data1': range(6)})
df1
df1 = pd.DataFrame({'key': ['b', 'b', 'a', 'c', 'a', 'b'],
'data1': range(6)})
df2 = pd.DataFrame({'key': ['a', 'b', 'a', 'b', 'd'],
'data2': range(5)})
pd.merge(df1, df2, on='key', how='left')
df2 = pd.DataFrame({'key': ['a', 'b', 'a', 'b', 'd'],
'data2': range(5)})
df2
pd.merge(df1, df2, on='key', how='left')
pd.merge(df1, df2, on='key', how='right')
pd.merge(df1, df2, how='inner')
left = pd.DataFrame({'key1': ['foo', 'foo', 'bar'],
'key2': ['one', 'two', 'one'],
'lval': [1, 2, 3]})
left
right = pd.DataFrame({'key1': ['foo', 'foo', 'bar', 'bar'],
'key2': ['one', 'one', 'one', 'two'],
'rval': [4, 5, 6, 7]})
right
pd.merge(left, right, on=['key1', 'key2'], how='outer')
pd.merge(left, right, on='key1')
pd.merge(left, right, on='key1', suffixes=('_left', '_right'))
# ### Arguments of the merge function
#
# left: DataFrame used on the left side of the merge
#
# right: DataFrame used on the right side of the merge
#
# how: one of inner, outer, left, right
#
# on: column names to join on
#
# left_on: columns of the left DataFrame to use as join keys
#
# right_on: columns of the right DataFrame to use as join keys
#
# left_index: use the left row index as its join key
#
# right_index: use the right row index as its join key
#
# ...
# ### 8.2.2 Merging on Index
# Say you have two tables that you want to merge.
#
# Define the first one
left1 = pd.DataFrame({'key': ['a', 'b', 'a', 'a', 'b', 'c'],
'value': range(6)})
left1
# note the key values a, b, c
# Define the second table, which has only two rows, indexed a and b
right1 = pd.DataFrame({'group_val': [3.5, 7]}, index=['a', 'b'])
right1
# Use merge to combine the two tables:
#
# left1 on the left, right1 on the right
#
# the left table joins on its key column (left_on='key')
#
# the right table joins on its index (right_index=True)
pd.merge(left1, right1, left_on='key', right_index=True)
# Now c is gone; only a and b remain,
#
# ordered a first, then b
# how: inner, outer, left, right
#
# set how to outer
pd.merge(left1, right1, left_on='key', right_index=True, how='outer')
# Now c is kept, with NaN where there is no matching value
# ### The multi-index case
#
# With hierarchically indexed data things are more involved
# First define the table that goes on the left
lefth = pd.DataFrame({'key1': ['Ohio', 'Ohio', 'Ohio',
'Nevada', 'Nevada'],
'key2': [2000, 2001, 2002, 2001, 2002],
'data': np.arange(5.)})
lefth
# Define the right table; note its index, which is what we join on
righth = pd.DataFrame(np.arange(12).reshape((6, 2)),
index=[['Nevada', 'Nevada', 'Ohio', 'Ohio',
'Ohio', 'Ohio'],
[2001, 2000, 2000, 2000, 2001, 2002]],
columns=['event1', 'event2'])
righth
# Note that the index has two levels: the state (Nevada, Ohio) and the year
#
# The left table joins on left_on=['key1', 'key2']
#
# the right table joins on its index (right_index=True)
pd.merge(lefth, righth, left_on=['key1', 'key2'], right_index=True)
# By default, rows whose two-level keys appear in both tables are aligned;
#
# the non-matching ones are dropped
# To keep them, set how='outer'
pd.merge(lefth, righth, left_on=['key1', 'key2'],
right_index=True, how='outer')
# #### Joining on the indexes of both sides
# The left table's index is a, c, e
left2 = pd.DataFrame([[1., 2.], [3., 4.], [5., 6.]],
index=['a', 'c', 'e'],
columns=['Ohio', 'Nevada'])
left2
# The right table's index is b, c, d, e
right2 = pd.DataFrame([[7., 8.], [9., 10.], [11., 12.], [13, 14]],
index=['b', 'c', 'd', 'e'],
columns=['Missouri', 'Alabama'])
right2
# merge on both indexes
#
# by setting left_index and right_index
pd.merge(left2, right2, how='outer', left_index=True, right_index=True)
# #### The join method
# join can also merge on the index
left2.join(right2, how='outer')
# With on='key', left1's key column is matched against right1's index
#
left1.join(right1, on='key')
# Several tables can be joined at once;
#
# this is similar to concat, introduced below
another = pd.DataFrame([[7., 8.], [9., 10.], [11., 12.], [16., 17.]],
index=['a', 'c', 'e', 'f'],
columns=['New York', 'Oregon'])
another
# placed side by side
left2.join([right2, another])
# with how='outer'
left2.join([right2, another], how='outer')
# ### 8.2.3 Concatenating Along an Axis
# NumPy's concatenate
# create an array
arr = np.arange(12).reshape((3, 4))
arr
# concatenate two arrays along axis=1
np.concatenate([arr, arr], axis=1)
np.concatenate([arr, arr], axis=0)
# ##### pandas concat
#
# three Series with non-overlapping indexes
s1 = pd.Series([0, 1], index=['a', 'b'])
s2 = pd.Series([2, 3, 4], index=['c', 'd', 'e'])
s3 = pd.Series([5, 6], index=['f', 'g'])
# axis=0 by default
pd.concat([s1, s2, s3])
# with axis=1
pd.concat([s1, s2, s3], axis=1)
s4 = pd.concat([s1, s3])
s4
pd.concat([s1, s4], axis=1)
# with join='inner', the f and g labels disappear
pd.concat([s1, s4], axis=1, join='inner')
# the axis labels to use can be specified (note: join_axes was removed in pandas 1.0; use reindex instead)
pd.concat([s1, s4], axis=1, join_axes=[['a', 'c', 'b', 'e']])
# keys create a hierarchical index that identifies each concatenated piece
result = pd.concat([s1, s1, s3], keys=['one', 'two', 'three'])
result
result.unstack()
pd.concat([s1, s2, s3], axis=1, keys=['one', 'two', 'three'])
#
df1 = pd.DataFrame(np.arange(6).reshape(3, 2), index=['a', 'b', 'c'],
columns=['one', 'two'])
df2 = pd.DataFrame(5 + np.arange(4).reshape(2, 2), index=['a', 'c'],
columns=['three', 'four'])
df1
df2
pd.concat([df1, df2], axis=1, keys=['level1', 'level2'])
pd.concat({'level1': df1, 'level2': df2}, axis=1)
pd.concat([df1, df2], axis=1, keys=['level1', 'level2'],
names=['upper', 'lower'])
df1 = pd.DataFrame(np.random.randn(3, 4), columns=['a', 'b', 'c', 'd'])
df2 = pd.DataFrame(np.random.randn(2, 3), columns=['b', 'd', 'a'])
df1
df2
pd.concat([df1, df2], ignore_index=True)
# ### 8.2.4 Combining Data with Overlap
a = pd.Series([np.nan, 2.5, np.nan, 3.5, 4.5, np.nan],
index=['f', 'e', 'd', 'c', 'b', 'a'])
b = pd.Series(np.arange(len(a), dtype=np.float64),
index=['f', 'e', 'd', 'c', 'b', 'a'])
b[-1] = np.nan
a
b
np.where(pd.isnull(a), b, a)
b[:-2].combine_first(a[2:])
df1 = pd.DataFrame({'a': [1., np.nan, 5., np.nan],
'b': [np.nan, 2., np.nan, 6.],
'c': range(2, 18, 4)})
df2 = pd.DataFrame({'a': [5., 4., np.nan, 3., 7.],
'b': [np.nan, 3., 4., 6., 8.]})
df1
df2
df1.combine_first(df2)
# ## 8.3 Reshaping and Pivoting
# ### 8.3.1 Reshaping with Hierarchical Indexing
data = pd.DataFrame(np.arange(6).reshape((2, 3)),
index=pd.Index(['Ohio', 'Colorado'], name='state'),
columns=pd.Index(['one', 'two', 'three'],
name='number'))
data
result = data.stack()
result
result.unstack()
result.unstack(0)
result.unstack('state')
s1 = pd.Series([0, 1, 2, 3], index=['a', 'b', 'c', 'd'])
s2 = pd.Series([4, 5, 6], index=['c', 'd', 'e'])
data2 = pd.concat([s1, s2], keys=['one', 'two'])
data2
data2.unstack()
data2.unstack()
data2.unstack().stack()
data2.unstack().stack(dropna=False)
df = pd.DataFrame({'left': result, 'right': result + 5},
columns=pd.Index(['left', 'right'], name='side'))
df
df.unstack('state')
df.unstack('state').stack('side')
# ### 8.3.2 Pivoting “Long” to “Wide” Format
data = pd.read_csv('examples/macrodata.csv')
data.head()
periods = pd.PeriodIndex(year=data.year, quarter=data.quarter,
name='date')
columns = pd.Index(['realgdp', 'infl', 'unemp'], name='item')
data = data.reindex(columns=columns)
data.index = periods.to_timestamp('D', 'end')
ldata = data.stack().reset_index().rename(columns={0: 'value'})
ldata[:10]
pivoted = ldata.pivot('date', 'item', 'value')
pivoted
ldata['value2'] = np.random.randn(len(ldata))
ldata[:10]
pivoted = ldata.pivot('date', 'item')
pivoted[:5]
pivoted['value'][:5]
unstacked = ldata.set_index(['date', 'item']).unstack('item')
unstacked[:7]
# ### 8.3.3 Pivoting “Wide” to “Long” Format
df = pd.DataFrame({'key': ['foo', 'bar', 'baz'],
'A': [1, 2, 3],
'B': [4, 5, 6],
'C': [7, 8, 9]})
df
melted = pd.melt(df, ['key'])
melted
reshaped = melted.pivot('key', 'variable', 'value')
reshaped
reshaped.reset_index()
pd.melt(df, id_vars=['key'], value_vars=['A', 'B'])
pd.melt(df, value_vars=['A', 'B', 'C'])
pd.melt(df, value_vars=['key', 'A', 'B'])
| ch08.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## <small>
# Copyright (c) 2017-21 <NAME>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
# </small>
#
#
#
# # Deep Learning: A Visual Approach
# ## by <NAME>, https://glassner.com
# ### Order: https://nostarch.com/deep-learning-visual-approach
# ### GitHub: https://github.com/blueberrymusic
# ------
#
# ### What's in this notebook
#
# This notebook is provided to help you work with Keras and TensorFlow. It accompanies the bonus chapters for my book. The code is in Python3, using the versions of libraries as of April 2021.
#
# Note that I've included the output cells in this saved notebook, but Jupyter doesn't save the variables or data that were used to generate them. To recreate any cell's output, evaluate all the cells from the start up to that cell. A convenient way to experiment is to first choose "Restart & Run All" from the Kernel menu, so that everything's been defined and is up to date. Then you can experiment using the variables, data, functions, and other stuff defined in this notebook.
# ## Bonus Chapter 2 - Notebook 3: Model creation summary
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
# Workaround for Keras issues on Mac computers (you can comment this
# out if you're not on a Mac, or not having problems)
import os
os.environ['KMP_DUPLICATE_LIB_OK']='True'
# +
# These variables are assigned during our pre-processing step.
# We'll just assign them directly here for this demonstration.
number_of_pixels = 28*28 # size of an MNIST image
number_of_classes = 10 # MNIST images are digits 0 to 9
def make_one_hidden_layer_model():
    # create an empty model
    model = Sequential()
    # add a fully-connected hidden layer with #nodes = #pixels
    model.add(Dense(number_of_pixels, activation='relu',
                    input_shape=[number_of_pixels]))
    # add an output layer with softmax activation
    model.add(Dense(number_of_classes, activation='softmax'))
    # compile the model to turn it from specification to code
    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    return model
model = make_one_hidden_layer_model() # make the model
# -
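# To inspect the resulting architecture (an optional check, not in the original notebook):
model.summary()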
| Notebooks/Bonus02-KerasPart1/Bonus02-Keras-3-Model-Creation-Summary.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Build Your First QA System
#
# <img style="float: right;" src="https://upload.wikimedia.org/wikipedia/en/d/d8/Game_of_Thrones_title_card.jpg">
#
# EXECUTABLE VERSION: [*colab*](https://colab.research.google.com/github/deepset-ai/haystack/blob/master/tutorials/Tutorial1_Basic_QA_Pipeline.ipynb)
#
# Question Answering can be used in a variety of use cases. A very common one: Using it to navigate through complex knowledge bases or long documents ("search setting").
#
# A "knowledge base" could for example be your website, an internal wiki or a collection of financial reports.
# In this tutorial we will work on a slightly different domain: "Game of Thrones".
#
# Let's see how we can use a bunch of Wikipedia articles to answer a variety of questions about the
# marvellous seven kingdoms...
#
# ### Prepare environment
#
# #### Colab: Enable the GPU runtime
# Make sure you enable the GPU runtime to experience decent speed in this tutorial.
# **Runtime -> Change Runtime type -> Hardware accelerator -> GPU**
#
# <img src="https://raw.githubusercontent.com/deepset-ai/haystack/master/docs/_src/img/colab_gpu_runtime.jpg">
# + pycharm={"name": "#%%\n"}
# Make sure you have a GPU running
# !nvidia-smi
# +
# Install the latest release of Haystack in your own environment
# #! pip install farm-haystack
# Install the latest master of Haystack
# !pip install git+https://github.com/deepset-ai/haystack.git
# !pip install urllib3==1.25.4
# !pip install torch==1.6.0+cu101 torchvision==0.6.1+cu101 -f https://download.pytorch.org/whl/torch_stable.html
# -
from haystack import Finder
from haystack.preprocessor.cleaning import clean_wiki_text
from haystack.preprocessor.utils import convert_files_to_dicts, fetch_archive_from_http
from haystack.reader.farm import FARMReader
from haystack.reader.transformers import TransformersReader
from haystack.utils import print_answers
# ## Document Store
#
# Haystack finds answers to queries within the documents stored in a `DocumentStore`. The current implementations of `DocumentStore` include `ElasticsearchDocumentStore`, `FAISSDocumentStore`, `SQLDocumentStore`, and `InMemoryDocumentStore`.
#
# **Here:** We recommended Elasticsearch as it comes preloaded with features like [full-text queries](https://www.elastic.co/guide/en/elasticsearch/reference/current/full-text-queries.html), [BM25 retrieval](https://www.elastic.co/elasticon/conf/2016/sf/improved-text-scoring-with-bm25), and [vector storage for text embeddings](https://www.elastic.co/guide/en/elasticsearch/reference/7.6/dense-vector.html).
#
# **Alternatives:** If you are unable to setup an Elasticsearch instance, then follow the [Tutorial 3](https://github.com/deepset-ai/haystack/blob/master/tutorials/Tutorial3_Basic_QA_Pipeline_without_Elasticsearch.ipynb) for using SQL/InMemory document stores.
#
# **Hint**: This tutorial creates a new document store instance with Wikipedia articles on Game of Thrones. However, you can configure Haystack to work with your existing document stores.
#
# ### Start an Elasticsearch server
# You can start Elasticsearch on your local machine using Docker. If Docker is not readily available in your environment (e.g., in Colab notebooks), then you can manually download and run Elasticsearch from source.
# +
# Recommended: Start Elasticsearch using Docker
# #! docker run -d -p 9200:9200 -e "discovery.type=single-node" elasticsearch:7.6.2
# +
# In Colab / No Docker environments: Start Elasticsearch from source
# ! wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.6.2-linux-x86_64.tar.gz -q
# ! tar -xzf elasticsearch-7.6.2-linux-x86_64.tar.gz
# ! chown -R daemon:daemon elasticsearch-7.6.2
import os
from subprocess import Popen, PIPE, STDOUT
es_server = Popen(['elasticsearch-7.6.2/bin/elasticsearch'],
stdout=PIPE, stderr=STDOUT,
preexec_fn=lambda: os.setuid(1) # as daemon
)
# wait until ES has started
# ! sleep 30
# + pycharm={"name": "#%%\n"}
# Connect to Elasticsearch
from haystack.document_store.elasticsearch import ElasticsearchDocumentStore
document_store = ElasticsearchDocumentStore(host="localhost", username="", password="", index="document")
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Preprocessing of documents
#
# Haystack provides a customizable pipeline for:
# - converting files into texts
# - cleaning texts
# - splitting texts
# - writing them to a Document Store
#
# In this tutorial, we download Wikipedia articles about Game of Thrones, apply a basic cleaning function, and index them in Elasticsearch.
# + pycharm={"name": "#%%\n"}
# Let's first fetch some documents that we want to query
# Here: 517 Wikipedia articles for Game of Thrones
doc_dir = "data/article_txt_got"
s3_url = "https://s3.eu-central-1.amazonaws.com/deepset.ai-farm-qa/datasets/documents/wiki_gameofthrones_txt.zip"
fetch_archive_from_http(url=s3_url, output_dir=doc_dir)
# Convert files to dicts
# You can optionally supply a cleaning function that is applied to each doc (e.g. to remove footers)
# It must take a str as input, and return a str.
dicts = convert_files_to_dicts(dir_path=doc_dir, clean_func=clean_wiki_text, split_paragraphs=True)
# We now have a list of dictionaries that we can write to our document store.
# If your texts come from a different source (e.g. a DB), you can of course skip convert_files_to_dicts() and create the dictionaries yourself.
# The default format here is:
# {
# 'text': "<DOCUMENT_TEXT_HERE>",
# 'meta': {'name': "<DOCUMENT_NAME_HERE>", ...}
#}
# (Optionally: you can also add more key-value-pairs here, that will be indexed as fields in Elasticsearch and
# can be accessed later for filtering or shown in the responses of the Finder)
# Let's have a look at the first 3 entries:
print(dicts[:3])
# Now, let's write the dicts containing documents to our DB.
document_store.write_documents(dicts)
# -
# ## Initialize Retriever, Reader, & Finder
#
# ### Retriever
#
# Retrievers help narrow down the scope for the Reader to smaller units of text where a given question could be answered.
# They use simple but fast algorithms.
#
# **Here:** We use Elasticsearch's default BM25 algorithm
#
# **Alternatives:**
#
# - Customize the `ElasticsearchRetriever`with custom queries (e.g. boosting) and filters
# - Use `TfidfRetriever` in combination with a SQL or InMemory Document store for simple prototyping and debugging
# - Use `EmbeddingRetriever` to find candidate documents based on the similarity of embeddings (e.g. created via Sentence-BERT)
# - Use `DensePassageRetriever` to use different embedding models for passage and query (see Tutorial 6)
from haystack.retriever.sparse import ElasticsearchRetriever
retriever = ElasticsearchRetriever(document_store=document_store)
# + pycharm={"is_executing": false, "name": "#%%\n"}
# Alternative: An in-memory TfidfRetriever based on Pandas dataframes for building quick-prototypes with SQLite document store.
# from haystack.retriever.sparse import TfidfRetriever
# retriever = TfidfRetriever(document_store=document_store)
# -
# ### Reader
#
# A Reader scans the texts returned by retrievers in detail and extracts the k best answers. They are based
# on powerful, but slower deep learning models.
#
# Haystack currently supports Readers based on the frameworks FARM and Transformers.
# With both you can either load a local model or one from Hugging Face's model hub (https://huggingface.co/models).
#
# **Here:** a medium sized RoBERTa QA model using a Reader based on FARM (https://huggingface.co/deepset/roberta-base-squad2)
#
# **Alternatives (Reader):** TransformersReader (leveraging the `pipeline` of the Transformers package)
#
# **Alternatives (Models):** e.g. "distilbert-base-uncased-distilled-squad" (fast) or "deepset/bert-large-uncased-whole-word-masking-squad2" (good accuracy)
#
# **Hint:** You can adjust the model to return "no answer possible" with the no_ans_boost. Higher values mean the model prefers "no answer possible"
#
# #### FARMReader
# + pycharm={"is_executing": false}
# Load a local model or any of the QA models on
# Hugging Face's model hub (https://huggingface.co/models)
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2", use_gpu=True)
# -
# #### TransformersReader
# +
# Alternative:
# reader = TransformersReader(model_name_or_path="distilbert-base-uncased-distilled-squad", tokenizer="distilbert-base-uncased", use_gpu=-1)
# -
# ### Finder
#
# The Finder sticks together reader and retriever in a pipeline to answer our actual questions.
# + pycharm={"is_executing": false}
finder = Finder(reader, retriever)
# -
# ## Voilà! Ask a question!
# + pycharm={"is_executing": false}
# You can configure how many candidates the reader and retriever shall return
# The higher top_k_retriever, the better (but also the slower) your answers.
prediction = finder.get_answers(question="Who is the father of <NAME>?", top_k_retriever=10, top_k_reader=5)
# +
# prediction = finder.get_answers(question="Who created the Dothraki vocabulary?", top_k_reader=5)
# prediction = finder.get_answers(question="Who is the sister of Sansa?", top_k_reader=5)
# + pycharm={"is_executing": false, "name": "#%%\n"}
print_answers(prediction, details="minimal")
| tutorials/Tutorial1_Basic_QA_Pipeline.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# ********************************
# Creating Colormaps in Matplotlib
# ********************************
#
# Matplotlib has a number of built-in colormaps accessible via
# `.matplotlib.cm.get_cmap`. There are also external libraries like
# palettable_ that have many extra colormaps.
#
#
# However, we often want to create or manipulate colormaps in Matplotlib.
# This can be done using the class `.ListedColormap` and a Nx4 numpy array of
# values between 0 and 1 to represent the RGBA values of the colormap. There
# is also a `.LinearSegmentedColormap` class that allows colormaps to be
# specified with a few anchor points defining segments, and linearly
# interpolating between the anchor points.
#
# Getting colormaps and accessing their values
# ============================================
#
# First, getting a named colormap, most of which are listed in
# :doc:`/tutorials/colors/colormaps` requires the use of
# `.matplotlib.cm.get_cmap`, which returns a
# :class:`.matplotlib.colors.ListedColormap` object. The second argument gives
# the size of the list of colors used to define the colormap, and below we
# use a modest value of 12 so there are not a lot of values to look at.
#
#
# +
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.colors import ListedColormap, LinearSegmentedColormap
from collections import OrderedDict
viridis = cm.get_cmap('viridis', 12)
print(viridis)
# -
# The object ``viridis`` is a callable, that when passed a float between
# 0 and 1 returns an RGBA value from the colormap:
#
#
print(viridis(0.56))
# The list of colors that comprise the colormap can be directly accessed using
# the ``colors`` property,
# or it can be accessed indirectly by calling ``viridis`` with an array
# of values matching the length of the colormap. Note that the returned list
# is in the form of an RGBA Nx4 array, where N is the length of the colormap.
#
#
print('viridis.colors', viridis.colors)
print('viridis(range(12))', viridis(range(12)))
print('viridis(np.linspace(0, 1, 12))', viridis(np.linspace(0, 1, 12)))
# The colormap is a lookup table, so "oversampling" the colormap returns
# nearest-neighbor interpolation (note the repeated colors in the list below)
#
#
print('viridis(np.linspace(0, 1, 15))', viridis(np.linspace(0, 1, 15)))
# Creating listed colormaps
# =========================
#
# This is essentially the inverse operation of the above, where we supply an
# Nx4 numpy array with all values between 0 and 1
# to `.ListedColormap` to make a new colormap. This means that
# any numpy operations that we can do on an Nx4 array make building
# new colormaps from existing colormaps quite straightforward.
#
# Suppose we want to make the first 25 entries of a 256-length "viridis"
# colormap pink for some reason:
#
#
# +
viridis = cm.get_cmap('viridis', 256)
newcolors = viridis(np.linspace(0, 1, 256))
pink = np.array([248/256, 24/256, 148/256, 1])
newcolors[:25, :] = pink
newcmp = ListedColormap(newcolors)
def plot_examples(cms):
    """
    helper function to plot two colormaps
    """
    np.random.seed(19680801)
    data = np.random.randn(30, 30)
    fig, axs = plt.subplots(1, 2, figsize=(6, 3), constrained_layout=True)
    for [ax, cmap] in zip(axs, cms):
        psm = ax.pcolormesh(data, cmap=cmap, rasterized=True, vmin=-4, vmax=4)
        fig.colorbar(psm, ax=ax)
    plt.show()
plot_examples([viridis, newcmp])
# -
# We can easily reduce the dynamic range of a colormap; here we choose the
# middle 0.5 of the colormap. However, we need to interpolate from a larger
# colormap, otherwise the new colormap will have repeated values.
#
#
viridisBig = cm.get_cmap('viridis', 512)
newcmp = ListedColormap(viridisBig(np.linspace(0.25, 0.75, 256)))
plot_examples([viridis, newcmp])
# and we can easily concatenate two colormaps:
#
#
# +
top = cm.get_cmap('Oranges_r', 128)
bottom = cm.get_cmap('Blues', 128)
newcolors = np.vstack((top(np.linspace(0, 1, 128)),
bottom(np.linspace(0, 1, 128))))
newcmp = ListedColormap(newcolors, name='OrangeBlue')
plot_examples([viridis, newcmp])
# -
# Of course we need not start from a named colormap, we just need to create
# the Nx4 array to pass to `.ListedColormap`. Here we create a
# brown colormap that goes to white....
#
#
N = 256
vals = np.ones((N, 4))
vals[:, 0] = np.linspace(90/256, 1, N)
vals[:, 1] = np.linspace(39/256, 1, N)
vals[:, 2] = np.linspace(41/256, 1, N)
newcmp = ListedColormap(vals)
plot_examples([viridis, newcmp])
# Creating linear segmented colormaps
# ===================================
#
# `.LinearSegmentedColormap` class specifies colormaps using anchor points
# between which RGB(A) values are interpolated.
#
# The format to specify these colormaps allows discontinuities at the anchor
# points. Each anchor point is specified as a row in a matrix of the
# form ``[x[i] yleft[i] yright[i]]``, where ``x[i]`` is the anchor, and
# ``yleft[i]`` and ``yright[i]`` are the values of the color on either
# side of the anchor point.
#
# If there are no discontinuities, then ``yleft[i]=yright[i]``:
#
#
# +
cdict = {'red': [[0.0, 0.0, 0.0],
[0.5, 1.0, 1.0],
[1.0, 1.0, 1.0]],
'green': [[0.0, 0.0, 0.0],
[0.25, 0.0, 0.0],
[0.75, 1.0, 1.0],
[1.0, 1.0, 1.0]],
'blue': [[0.0, 0.0, 0.0],
[0.5, 0.0, 0.0],
[1.0, 1.0, 1.0]]}
def plot_linearmap(cdict):
    newcmp = LinearSegmentedColormap('testCmap', segmentdata=cdict, N=256)
    rgba = newcmp(np.linspace(0, 1, 256))
    fig, ax = plt.subplots(figsize=(4, 3), constrained_layout=True)
    col = ['r', 'g', 'b']
    for xx in [0.25, 0.5, 0.75]:
        ax.axvline(xx, color='0.7', linestyle='--')
    for i in range(3):
        ax.plot(np.arange(256)/256, rgba[:, i], color=col[i])
    ax.set_xlabel('index')
    ax.set_ylabel('RGB')
    plt.show()
plot_linearmap(cdict)
# -
# In order to make a discontinuity at an anchor point, the third column is
# different than the second. The matrix for each of "red", "green", "blue",
# and optionally "alpha" is set up as::
#
# cdict['red'] = [...
# [x[i] yleft[i] yright[i]],
# [x[i+1] yleft[i+1] yright[i+1]],
# ...]
#
# and for values passed to the colormap between ``x[i]`` and ``x[i+1]``,
# the interpolation is between ``yright[i]`` and ``yleft[i+1]``.
#
# In the example below there is a discontinuity in red at 0.5. The
# interpolation between 0 and 0.5 goes from 0.3 to 1, and between 0.5 and 1
# it goes from 0.9 to 1. Note that red[0, 1] and red[2, 2] are both
# superfluous to the interpolation because red[0, 1] is the value to the
# left of 0, and red[2, 2] is the value to the right of 1.0.
#
#
cdict['red'] = [[0.0, 0.0, 0.3],
[0.5, 1.0, 0.9],
[1.0, 1.0, 1.0]]
plot_linearmap(cdict)
# ------------
#
# References
# """"""""""
#
# The use of the following functions, methods, classes and modules is shown
# in this example:
#
#
import matplotlib
matplotlib.axes.Axes.pcolormesh
matplotlib.figure.Figure.colorbar
matplotlib.colors
matplotlib.colors.LinearSegmentedColormap
matplotlib.colors.ListedColormap
matplotlib.cm
matplotlib.cm.get_cmap
| python/learn/matplotlib/tutorials_jupyter/colors/colormap-manipulation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="SsgOxKzsYVlf"
# # Imports
# + colab={"base_uri": "https://localhost:8080/", "height": 375} colab_type="code" id="wnlx5n_s9mQS" outputId="cf891fbd-95e9-47f3-99b9-ad691a35b6cf"
# !pip3 install --upgrade tensorflow-model-optimization
# !pip3 install mat4py
import numpy as np
import tensorflow as tf
import tensorflow_model_optimization as tfmot
import matplotlib.pyplot as plt
import json
import tempfile
from google.colab import drive
from mat4py import loadmat
print(tf.__version__)
drive.mount('/content/drive')
# %cd /content/drive/My Drive/CGM_prediction_data
# + [markdown] colab_type="text" id="LA5nFtn6Y_zj"
# # Data pre-processing
# + colab={} colab_type="code" id="CE1p5gsMZDRs"
def downscale(data, resolution):
# 10 min resolution.. (data.shape[0], 3, 1440) -> (data.shape[0], 10, 3, 144).. breaks one 3,1440 length trajectory into ten 3,144 length trajectories
# Use ~12 timesteps -> 2-5 timesteps (Use ~2 hours to predict 20-50 mins)
return np.mean(data.reshape(data.shape[0], data.shape[1], int(data.shape[2]/resolution), resolution), axis=3)
def process_data(aligned_data, time_horizon, ph):
# 10 min resolution.. breaks each (3,144) trajectory into (144-ph-time_horizon,3,time_horizon) samples
data = np.zeros((aligned_data.shape[0] * (aligned_data.shape[2]-ph-time_horizon), aligned_data.shape[1], time_horizon))
label = np.zeros((aligned_data.shape[0] * (aligned_data.shape[2]-ph-time_horizon), ph))
count = 0
for i in range(aligned_data.shape[0]): # for each sample
for j in range(aligned_data.shape[2]-ph-time_horizon): # TH length sliding window across trajectory
data[count] = aligned_data[i,:,j:j+time_horizon]
label[count] = aligned_data[i,0,j+time_horizon:j+time_horizon+ph]
count+=1
return data, label
def load_mpc(time_horizon, ph, resolution, batch): # int, int, int, bool
# Load train data
g = np.loadtxt('glucose_readings_train.csv', delimiter=',')
c = np.loadtxt('meals_carbs_train.csv', delimiter=',')
it = np.loadtxt('insulin_therapy_train.csv', delimiter=',')
# Load test data
g_ = np.loadtxt('glucose_readings_test.csv', delimiter=',')
c_ = np.loadtxt('meals_carbs_test.csv', delimiter=',')
it_ = np.loadtxt('insulin_therapy_test.csv', delimiter=',')
# Time align train & test data
aligned_train_data = downscale(np.array([(g[i,:], c[i,:], it[i,:]) for i in range(g.shape[0])]), resolution)
aligned_test_data = downscale(np.array([(g_[i,:], c_[i,:], it_[i,:]) for i in range(g_.shape[0])]), resolution)
print(aligned_train_data.shape)
# Break time aligned data into train & test samples
if batch:
train_data, train_label = process_data(aligned_train_data, time_horizon, ph)
test_data, test_label = process_data(aligned_test_data, time_horizon, ph)
return np.swapaxes(train_data,1,2), train_label, np.swapaxes(test_data,1,2), test_label
else:
return aligned_train_data, aligned_test_data
def load_uva(time_horizon, ph, resolution, batch):
data = loadmat('uva/sim_results.mat')
train_data = np.zeros((231,3,1440))
test_data = np.zeros((99,3,1440))
# Separate train and test sets.. last 3 records of each patient will be used for testing
count_train = 0
count_test = 0
for i in range(33):
for j in range(10):
if j>=7:
test_data[count_test,0,:] = np.asarray(data['data']['results']['sensor'][count_test+count_train]['signals']['values']).flatten()[:1440]
test_data[count_test,1,:] = np.asarray(data['data']['results']['CHO'][count_test+count_train]['signals']['values']).flatten()[:1440]
test_data[count_test,2,:] = np.asarray(data['data']['results']['BOLUS'][count_test+count_train]['signals']['values']).flatten()[:1440] + np.asarray(data['data']['results']['BASAL'][i]['signals']['values']).flatten()[:1440]
count_test+=1
else:
train_data[count_train,0,:] = np.asarray(data['data']['results']['sensor'][count_test+count_train]['signals']['values']).flatten()[:1440]
train_data[count_train,1,:] = np.asarray(data['data']['results']['CHO'][count_test+count_train]['signals']['values']).flatten()[:1440]
train_data[count_train,2,:] = np.asarray(data['data']['results']['BOLUS'][count_test+count_train]['signals']['values']).flatten()[:1440] + np.asarray(data['data']['results']['BASAL'][i]['signals']['values']).flatten()[:1440]
count_train+=1
train_data = downscale(train_data, resolution)
test_data = downscale(test_data, resolution)
if batch:
train_data, train_label = process_data(train_data, time_horizon, ph)
test_data, test_label = process_data(test_data, time_horizon, ph)
return np.swapaxes(train_data,1,2)*0.0555, train_label*0.0555, np.swapaxes(test_data,1,2)*0.0555, test_label*0.0555 # convert to mmol/L
else:
return train_data, test_data
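# The custom metrics `loss_metric1` ... `loss_metric6` used when compiling the models below are not
# defined in this checkpoint. A minimal sketch, assuming each metric is the mean squared error of the
# i-th predicted step (consistent with the later plots, which take `np.sqrt` of these values and label
# them RMSE):
# +
def make_step_metric(step):
    # Keras-compatible metric measuring MSE at a single prediction step (hypothetical helper)
    def loss_metric(y_true, y_pred):
        return tf.reduce_mean(tf.square(y_true[:, step] - y_pred[:, step]))
    loss_metric.__name__ = 'loss_metric{}'.format(step + 1)  # the function name becomes the history key
    return loss_metric
loss_metric1, loss_metric2, loss_metric3, loss_metric4, loss_metric5, loss_metric6 = [
    make_step_metric(i) for i in range(6)]
# -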
# + [markdown] colab_type="text" id="F1wVV8YaZXuz"
# # Define models
# + [markdown] colab_type="text" id="5kTAST-4Zgz0"
# ## LSTM
# + colab={} colab_type="code" id="HFvW4L49Zl4L"
def lstm(ph, training):
inp = tf.keras.Input(shape=(train_data.shape[1], train_data.shape[2]))
model = tf.keras.layers.LSTM(200, return_sequences=True)(inp)
model = tf.keras.layers.Dropout(rate=0.5)(model, training=training)
model = tf.keras.layers.LSTM(200, return_sequences=True)(model)
model = tf.keras.layers.Dropout(rate=0.5)(model, training=training)
model = tf.keras.layers.LSTM(200, return_sequences=True)(model)
model = tf.keras.layers.Dropout(rate=0.5)(model, training=training)
model = tf.keras.layers.Flatten()(model)
model = tf.keras.layers.Dense(ph, activation=None)(model)
x = tf.keras.Model(inputs=inp, outputs=model)
x.compile(optimizer='adam', loss='mean_squared_error', metrics=[tf.keras.metrics.RootMeanSquaredError(), loss_metric1, loss_metric2, loss_metric3, loss_metric4, loss_metric5, loss_metric6])
return x
# + [markdown] colab_type="text" id="iM7WqsDNZoWb"
# ## CRNN
# + colab={} colab_type="code" id="UBcyQxFRZriz"
def crnn(ph, training):
inp = tf.keras.Input(shape=(train_data.shape[1], train_data.shape[2]))
model = tf.keras.layers.Conv1D(256, 4, activation='relu', padding='same')(inp)
model = tf.keras.layers.MaxPool1D(pool_size=2, strides=1, padding='same')(model)
model = tf.keras.layers.Dropout(rate=0.5)(model, training=training)
model = tf.keras.layers.Conv1D(512, 4, activation='relu', padding='same')(model)
model = tf.keras.layers.MaxPool1D(pool_size=2, strides=1, padding='same')(model)
model = tf.keras.layers.Dropout(rate=0.5)(model, training=training)
model = tf.keras.layers.LSTM(200, return_sequences=True)(model)
model = tf.keras.layers.Dropout(rate=0.5)(model, training=training)
model = tf.keras.layers.Flatten()(model)
model = tf.keras.layers.Dense(ph, activation=None)(model)
x = tf.keras.Model(inputs=inp, outputs=model)
x.compile(optimizer='adam', loss='mean_squared_error', metrics=[tf.keras.metrics.RootMeanSquaredError(), loss_metric1, loss_metric2, loss_metric3, loss_metric4, loss_metric5, loss_metric6])
return x
# + [markdown] colab_type="text" id="CjtGofK5Zylr"
# ## Bidirectional LSTM
# + colab={} colab_type="code" id="Ky0UOecGZ0KE"
def bilstm(ph, training):
inp = tf.keras.Input(shape=(train_data.shape[1], train_data.shape[2]))
model = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(200, return_sequences=True))(inp)
model = tf.keras.layers.Dropout(rate=0.5)(model, training=training)
model = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(200, return_sequences=True))(model)
model = tf.keras.layers.Dropout(rate=0.5)(model, training=training)
model = tf.keras.layers.Flatten()(model)
model = tf.keras.layers.Dense(ph, activation=None)(model)
x = tf.keras.Model(inputs=inp, outputs=model)
x.compile(optimizer='adam', loss='mean_squared_error', metrics=[tf.keras.metrics.RootMeanSquaredError(), loss_metric1, loss_metric2, loss_metric3, loss_metric4, loss_metric5, loss_metric6])
return x
# + [markdown] colab_type="text" id="XHmdOt-tZ4XM"
# # Load MPC results
# + [markdown] colab_type="text" id="qsj6sCnNbFKj"
# ## Train loss
# + colab={"base_uri": "https://localhost:8080/", "height": 621} colab_type="code" id="vXjrETvybHST" outputId="f807bc30-9858-433d-f818-518a726ecd84"
t = np.arange(1,100)
lstm_val_loss_10 = json.load(open('saved_models/mpc_guided_lstm_history'))['loss_metric1'][1:]
lstm_val_loss_20 = json.load(open('saved_models/mpc_guided_lstm_history'))['loss_metric2'][1:]
lstm_val_loss_30 = json.load(open('saved_models/mpc_guided_lstm_history'))['loss_metric3'][1:]
lstm_val_loss_40 = json.load(open('saved_models/mpc_guided_lstm_history'))['loss_metric4'][1:]
lstm_val_loss_50 = json.load(open('saved_models/mpc_guided_lstm_history'))['loss_metric5'][1:]
lstm_val_loss_60 = json.load(open('saved_models/mpc_guided_lstm_history'))['loss_metric6'][1:]
crnn_val_loss_10 = json.load(open('saved_models/mpc_guided_crnn_history'))['loss_metric1'][1:]
crnn_val_loss_20 = json.load(open('saved_models/mpc_guided_crnn_history'))['loss_metric2'][1:]
crnn_val_loss_30 = json.load(open('saved_models/mpc_guided_crnn_history'))['loss_metric3'][1:]
crnn_val_loss_40 = json.load(open('saved_models/mpc_guided_crnn_history'))['loss_metric4'][1:]
crnn_val_loss_50 = json.load(open('saved_models/mpc_guided_crnn_history'))['loss_metric5'][1:]
crnn_val_loss_60 = json.load(open('saved_models/mpc_guided_crnn_history'))['loss_metric6'][1:]
bilstm_val_loss_10 = json.load(open('saved_models/mpc_guided_bilstm_history'))['loss_metric1'][1:]
bilstm_val_loss_20 = json.load(open('saved_models/mpc_guided_bilstm_history'))['loss_metric2'][1:]
bilstm_val_loss_30 = json.load(open('saved_models/mpc_guided_bilstm_history'))['loss_metric3'][1:]
bilstm_val_loss_40 = json.load(open('saved_models/mpc_guided_bilstm_history'))['loss_metric4'][1:]
bilstm_val_loss_50 = json.load(open('saved_models/mpc_guided_bilstm_history'))['loss_metric5'][1:]
bilstm_val_loss_60 = json.load(open('saved_models/mpc_guided_bilstm_history'))['loss_metric6'][1:]
fig, axes = plt.subplots(2,3)
axes[0,0].plot(t, np.sqrt(lstm_val_loss_10), label='LSTM')
axes[0,1].plot(t, np.sqrt(lstm_val_loss_20), label='LSTM')
axes[0,2].plot(t, np.sqrt(lstm_val_loss_30), label='LSTM')
axes[1,0].plot(t, np.sqrt(lstm_val_loss_40), label='LSTM')
axes[1,1].plot(t, np.sqrt(lstm_val_loss_50), label='LSTM')
axes[1,2].plot(t, np.sqrt(lstm_val_loss_60), label='LSTM')
axes[0,0].plot(t, np.sqrt(crnn_val_loss_10), label='CRNN')
axes[0,1].plot(t, np.sqrt(crnn_val_loss_20), label='CRNN')
axes[0,2].plot(t, np.sqrt(crnn_val_loss_30), label='CRNN')
axes[1,0].plot(t, np.sqrt(crnn_val_loss_40), label='CRNN')
axes[1,1].plot(t, np.sqrt(crnn_val_loss_50), label='CRNN')
axes[1,2].plot(t, np.sqrt(crnn_val_loss_60), label='CRNN')
axes[0,0].plot(t, np.sqrt(bilstm_val_loss_10), label='Bidirectional LSTM')
axes[0,1].plot(t, np.sqrt(bilstm_val_loss_20), label='Bidirectional LSTM')
axes[0,2].plot(t, np.sqrt(bilstm_val_loss_30), label='Bidirectional LSTM')
axes[1,0].plot(t, np.sqrt(bilstm_val_loss_40), label='Bidirectional LSTM')
axes[1,1].plot(t, np.sqrt(bilstm_val_loss_50), label='Bidirectional LSTM')
axes[1,2].plot(t, np.sqrt(bilstm_val_loss_60), label='Bidirectional LSTM')
axes[0,0].title.set_text('10 minute prediction train loss')
axes[0,1].title.set_text('20 minute prediction train loss')
axes[0,2].title.set_text('30 minute prediction train loss')
axes[1,0].title.set_text('40 minute prediction train loss')
axes[1,1].title.set_text('50 minute prediction train loss')
axes[1,2].title.set_text('60 minute prediction train loss')
axes[0,0].set_ylabel('RMSE (mmol/L)')
axes[1,0].set_ylabel('RMSE (mmol/L)')
axes[1,0].set_xlabel('Epochs')
axes[1,1].set_xlabel('Epochs')
axes[1,2].set_xlabel('Epochs')
axes[0,0].legend()
axes[0,1].legend()
axes[0,2].legend()
axes[1,0].legend()
axes[1,1].legend()
axes[1,2].legend()
plt.rcParams["figure.figsize"] = (20,10)
custom_ylim = (0,0.8)
plt.setp(axes, ylim=custom_ylim)
#plt.suptitle('Figure 8. MPC train losses')
plt.show()
#print(lstm_train_loss)
# + [markdown] colab_type="text" id="C02xNdRfbHvE"
# ## Validation loss
# + colab={"base_uri": "https://localhost:8080/", "height": 621} colab_type="code" id="i5qSXwU7aUis" outputId="73eb3fa3-3a85-46c9-ddba-47fa98ec6b14"
t = np.arange(1,100)
lstm_val_loss_10 = json.load(open('saved_models/mpc_guided_lstm_history'))['val_loss_metric1'][1:]
lstm_val_loss_20 = json.load(open('saved_models/mpc_guided_lstm_history'))['val_loss_metric2'][1:]
lstm_val_loss_30 = json.load(open('saved_models/mpc_guided_lstm_history'))['val_loss_metric3'][1:]
lstm_val_loss_40 = json.load(open('saved_models/mpc_guided_lstm_history'))['val_loss_metric4'][1:]
lstm_val_loss_50 = json.load(open('saved_models/mpc_guided_lstm_history'))['val_loss_metric5'][1:]
lstm_val_loss_60 = json.load(open('saved_models/mpc_guided_lstm_history'))['val_loss_metric6'][1:]
crnn_val_loss_10 = json.load(open('saved_models/mpc_guided_crnn_history'))['val_loss_metric1'][1:]
crnn_val_loss_20 = json.load(open('saved_models/mpc_guided_crnn_history'))['val_loss_metric2'][1:]
crnn_val_loss_30 = json.load(open('saved_models/mpc_guided_crnn_history'))['val_loss_metric3'][1:]
crnn_val_loss_40 = json.load(open('saved_models/mpc_guided_crnn_history'))['val_loss_metric4'][1:]
crnn_val_loss_50 = json.load(open('saved_models/mpc_guided_crnn_history'))['val_loss_metric5'][1:]
crnn_val_loss_60 = json.load(open('saved_models/mpc_guided_crnn_history'))['val_loss_metric6'][1:]
bilstm_val_loss_10 = json.load(open('saved_models/mpc_guided_bilstm_history'))['val_loss_metric1'][1:]
bilstm_val_loss_20 = json.load(open('saved_models/mpc_guided_bilstm_history'))['val_loss_metric2'][1:]
bilstm_val_loss_30 = json.load(open('saved_models/mpc_guided_bilstm_history'))['val_loss_metric3'][1:]
bilstm_val_loss_40 = json.load(open('saved_models/mpc_guided_bilstm_history'))['val_loss_metric4'][1:]
bilstm_val_loss_50 = json.load(open('saved_models/mpc_guided_bilstm_history'))['val_loss_metric5'][1:]
bilstm_val_loss_60 = json.load(open('saved_models/mpc_guided_bilstm_history'))['val_loss_metric6'][1:]
plt.rcParams["figure.figsize"] = (20,10)
fig, axes = plt.subplots(2,3)
axes[0,0].plot(t, np.sqrt(lstm_val_loss_10), label='LSTM')
axes[0,1].plot(t, np.sqrt(lstm_val_loss_20), label='LSTM')
axes[0,2].plot(t, np.sqrt(lstm_val_loss_30), label='LSTM')
axes[1,0].plot(t, np.sqrt(lstm_val_loss_40), label='LSTM')
axes[1,1].plot(t, np.sqrt(lstm_val_loss_50), label='LSTM')
axes[1,2].plot(t, np.sqrt(lstm_val_loss_60), label='LSTM')
axes[0,0].plot(t, np.sqrt(crnn_val_loss_10), label='CRNN')
axes[0,1].plot(t, np.sqrt(crnn_val_loss_20), label='CRNN')
axes[0,2].plot(t, np.sqrt(crnn_val_loss_30), label='CRNN')
axes[1,0].plot(t, np.sqrt(crnn_val_loss_40), label='CRNN')
axes[1,1].plot(t, np.sqrt(crnn_val_loss_50), label='CRNN')
axes[1,2].plot(t, np.sqrt(crnn_val_loss_60), label='CRNN')
axes[0,0].plot(t, np.sqrt(bilstm_val_loss_10), label='Bidirectional LSTM')
axes[0,1].plot(t, np.sqrt(bilstm_val_loss_20), label='Bidirectional LSTM')
axes[0,2].plot(t, np.sqrt(bilstm_val_loss_30), label='Bidirectional LSTM')
axes[1,0].plot(t, np.sqrt(bilstm_val_loss_40), label='Bidirectional LSTM')
axes[1,1].plot(t, np.sqrt(bilstm_val_loss_50), label='Bidirectional LSTM')
axes[1,2].plot(t, np.sqrt(bilstm_val_loss_60), label='Bidirectional LSTM')
axes[0,0].title.set_text('10 minute prediction validation loss')
axes[0,1].title.set_text('20 minute prediction validation loss')
axes[0,2].title.set_text('30 minute prediction validation loss')
axes[1,0].title.set_text('40 minute prediction validation loss')
axes[1,1].title.set_text('50 minute prediction validation loss')
axes[1,2].title.set_text('60 minute prediction validation loss')
axes[0,0].set_ylabel('RMSE (mmol/L)')
axes[1,0].set_ylabel('RMSE (mmol/L)')
axes[1,0].set_xlabel('Epochs')
axes[1,1].set_xlabel('Epochs')
axes[1,2].set_xlabel('Epochs')
axes[0,0].legend()
axes[0,1].legend()
axes[0,2].legend()
axes[1,0].legend()
axes[1,1].legend()
axes[1,2].legend()
custom_ylim = (0,0.8)
plt.setp(axes, ylim=custom_ylim)
#fig.suptitle('Figure 9. MPC validation losses')
plt.show()
# + [markdown] colab_type="text" id="ojpcnE0Ganms"
# # Load UVA results
# + [markdown] colab_type="text" id="bhzuQKKbbrWz"
# ## Train loss
# + colab={"base_uri": "https://localhost:8080/", "height": 621} colab_type="code" id="3Fr8M5axbtZj" outputId="0304758b-d77c-4a25-b3b2-b5d03b5dfcfb"
t = np.arange(1,100)
lstm_val_loss_10 = json.load(open('saved_models/uva_padova_lstm_history'))['loss_metric1'][1:]
lstm_val_loss_20 = json.load(open('saved_models/uva_padova_lstm_history'))['loss_metric2'][1:]
lstm_val_loss_30 = json.load(open('saved_models/uva_padova_lstm_history'))['loss_metric3'][1:]
lstm_val_loss_40 = json.load(open('saved_models/uva_padova_lstm_history'))['loss_metric4'][1:]
lstm_val_loss_50 = json.load(open('saved_models/uva_padova_lstm_history'))['loss_metric5'][1:]
lstm_val_loss_60 = json.load(open('saved_models/uva_padova_lstm_history'))['loss_metric6'][1:]
crnn_val_loss_10 = json.load(open('saved_models/uva_padova_crnn_history'))['loss_metric1'][1:]
crnn_val_loss_20 = json.load(open('saved_models/uva_padova_crnn_history'))['loss_metric2'][1:]
crnn_val_loss_30 = json.load(open('saved_models/uva_padova_crnn_history'))['loss_metric3'][1:]
crnn_val_loss_40 = json.load(open('saved_models/uva_padova_crnn_history'))['loss_metric4'][1:]
crnn_val_loss_50 = json.load(open('saved_models/uva_padova_crnn_history'))['loss_metric5'][1:]
crnn_val_loss_60 = json.load(open('saved_models/uva_padova_crnn_history'))['loss_metric6'][1:]
bilstm_val_loss_10 = json.load(open('saved_models/uva_padova_bilstm_history'))['loss_metric1'][1:]
bilstm_val_loss_20 = json.load(open('saved_models/uva_padova_bilstm_history'))['loss_metric2'][1:]
bilstm_val_loss_30 = json.load(open('saved_models/uva_padova_bilstm_history'))['loss_metric3'][1:]
bilstm_val_loss_40 = json.load(open('saved_models/uva_padova_bilstm_history'))['loss_metric4'][1:]
bilstm_val_loss_50 = json.load(open('saved_models/uva_padova_bilstm_history'))['loss_metric5'][1:]
bilstm_val_loss_60 = json.load(open('saved_models/uva_padova_bilstm_history'))['loss_metric6'][1:]
fig, axes = plt.subplots(2,3)
plt.rcParams["figure.figsize"] = (20,10)
axes[0,0].plot(t, np.sqrt(lstm_val_loss_10), label='LSTM')
axes[0,1].plot(t, np.sqrt(lstm_val_loss_20), label='LSTM')
axes[0,2].plot(t, np.sqrt(lstm_val_loss_30), label='LSTM')
axes[1,0].plot(t, np.sqrt(lstm_val_loss_40), label='LSTM')
axes[1,1].plot(t, np.sqrt(lstm_val_loss_50), label='LSTM')
axes[1,2].plot(t, np.sqrt(lstm_val_loss_60), label='LSTM')
axes[0,0].plot(t, np.sqrt(crnn_val_loss_10), label='CRNN')
axes[0,1].plot(t, np.sqrt(crnn_val_loss_20), label='CRNN')
axes[0,2].plot(t, np.sqrt(crnn_val_loss_30), label='CRNN')
axes[1,0].plot(t, np.sqrt(crnn_val_loss_40), label='CRNN')
axes[1,1].plot(t, np.sqrt(crnn_val_loss_50), label='CRNN')
axes[1,2].plot(t, np.sqrt(crnn_val_loss_60), label='CRNN')
axes[0,0].plot(t, np.sqrt(bilstm_val_loss_10), label='Bidirectional LSTM')
axes[0,1].plot(t, np.sqrt(bilstm_val_loss_20), label='Bidirectional LSTM')
axes[0,2].plot(t, np.sqrt(bilstm_val_loss_30), label='Bidirectional LSTM')
axes[1,0].plot(t, np.sqrt(bilstm_val_loss_40), label='Bidirectional LSTM')
axes[1,1].plot(t, np.sqrt(bilstm_val_loss_50), label='Bidirectional LSTM')
axes[1,2].plot(t, np.sqrt(bilstm_val_loss_60), label='Bidirectional LSTM')
axes[0,0].title.set_text('10 minute prediction train loss')
axes[0,1].title.set_text('20 minute prediction train loss')
axes[0,2].title.set_text('30 minute prediction train loss')
axes[1,0].title.set_text('40 minute prediction train loss')
axes[1,1].title.set_text('50 minute prediction train loss')
axes[1,2].title.set_text('60 minute prediction train loss')
axes[0,0].set_ylabel('RMSE (mmol/L)')
axes[1,0].set_ylabel('RMSE (mmol/L)')
axes[1,0].set_xlabel('Epochs')
axes[1,1].set_xlabel('Epochs')
axes[1,2].set_xlabel('Epochs')
axes[0,0].legend()
axes[0,1].legend()
axes[0,2].legend()
axes[1,0].legend()
axes[1,1].legend()
axes[1,2].legend()
#plt.rcParams["figure.figsize"] = (20,10)
custom_ylim = (0,1.2)
plt.setp(axes, ylim=custom_ylim)
#fig.suptitle('Figure 10. UVA Padova train losses')
plt.show()
# + [markdown] colab_type="text" id="hKXfj-9NbtuM"
# ## Validation loss
# + colab={"base_uri": "https://localhost:8080/", "height": 621} colab_type="code" id="H5GspnPzaqL8" outputId="4c9b231c-cc39-4989-ee73-d820986ba768"
t = np.arange(1,100)
lstm_val_loss_10 = json.load(open('saved_models/uva_padova_lstm_history'))['val_loss_metric1'][1:]
lstm_val_loss_20 = json.load(open('saved_models/uva_padova_lstm_history'))['val_loss_metric2'][1:]
lstm_val_loss_30 = json.load(open('saved_models/uva_padova_lstm_history'))['val_loss_metric3'][1:]
lstm_val_loss_40 = json.load(open('saved_models/uva_padova_lstm_history'))['val_loss_metric4'][1:]
lstm_val_loss_50 = json.load(open('saved_models/uva_padova_lstm_history'))['val_loss_metric5'][1:]
lstm_val_loss_60 = json.load(open('saved_models/uva_padova_lstm_history'))['val_loss_metric6'][1:]
crnn_val_loss_10 = json.load(open('saved_models/uva_padova_crnn_history'))['val_loss_metric1'][1:]
crnn_val_loss_20 = json.load(open('saved_models/uva_padova_crnn_history'))['val_loss_metric2'][1:]
crnn_val_loss_30 = json.load(open('saved_models/uva_padova_crnn_history'))['val_loss_metric3'][1:]
crnn_val_loss_40 = json.load(open('saved_models/uva_padova_crnn_history'))['val_loss_metric4'][1:]
crnn_val_loss_50 = json.load(open('saved_models/uva_padova_crnn_history'))['val_loss_metric5'][1:]
crnn_val_loss_60 = json.load(open('saved_models/uva_padova_crnn_history'))['val_loss_metric6'][1:]
bilstm_val_loss_10 = json.load(open('saved_models/uva_padova_bilstm_history'))['val_loss_metric1'][1:]
bilstm_val_loss_20 = json.load(open('saved_models/uva_padova_bilstm_history'))['val_loss_metric2'][1:]
bilstm_val_loss_30 = json.load(open('saved_models/uva_padova_bilstm_history'))['val_loss_metric3'][1:]
bilstm_val_loss_40 = json.load(open('saved_models/uva_padova_bilstm_history'))['val_loss_metric4'][1:]
bilstm_val_loss_50 = json.load(open('saved_models/uva_padova_bilstm_history'))['val_loss_metric5'][1:]
bilstm_val_loss_60 = json.load(open('saved_models/uva_padova_bilstm_history'))['val_loss_metric6'][1:]
fig, axes = plt.subplots(2,3)
plt.rcParams["figure.figsize"] = (20,10)
axes[0,0].plot(t, np.sqrt(lstm_val_loss_10), label='LSTM')
axes[0,1].plot(t, np.sqrt(lstm_val_loss_20), label='LSTM')
axes[0,2].plot(t, np.sqrt(lstm_val_loss_30), label='LSTM')
axes[1,0].plot(t, np.sqrt(lstm_val_loss_40), label='LSTM')
axes[1,1].plot(t, np.sqrt(lstm_val_loss_50), label='LSTM')
axes[1,2].plot(t, np.sqrt(lstm_val_loss_60), label='LSTM')
axes[0,0].plot(t, np.sqrt(crnn_val_loss_10), label='CRNN')
axes[0,1].plot(t, np.sqrt(crnn_val_loss_20), label='CRNN')
axes[0,2].plot(t, np.sqrt(crnn_val_loss_30), label='CRNN')
axes[1,0].plot(t, np.sqrt(crnn_val_loss_40), label='CRNN')
axes[1,1].plot(t, np.sqrt(crnn_val_loss_50), label='CRNN')
axes[1,2].plot(t, np.sqrt(crnn_val_loss_60), label='CRNN')
axes[0,0].plot(t, np.sqrt(bilstm_val_loss_10), label='Bidirectional LSTM')
axes[0,1].plot(t, np.sqrt(bilstm_val_loss_20), label='Bidirectional LSTM')
axes[0,2].plot(t, np.sqrt(bilstm_val_loss_30), label='Bidirectional LSTM')
axes[1,0].plot(t, np.sqrt(bilstm_val_loss_40), label='Bidirectional LSTM')
axes[1,1].plot(t, np.sqrt(bilstm_val_loss_50), label='Bidirectional LSTM')
axes[1,2].plot(t, np.sqrt(bilstm_val_loss_60), label='Bidirectional LSTM')
axes[0,0].title.set_text('10 minute prediction validation loss')
axes[0,1].title.set_text('20 minute prediction validation loss')
axes[0,2].title.set_text('30 minute prediction validation loss')
axes[1,0].title.set_text('40 minute prediction validation loss')
axes[1,1].title.set_text('50 minute prediction validation loss')
axes[1,2].title.set_text('60 minute prediction validation loss')
axes[0,0].set_ylabel('RMSE (mmol/L)')
axes[1,0].set_ylabel('RMSE (mmol/L)')
axes[1,0].set_xlabel('Epochs')
axes[1,1].set_xlabel('Epochs')
axes[1,2].set_xlabel('Epochs')
axes[0,0].legend()
axes[0,1].legend()
axes[0,2].legend()
axes[1,0].legend()
axes[1,1].legend()
axes[1,2].legend()
#plt.rcParams["figure.figsize"] = (20,10)
custom_ylim = (0,1.2)
plt.setp(axes, ylim=custom_ylim)
#fig.suptitle('Figure 11. UVA Padova validation losses')
plt.show()
# + [markdown] colab_type="text" id="ZSxsRlXmc7j7"
# # Total insulin
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="fIkbJT7eh26T" outputId="d5bb0d9d-d142-420b-d8d8-756e7b044e2e"
uva_train, _ = load_uva(12,6,10,False)
mpc_train, _ = load_mpc(12,6,10,False)
plt.plot(np.arange(144)*10, uva_train[0,2,:])
plt.xlabel('Minutes')
plt.ylabel('Units')
#plt.title('Figure 13. UVA Padova total insulin')
plt.show()
plt.plot(np.arange(144)*10, mpc_train[0,2,:])
plt.xlabel('Minutes')
plt.ylabel('Units')
#plt.title('Figure 12. MPC total insulin')
plt.show()
| notebooks/.ipynb_checkpoints/accuracy-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_tensorflow_p36
# language: python
# name: python3
# ---
# # Train and Host a Keras Model with Pipe Mode and Horovod on Amazon SageMaker
#
# Amazon SageMaker is a fully-managed service that provides developers and data scientists with the ability to build, train, and deploy machine learning (ML) models quickly. Amazon SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high-quality models. The SageMaker Python SDK makes it easy to train and deploy models in Amazon SageMaker with several different machine learning and deep learning frameworks, including TensorFlow and Keras.
#
# In this notebook, we train and host a [Keras Sequential model](https://keras.io/getting-started/sequential-model-guide) on SageMaker. The model used for this notebook is a simple deep convolutional neural network (CNN) that was extracted from [the Keras examples](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py).
#
# For training our model, we also demonstrate distributed training with [Horovod](https://horovod.readthedocs.io) and Pipe Mode. Amazon SageMaker's Pipe Mode streams your dataset directly to your training instances instead of being downloaded first, which translates to training jobs that start sooner, finish quicker, and need less disk space.
# ## Setup
#
# First, we define a few variables that are needed later in the example.
# +
import sagemaker
from sagemaker import get_execution_role
sagemaker_session = sagemaker.Session()
# role = get_execution_role()
role = "arn:aws:iam::941656036254:role/service-role/AmazonSageMaker-ExecutionRole-20210904T193230"
# -
# ## The CIFAR-10 dataset
#
# The [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html) is one of the most popular machine learning datasets. It consists of 60,000 32x32 images belonging to 10 different classes (6,000 images per class). Here are the classes in the dataset, as well as 10 random images from each:
#
# 
# ### Prepare the dataset for training
#
# To use the CIFAR-10 dataset, we first download it and convert it to TFRecords. This step takes around 5 minutes.
# !python generate_cifar10_tfrecords.py --data-dir ./data
# Next, we upload the data to Amazon S3:
# +
from sagemaker.s3 import S3Uploader
bucket = sagemaker_session.default_bucket()
dataset_uri = S3Uploader.upload("data", "s3://{}/tf-cifar10-example/data".format(bucket))
display(dataset_uri)
# -
# ## Train the model
#
# In this tutorial, we train a deep CNN to learn a classification task with the CIFAR-10 dataset. We compare three different training jobs: a baseline training job, training with Pipe Mode, and distributed training with Horovod.
#
# ### Run a baseline training job on SageMaker
#
# The SageMaker Python SDK's `sagemaker.tensorflow.TensorFlow` estimator class makes it easy for us to interact with SageMaker. We create one for each of the different training jobs we run in this example. A couple parameters worth noting:
#
# * `entry_point`: our training script (adapted from [this Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py)).
# * `train_instance_count`: the number of training instances. Here, we set it to 1 for our baseline training job.
#
# As we run each of our training jobs, we change different parameters to configure our different training jobs.
#
# For more details about the TensorFlow estimator class, see the [API documentation](https://sagemaker.readthedocs.io/en/stable/sagemaker.tensorflow.html).
# ### Verify the training code
#
# Before running the baseline training job, we first use [the SageMaker Python SDK's Local Mode feature](https://sagemaker.readthedocs.io/en/stable/overview.html#local-mode) to check that our code works with SageMaker's TensorFlow environment. Local Mode downloads the [prebuilt Docker image for TensorFlow](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/deep-learning-containers-images.html) and runs a Docker container locally for a training job. In other words, it simulates the SageMaker environment for a quicker development cycle, so we use it here just to test out our code.
#
# We create a TensorFlow estimator, and specify the `instance_type` to be `'local'` or `'local_gpu'`, depending on our local instance type. This tells the estimator to run our training job locally (as opposed to on SageMaker). We also have our training code run for only one epoch because our intent here is to verify the code, not train an accurate model.
# +
import subprocess
from sagemaker.tensorflow import TensorFlow
instance_type = "local"
if subprocess.call("nvidia-smi") == 0:
# Set instance type to GPU if one is present
instance_type = "local_gpu"
local_hyperparameters = {"epochs": 1, "batch-size": 64}
estimator = TensorFlow(
entry_point="cifar10_keras_main.py",
source_dir="source_dir",
role=role,
framework_version="1.15.2",
py_version="py3",
hyperparameters=local_hyperparameters,
train_instance_count=1,
train_instance_type=instance_type,
)
# -
# Once we have our estimator, we call `fit()` to start the training job and pass the inputs that we downloaded earlier. We pass the inputs as a dictionary to define different data channels for training.
# +
import os
data_path = os.path.join(os.getcwd(), "data")
local_inputs = {
"train": "file://{}/train".format(data_path),
"validation": "file://{}/validation".format(data_path),
"eval": "file://{}/eval".format(data_path),
}
estimator.fit(local_inputs)
# -
# ### Run a baseline training job on SageMaker
#
# Now we run training jobs on SageMaker, starting with our baseline training job.
# ### Configure metrics
#
# In addition to running the training job, Amazon SageMaker can retrieve training metrics directly from the logs and send them to CloudWatch metrics. Here, we define metrics we would like to observe:
metric_definitions = [
{"Name": "train:loss", "Regex": ".*loss: ([0-9\\.]+) - accuracy: [0-9\\.]+.*"},
{"Name": "train:accuracy", "Regex": ".*loss: [0-9\\.]+ - accuracy: ([0-9\\.]+).*"},
{
"Name": "validation:accuracy",
"Regex": ".*step - loss: [0-9\\.]+ - accuracy: [0-9\\.]+ - val_loss: [0-9\\.]+ - val_accuracy: ([0-9\\.]+).*",
},
{
"Name": "validation:loss",
"Regex": ".*step - loss: [0-9\\.]+ - accuracy: [0-9\\.]+ - val_loss: ([0-9\\.]+) - val_accuracy: [0-9\\.]+.*",
},
{
"Name": "sec/steps",
"Regex": ".* - \d+s (\d+)[mu]s/step - loss: [0-9\\.]+ - accuracy: [0-9\\.]+ - val_loss: [0-9\\.]+ - val_accuracy: [0-9\\.]+",
},
]
# Once again, we create a TensorFlow estimator, with a couple of key modifications from last time:
#
# * `train_instance_type`: the instance type for training. We set this to `ml.p2.xlarge` because we are training on SageMaker now. For a list of available instance types, see [the AWS documentation](https://aws.amazon.com/sagemaker/pricing/instance-types).
# * `metric_definitions`: the metrics (defined above) that we want sent to CloudWatch.
# +
from sagemaker.tensorflow import TensorFlow
hyperparameters = {"epochs": 10, "batch-size": 256}
tags = [{"Key": "Project", "Value": "cifar10"}, {"Key": "TensorBoard", "Value": "file"}]
estimator = TensorFlow(
entry_point="cifar10_keras_main.py",
source_dir="source_dir",
metric_definitions=metric_definitions,
hyperparameters=hyperparameters,
role=role,
framework_version="1.15.2",
py_version="py3",
train_instance_count=1,
train_instance_type="ml.p2.xlarge",
base_job_name="cifar10-tf",
tags=tags,
)
# -
# Like before, we call `fit()` to start the SageMaker training job and pass the inputs in a dictionary to define different data channels for training. This time, we use the S3 URI from uploading our data.
# +
inputs = {
"train": "{}/train".format(dataset_uri),
"validation": "{}/validation".format(dataset_uri),
"eval": "{}/eval".format(dataset_uri),
}
estimator.fit(inputs)
# -
# ### View the job training metrics
#
# We can now view the metrics from the training job directly in the SageMaker console.
#
# Log into the [SageMaker console](https://console.aws.amazon.com/sagemaker/home), choose the latest training job, and scroll down to the monitor section. Alternatively, the code below uses the region and training job name to generate a URL to CloudWatch metrics.
#
# Using CloudWatch metrics, you can change the period and configure the statistics.
# +
from urllib import parse
from IPython.core.display import Markdown
region = sagemaker_session.boto_region_name
cw_url = parse.urlunparse(
(
"https",
"{}.console.aws.amazon.com".format(region),
"/cloudwatch/home",
"",
"region={}".format(region),
"metricsV2:namespace=/aws/sagemaker/TrainingJobs;dimensions=TrainingJobName;search={}".format(
estimator.latest_training_job.name
),
)
)
display(
Markdown(
"CloudWatch metrics: [link]({}). After you choose a metric, "
"change the period to 1 Minute (Graphed Metrics -> Period).".format(cw_url)
)
)
# -
# ### Train on SageMaker with Pipe Mode
#
# Here we train our model using Pipe Mode. With Pipe Mode, SageMaker uses [Linux named pipes](https://www.linuxjournal.com/article/2156) to stream the training data directly from S3 instead of downloading the data first.
#
# In our script, we enable Pipe Mode using the following code:
#
# ```python
# from sagemaker_tensorflow import PipeModeDataset
#
# dataset = PipeModeDataset(channel=channel_name, record_format='TFRecord')
# ```
#
# When we create our estimator, the only difference from before is that we also specify `input_mode='Pipe'`:
pipe_mode_estimator = TensorFlow(
entry_point="cifar10_keras_main.py",
source_dir="source_dir",
metric_definitions=metric_definitions,
hyperparameters=hyperparameters,
role=role,
framework_version="1.15.2",
py_version="py3",
train_instance_count=1,
train_instance_type="ml.p2.xlarge",
input_mode="Pipe",
base_job_name="cifar10-tf-pipe",
tags=tags,
)
# In this example, we set ```wait=False```. If you want to see the output logs, change this to ```wait=True```.
pipe_mode_estimator.fit(inputs, wait=False)
# ### Distributed training with Horovod
#
# [Horovod](https://horovod.readthedocs.io) is a distributed training framework based on MPI. To use Horovod, we make the following changes to our training script:
#
# 1. Enable Horovod:
#
# ```python
# import horovod.keras as hvd
#
# hvd.init()
# config = tf.ConfigProto()
# config.gpu_options.allow_growth = True
# config.gpu_options.visible_device_list = str(hvd.local_rank())
# K.set_session(tf.Session(config=config))
# ```
#
# 2. Add these callbacks:
#
# ```python
# hvd.callbacks.BroadcastGlobalVariablesCallback(0)
# hvd.callbacks.MetricAverageCallback()
# ```
#
# 3. Configure the optimizer:
#
# ```python
# opt = Adam(lr=learning_rate * size, decay=weight_decay)
# opt = hvd.DistributedOptimizer(opt)
# ```
#
# 4. Choose to save checkpoints and send TensorBoard logs only from the master node:
#
# ```python
# if hvd.rank() == 0:
# save_model(model, args.model_output_dir)
# ```
# To configure the training job, we specify the following for the distribution:
distribution = {
"mpi": {
"enabled": True,
"processes_per_host": 1, # Number of Horovod processes per host
}
}
# This is then passed to our estimator, in addition to setting `train_instance_count` to 2:
dist_estimator = TensorFlow(
entry_point="cifar10_keras_main.py",
source_dir="source_dir",
metric_definitions=metric_definitions,
hyperparameters=hyperparameters,
distributions=distribution,
role=role,
framework_version="1.15.2",
py_version="py3",
train_instance_count=2,
train_instance_type="ml.p3.2xlarge",
base_job_name="cifar10-tf-dist",
tags=tags,
)
# Like before, we call `fit()` on our estimator. If you want to see the training job logs in the notebook output, set `wait=True`.
dist_estimator.fit(inputs, wait=False)
# ### Compare the training jobs with TensorBoard
#
# Using the visualization tool [TensorBoard](https://www.tensorflow.org/tensorboard), we can compare our training jobs.
#
# In a local setting, install TensorBoard with `pip install tensorboard`. Then run the command generated by the following code:
# !python generate_tensorboard_command.py
# After running that command, we can access TensorBoard locally at http://localhost:6006.
#
# Based on the TensorBoard metrics, we can see that:
# 1. All jobs run for 10 epochs (0 - 9).
# 1. Both File Mode and Pipe Mode run for ~1 minute - Pipe Mode doesn't affect training performance.
# 1. Distributed training runs for only 45 seconds.
# 1. All of the training jobs resulted in similar validation accuracy.
#
# This example uses a relatively small dataset (179 MB). For larger datasets, Pipe Mode can significantly reduce training time because it does not copy the entire dataset into local memory.
# ## Deploy the trained model
#
# After we train our model, we can deploy it to a SageMaker Endpoint, which serves prediction requests in real-time. To do so, we simply call `deploy()` on our estimator, passing in the desired number of instances and instance type for the endpoint.
#
# Because we're using TensorFlow Serving for deployment, our training script saves the model in TensorFlow's SavedModel format. For more details, see [this blog post on deploying Keras and TF models in SageMaker](https://aws.amazon.com/blogs/machine-learning/deploy-trained-keras-or-tensorflow-models-using-amazon-sagemaker).
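#
# The export itself happens inside the training script (`cifar10_keras_main.py`), which is not shown in
# this notebook. As a rough, hedged sketch only (the export path and signature names below are
# illustrative, and the exact call depends on the TensorFlow 1.x version), saving a Keras model in
# SavedModel format for TensorFlow Serving can look like this:
#
# ```python
# import tensorflow as tf
# from tensorflow.keras import backend as K
#
# export_dir = "/opt/ml/model/export/Servo/1"  # TF Serving expects a numeric version subdirectory
# tf.saved_model.simple_save(
#     K.get_session(),
#     export_dir,
#     inputs={"inputs": model.input},
#     outputs={output.name: output for output in model.outputs},
# )
# ```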
# + pycharm={"name": "#%%\n"}
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge")
# -
# ### Invoke the endpoint
#
# To verify that the endpoint is in service, we generate some random data in the correct shape and get a prediction.
# +
import numpy as np
data = np.random.randn(1, 32, 32, 3)
print("Predicted class: {}".format(np.argmax(predictor.predict(data)["predictions"])))
# -
# Now let's use the test dataset for predictions.
# +
from keras.datasets import cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
# -
# With the data loaded, we can use it for predictions:
# +
from keras.preprocessing.image import ImageDataGenerator
def predict(data):
predictions = predictor.predict(data)["predictions"]
return predictions
predicted = []
actual = []
batches = 0
batch_size = 128
datagen = ImageDataGenerator()
for data in datagen.flow(x_test, y_test, batch_size=batch_size):
for i, prediction in enumerate(predict(data[0])):
predicted.append(np.argmax(prediction))
actual.append(data[1][i][0])
batches += 1
if batches >= len(x_test) / batch_size:
break
# -
# With the predictions, we calculate our model accuracy and create a confusion matrix.
# +
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(y_pred=predicted, y_true=actual)
display("Average accuracy: {}%".format(round(accuracy * 100, 2)))
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sn
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_pred=predicted, y_true=actual)
cm = cm.astype("float") / cm.sum(axis=1)[:, np.newaxis]
sn.set(rc={"figure.figsize": (11.7, 8.27)})
sn.set(font_scale=1.4) # for label size
sn.heatmap(cm, annot=True, annot_kws={"size": 10}) # font size
# -
# Aided by the colors of the heatmap, we can use this confusion matrix to understand how well the model performed for each label.
# ## Cleanup
#
# To avoid incurring extra charges to your AWS account, let's delete the endpoint we created:
predictor.delete_endpoint()
| chapter4/3_Streaming_S3_Data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
os.chdir('../../')
import DeepPurpose.DTI as models
from DeepPurpose.utils import *
from DeepPurpose.dataset import *
import Processing.dataset_filter as processors
# make sure the output directory (including the r2 subfolder used for saving below) exists
if not os.path.exists('./result/DeepDTA/r2'):
    os.makedirs('./result/DeepDTA/r2')
# + pycharm={"name": "#%%\n"}
df = pd.read_csv('./data/r2/title_r2_1.25k.csv', sep = ',', error_bad_lines=False)
X_drug, X_target, y = df['Drug'].values, df['Target'].values, df['Label'].values
drug_encoding = 'CNN'
target_encoding = 'CNN'
train, val, test = data_process(X_drug, X_target, y,
drug_encoding, target_encoding,
split_method='random',frac=[0.7,0.1,0.2])
# use the parameter settings provided in the paper: https://arxiv.org/abs/1801.10193
config = generate_config(drug_encoding = drug_encoding,
target_encoding = target_encoding,
cls_hidden_dims = [1024,1024,512],
train_epoch = 100,
LR = 0.001,
batch_size = 256,
cnn_drug_filters = [32,64,96],
cnn_target_filters = [32,64,96],
cnn_drug_kernels = [4,6,8],
cnn_target_kernels = [4,8,12]
)
# + pycharm={"name": "#%%\n"}
model = models.model_initialize(**config)
model.train(train, val, test)
# + pycharm={"name": "#%%\n"}
model.save_model('./result/DeepDTA/r2/model_r2_1.25k_100epochs')
# -
| Processing/r2/DeepDTA-kdki-r2-1.25k.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + raw_mimetype="text/restructuredtext" active=""
# .. _nb_unsga3:
# -
# ## U-NSGA-III
#
#
# The algorithm is implemented based on <cite data-cite="unsga3"></cite>. NSGA-III selects parents randomly for mating. It has been shown that tournament selection performs better than random selection. The *U* stands for *unified* and increases the performance of NSGA-III by introducing tournament pressure.
#
# The mating selections works as follows:
#
# <div style="display: block;margin-left: auto;margin-right: auto;width: 45%;">
# 
# </div>
#
# ### Example
# + code="algorithms/usage_unsga3.py" section="unsga3"
import numpy as np
from pymoo.algorithms.nsga3 import NSGA3
from pymoo.algorithms.unsga3 import UNSGA3
from pymoo.factory import get_problem
from pymoo.optimize import minimize
problem = get_problem("ackley", n_var=30)
# create the reference directions to be used for the optimization - just a single one here
ref_dirs = np.array([[1.0]])
# create the algorithm object
algorithm = UNSGA3(ref_dirs, pop_size=100)
# execute the optimization
res = minimize(problem,
algorithm,
termination=('n_gen', 150),
save_history=True,
seed=1)
print("UNSGA3: Best solution found: \nX = %s\nF = %s" % (res.X, res.F))
# -
# For single- and bi-objective problems, U-NSGA-III adds tournament pressure, which is known to be useful.
# In the following, we provide a quick comparison (just one run here, so not a valid experiment) to see the difference in convergence.
# + code="algorithms/usage_unsga3.py" section="no_unsga3"
_res = minimize(problem,
NSGA3(ref_dirs, pop_size=100),
termination=('n_gen', 150),
save_history=True,
seed=1)
print("NSGA3: Best solution found: \nX = %s\nF = %s" % (res.X, res.F))
# + code="algorithms/usage_unsga3.py" section="unsga3_comp"
import numpy as np
import matplotlib.pyplot as plt
ret = [np.min(e.pop.get("F")) for e in res.history]
_ret = [np.min(e.pop.get("F")) for e in _res.history]
plt.plot(np.arange(len(ret)), ret, label="unsga3")
plt.plot(np.arange(len(_ret)), _ret, label="nsga3")
plt.title("Convergence")
plt.xlabel("Generation")
plt.ylabel("F")
plt.legend()
plt.show()
# + [markdown] raw_mimetype="text/restructuredtext"
# ### API
# + raw_mimetype="text/restructuredtext" active=""
# .. autoclass:: pymoo.algorithms.unsga3.UNSGA3
# :noindex:
| doc/source/algorithms/unsga3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Project - Perceptron
# - Try a Perceptron model with more dimensions
# ### Step 1: Import libraries
import pandas as pd
import numpy as np
from sklearn.linear_model import Perceptron
from sklearn import metrics
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
# %matplotlib inline
# ### Step 2: Read the data
# - Use Pandas [read_csv](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html) method to read **files/weather.csv**
# - HINT: Use **parse_dates=True** and **index_col=0**
data = pd.read_csv('files/weather.csv', parse_dates=True, index_col=0)
data.head()
# ### Step 3: Investigate data
# - Look for missing data points
# - You can do that by applying **isna()** and **sum()**, which will give a summary of rows missing data for each column.
# - Resource: [isna()](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.isna.html)
# - Resource: [sum()](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.sum.html)
data.isna().sum()
data.isnull().sum()
# ### Step 4: Remove 'dirty' columns
# - Make a choice and remove columns with too many missing (NaN) entries.
# - Say, remove all columns with more than 100 missing entries.
# - Also, you can remove columns with non-numeric values (remember to keep **RainTomorrow**)
# - To remove columns use [drop(columns, axis=1)](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop.html)
dataset = data.drop(['WindGustDir', 'WindGustSpeed', 'Cloud9am', 'Cloud3pm', 'WindDir9am', 'WindDir3pm', 'RainToday'], axis=1)
dataset.head()
# ### Step 5: Deal with remaining missing data
# - A simple choice is to simply remove rows with missing data
# - Use [dropna()](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.dropna.html)
dataset_clean = dataset.dropna()
len(dataset), len(dataset_clean)
# ### Step 6: Create training and test datasets
# - Define dataset **X** to consist of all columns of the cleaned data (from Step 5) except **'RainTomorrow'**.
# - Define dataset **y** to be the **'RainTomorrow'** column.
# - Divide into **X_train, X_test, y_train, y_test** with **train_test_split**
# - You can use **random_state=42** (or any other number) if you want to reproduce results.
X = dataset_clean[dataset_clean.columns[:-1]]
y = dataset_clean['RainTomorrow']
y = np.array([0 if value == 'No' else 1 for value in y])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# ### Step 7: Train and test the model
# - Create classifier with **Perceptron**
# - You can use **random_state=0** to be able to reproduce
# - Fit the model with training data **(X_train, y_train**)
# - Predict data from **X_test** (use predict) and assign to **y_pred**.
# - Evalute score by using **metrics.accuracy_score(y_test, y_pred)**.
# - You can redo with different choice of columns
clf = Perceptron(random_state=0)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
metrics.accuracy_score(y_test, y_pred)
sum(y == 0) / len(y)
# ### Step 8 (Optional): Plot the result
# - Use Matplotlib.pyplot (**plt**) with **subplots** to create a figure and axes (**fig, ax**)
# - Predict all the datapoints in **X**.
# - Make a scatter plot with all datapoints in **X** with color by the predictions made.
# - You might want to use **alpha=0.25** in your plot as argument.
fig, ax = plt.subplots()
y_pred = clf.predict(X)
ax.scatter(x=X['Humidity3pm'], y=X['Pressure3pm'], c=y_pred, alpha=.25)
| Machine Learning With Python/01 - Project - Perceptron.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Introduction to SIFT (Scale-Invariant Feature Transform)
# The main four-step procedure
# ### 1. Scale-space Extrema Detection
# DoG (Difference of Gaussians) is used to find extrema in the image across scale-space coordinates.
# If such an extremum exists, it is treated as a potential keypoint.
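#
# As a minimal illustration of the idea (not the full SIFT scale-space pyramid; the sigma values below
# are arbitrary), a single DoG image can be obtained by subtracting two Gaussian-blurred versions of the
# image at different scales:
#
# ```python
# import cv2
# import numpy as np
#
# img = cv2.imread('../data/ex7.png', 0).astype(np.float32)
# g1 = cv2.GaussianBlur(img, (0, 0), sigmaX=1.0)   # blur at sigma = 1.0
# g2 = cv2.GaussianBlur(img, (0, 0), sigmaX=1.6)   # blur at sigma = 1.6 (one scale step up)
# dog = g2 - g1                                    # difference of Gaussians for this scale pair
# ```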
# ### 2. Keypoint Localization
# Once the locations of all potential keypoints in the image have been found, a Taylor expansion is used to refine them and extract the final keypoints for a more accurate result.
# ### 3. Orientation Assignment
# An orientation is assigned to each of the finally extracted keypoints so that they become invariant to rotation.
# ### 4. Keypoint Descriptor
# Compute the keypoint descriptor. This uses image histograms and adds a few extra measurements.
# ### 5.Keypoint Matching
# ## Code
# +
import cv2
import numpy as np
from matplotlib import pyplot as plt
# %matplotlib inline
img = cv2.imread('../data/ex7.png', 0)
sift = cv2.xfeatures2d.SIFT_create() # create the SIFT object
kp, des = sift.detectAndCompute(img, None) # detect keypoints in the image and compute their descriptors
img2 = cv2.drawKeypoints(img, kp, outImage=None) # mark keypoint locations on the image with circles
img3 = cv2.drawKeypoints(img, kp, outImage=None, flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS) # also draw keypoint size and orientation (flags)
plt.figure(figsize=(7,7))
plt.imshow(img2)
plt.axis('off')
plt.show()
plt.figure(figsize=(7,7))
plt.imshow(img3)
plt.axis('off')
plt.show()
# -
# ## Feature matching using Brute-Force matcher
# The simplest way to compare the features of two images is an exhaustive search.
# Given images A and B, take one feature descriptor from image A and compare it, using a distance measure, against every feature descriptor of image B.
# The most similar result is returned. Doing this for every feature descriptor of image A is called Brute-Force (BF) matching.
# +
import numpy as np
import cv2
from matplotlib import pyplot as plt
# %matplotlib inline
img1 = cv2.imread('../data/ex8.png', 0) # target image
img2 = cv2.imread('../data/ex9.png', 0) # original image
# Initiate SIFT detector
sift = cv2.xfeatures2d.SIFT_create()
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
# BFMatcher with default params
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=False)
matches = bf.knnMatch(des1,des2, k=2)
# Apply ratio test
good = []
for m,n in matches:
if m.distance < 0.7*n.distance:
good.append([m])
# cv2.drawMatchesKnn expects list of lists as matches.
img3 = cv2.drawMatchesKnn(img1,kp1,img2,kp2,good,outImg=None,flags=2)
plt.figure(figsize=(13,13))
plt.imshow(img3)
plt.axis('off')
plt.show()
# -
| SIFT.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Import required packages
import os
import shutil
import numpy as np
import sklearn.utils as sku
import Config as conf
import CSV as csv
# +
# Set LOG_DIR & OUTPUT_DIR
LOG_DIR = conf.LOG_DIR.format('VGG16')
OUTPUT_DIR = conf.OUTPUT_DIR.format('VGG16')
# Import CSV data
csi, label, size = csv.getWindows()
# -
# Add new axis for VGG Model
csi = csi[..., np.newaxis]
# +
# Import Keras
import tensorflow as tf
import tensorflow.keras as keras
import tensorflow.keras.applications as ka
import tensorflow.keras.callbacks as kc
import tensorflow.keras.layers as kl
import tensorflow.keras.models as km
import tensorflow.keras.optimizers as ko
import tensorflow.keras.utils as ku
# Select which GPUs are visible to TensorFlow -- comment this out to use all GPUs
os.environ["CUDA_VISIBLE_DEVICES"]="1,2,3"
# Print tensorflow version
print("Tensorflow:", tf.__version__)
print("Keras:", keras.__version__)
# +
# Setup Keras VGG16 Model
model = None
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
adam = ko.Adam(learning_rate=conf.LEARNING_RATE, amsgrad=True)
omodel = ka.VGG16(
input_shape=(size[0], size[1], 1),
classes=conf.ACTION_CNT,
weights=None,
include_top=False
)
flatten = kl.Flatten()(omodel.output)
dense = kl.Dense(conf.ACTION_CNT, activation="softmax")(flatten)
model = km.Model(inputs=omodel.input, outputs=dense)
model.compile(
loss="categorical_crossentropy",
optimizer=adam,
metrics=["accuracy"]
)
model.summary()
# -
# Check output directory and prepare tensorboard
if os.path.exists(OUTPUT_DIR):
shutil.rmtree(OUTPUT_DIR)
os.makedirs(OUTPUT_DIR)
if os.path.exists(LOG_DIR):
shutil.rmtree(LOG_DIR)
os.makedirs(LOG_DIR)
tensorboard = kc.TensorBoard(
log_dir=LOG_DIR,
write_graph=True,
write_images=True,
update_freq=10)
print(
"Your tensorboard command is:"
)
print(" tensorboard --logdir=" + LOG_DIR)
print("Keras checkpoints and final result will be saved in here:")
print(" " + OUTPUT_DIR)
# Run KFold
xx, yy = sku.shuffle(csi, label, random_state=0)
for i in range(conf.KFOLD):
# Roll the data
xx = np.roll(xx, int(len(xx) / conf.KFOLD), axis=0)
yy = np.roll(yy, int(len(yy) / conf.KFOLD), axis=0)
# Data separation
xTrain = xx[int(len(xx) / conf.KFOLD):]
yTrain = yy[int(len(yy) / conf.KFOLD):]
xEval = xx[:int(len(xx) / conf.KFOLD)]
yEval = yy[:int(len(yy) / conf.KFOLD)]
    # If labels come as a single column of class indices, convert them to one-hot (categorical) form
if yEval.shape[1] == 1:
yTrain = ku.to_categorical(yTrain)
yEval = ku.to_categorical(yEval)
# Setup Keras Checkpoint
checkpoint = kc.ModelCheckpoint(OUTPUT_DIR + "K" + str(i + 1) + "_A{val_accuracy:.6f}_L{val_loss:.6f}.h5")
# Fit model (learn)
print(str(i + 1) + " th fitting started. Endpoint is " + str(conf.KFOLD) + " th.")
model.fit(
xTrain,
yTrain,
epochs=conf.EPOCH_CNT,
batch_size=conf.BATCH_SIZE,
shuffle=True,
verbose=1,
callbacks=[tensorboard, checkpoint],
validation_data=(xEval, yEval),
validation_freq=1,
use_multiprocessing=True)
print("Epoch completed!")
# Saving model
print("Saving model & model information...")
modelYML = model.to_yaml()
with open(OUTPUT_DIR + "model.yml", "w") as yml:
yml.write(modelYML)
modelJSON = model.to_json()
with open(OUTPUT_DIR + "model.json", "w") as json:
json.write(modelJSON)
model.save(OUTPUT_DIR + "model.h5")
print('Model saved!')
# +
# Finished
| train-xy/VGG16.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
"""Manually build a mesh."""
from vedo import *
#embedWindow('itkwidgets') # or k3d
verts = [(50,50,50), (70,40,50), (50,40,80), (80,70,50)]
faces = [(0,1,2), (2,1,3), (1,0,3)]
# (the first triangle face is formed by vertex 0, 1 and 2)
m = Mesh([verts, faces])
# the way vertices are assembled into polygons can be retrieved
# in two different formats:
printc('points():\n', m.points())
printc('faces(): \n', m.faces())
m.show(axes=1)
# -
| examples/notebooks/basic/buildmesh.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
from scipy import stats
array = np.array([1, 1, 5, 0, 1, 2, 2, 0, 1, 4])  # numeric example (overwritten below)
array = np.array(['gabriela', 'patrícia', 'samantha', 'gabriela'])  # string example used from here on
frequency = stats.itemfreq(array)
print(frequency)
xi = frequency[:, 0]
print(xi)
fi = frequency[:, 1]
print(fi)
fi = fi.astype(int)
print(fi)
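# Note: `scipy.stats.itemfreq` is deprecated in newer SciPy releases. A minimal equivalent for the
# frequency table above using `np.unique`:
xi_alt, fi_alt = np.unique(array, return_counts=True)
print(xi_alt)
print(fi_alt)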
# +
# %matplotlib notebook
import matplotlib.pyplot as plt
x_pos = np.arange(len(xi))
plt.figure(1)
plt.bar(x_pos, fi, align='center')
plt.ylim(0, max(fi) + 0.5)
plt.xticks(np.arange(3), xi)
# +
# %matplotlib notebook
import matplotlib.pyplot as plt
x_pos = np.arange(len(xi))
print(x_pos)
plt.figure(1)
plt.bar(x_pos, fi,align='center')
plt.ylim(0, max(fi) + 0.5)
plt.xticks(np.arange(len(xi)), xi)
plt.xlabel("xi")
plt.ylabel("fi")
# -
| frequencies/frequency.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import pandas as pd
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from boruta import BorutaPy
csv_directory = os.getcwd()[:-17]+'\\final_model_logistic_regression\\'
csv_file = 'standardized_features.csv'
csv_path = os.path.join(csv_directory, csv_file)
dataset = pd.read_csv(csv_path, delimiter=';')
pd.set_option('display.max_columns', 39)
dataset.head()
# # Feature Selection Method using the Boruta package
dataset = pd.get_dummies(dataset, drop_first=True)
dataset.shape
features = [f for f in dataset.columns if f not in ['is_featured_Yes']]
len(features)
dataset[features] = dataset[features].fillna(dataset[features].mean()).clip(-1e9,1e9)
X = dataset[features].values
Y = dataset['is_featured_Yes'].values.ravel()
rf = RandomForestClassifier(n_jobs=-1, class_weight='balanced', max_depth=5)
boruta_feature_selector = BorutaPy(rf, n_estimators='auto', verbose=2, random_state=3537, max_iter=50, perc=90)
boruta_feature_selector.fit(X, Y)
# +
# final_features = list()
feature_list = list()
importance_values_list = list()
decision_list = list()
# indexes = np.where(boruta_feature_selector.support_ == True)
for x in range(0, len(dataset.columns)-1):
# final_features.append(features[x])
feature_list.append(features[x])
importance_values_list.append(rf.feature_importances_[x])
    if boruta_feature_selector.support_[x]:
decision_list.append('Confirmed')
else:
decision_list.append('Rejected')
results = pd.DataFrame()
results['Feature'] = feature_list
results['Importance Value'] = importance_values_list
results['Decision'] = decision_list
results = results.sort_values(by='Importance Value', ascending=False)
results.to_csv('boruta_features_selected.csv', sep=';', index=False)
# +
def make_bold(s):
if s['Decision'] == 'Confirmed':
return ['font-weight: bold']*3
else:
return ['font-weight: normal']*3
features_selected = pd.read_csv('boruta_features_selected.csv', delimiter=';')
features_selected.style.apply(make_bold, axis=1)
# -
| analysis/notebook/feature_selection/Boruta_feature_selection_notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Federated Learning Training Plan: Host Plan & Model
#
# Here we load Plan and Model params created earlier in "Create Plan" notebook
# and host them on PyGrid.
#
# After that it should be possible to run FL worker using
# SwiftSyft, KotlinSyft, syft.js, or FL python worker
# and train the hosted model using local worker's data.
# + pycharm={"name": "#%%\n"}
# %load_ext autoreload
# %autoreload 2
import warnings
warnings.filterwarnings("ignore")
import websockets
import json
import requests
import torch
import syft as sy
from syft.grid.clients.static_fl_client import StaticFLClient
from syft.serde import protobuf
from syft_proto.execution.v1.plan_pb2 import Plan as PlanPB
from syft_proto.execution.v1.state_pb2 import State as StatePB
sy.make_hook(globals())
# force protobuf serialization for tensors
hook.local_worker.framework = None
# + pycharm={"name": "#%%\n"}
async def sendWsMessage(data):
async with websockets.connect('ws://' + gatewayWsUrl) as websocket:
await websocket.send(json.dumps(data))
message = await websocket.recv()
return json.loads(message)
def deserializeFromBin(worker, filename, pb):
with open(filename, "rb") as f:
        data = f.read()  # avoid shadowing the built-in bin()
    pb.ParseFromString(data)
return protobuf.serde._unbufferize(worker, pb)
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Step 4a: Host in PyGrid
#
# Here we load "ops list" Plan.
# PyGrid should translate it to other types (e.g. torchscript) automatically.
# + pycharm={"name": "#%% \n"}
# Load files with protobuf created in "Create Plan" notebook.
training_plan = deserializeFromBin(hook.local_worker, "tp_full.pb", PlanPB())
model_params_state = deserializeFromBin(hook.local_worker, "model_params.pb", StatePB())
# + [markdown] pycharm={"name": "#%% md\n"}
# Follow PyGrid README.md to build `openmined/grid-gateway` image from the latest `dev` branch
# and spin up PyGrid using `docker-compose up --build`.
# + pycharm={"name": "#%%\n"}
# Default gateway address when running locally
gatewayWsUrl = "127.0.0.1:5000"
grid = StaticFLClient(id="test", address=gatewayWsUrl, secure=False)
grid.connect()
# + [markdown] pycharm={"name": "#%% md\n"}
# Define name, version, configs.
# + pycharm={"name": "#%%\n"}
# These name/version you use in worker
name = "mnist"
version = "1.0.0"
client_config = {
"name": name,
"version": version,
"batch_size": 64,
"lr": 0.005,
"max_updates": 100 # custom syft.js option that limits number of training loops per worker
}
server_config = {
"min_workers": 3,
"max_workers": 3,
"pool_selection": "random",
"num_cycles": 5,
"do_not_reuse_workers_until_cycle": 4,
"cycle_length": 28800,
"max_diffs": 3, # number of diffs to collect before avg
"minimum_upload_speed": 0,
"minimum_download_speed": 0,
}
# + [markdown] pycharm={"name": "#%% md\n"}
# ### Authentication (optional)
# Let's additionally protect the model with simple authentication for workers.
#
# PyGrid supports authentication via JWT token (HMAC, RSA) or opaque token
# via remote API.
#
# We'll try JWT/RSA. Suppose we generate RSA keys:
# ```
# openssl genrsa -out private.pem
# openssl rsa -in private.pem -pubout -out public.pem
# ```
# + pycharm={"name": "#%%\n"}
private_key = """
-----<KEY>
""".strip()
public_key = """
-----BEGIN PUBLIC KEY-----
<KEY>
-----END PUBLIC KEY-----
""".strip()
# + [markdown] pycharm={"name": "#%% md\n"}
# If we set __public key__ into model authentication config,
# then PyGrid will validate that submitted JWT auth token is signed with private key.
# + pycharm={"name": "#%%\n"}
server_config["authentication"] = {
"type": "jwt",
"pub_key": public_key,
}
# + [markdown] pycharm={"name": "#%% md\n"}
# Shoot!
#
# If everything's good, success is returned.
# If the name/version already exists in PyGrid, change them above or cleanup PyGrid db by re-creating docker containers (e.g. `docker-compose up --force-recreate`).
#
# + pycharm={"name": "#%%\n"}
response = grid.host_federated_training(
model=model_params_state,
client_plans={'training_plan': training_plan},
client_protocols={},
server_averaging_plan=None,
client_config=client_config,
server_config=server_config
)
print("Host response:", response)
# + [markdown] pycharm={"name": "#%% md\n"}
# Let's double-check that data is loaded by requesting a cycle.
#
# First, create authentication token.
# + pycharm={"name": "#%%\n"}
# !pip install pyjwt[crypto]
import jwt
token = jwt.encode({}, private_key, algorithm='RS256')
# PyJWT < 2.0 returns bytes while >= 2.0 returns str; normalize to str either way
auth_token = token.decode('ascii') if isinstance(token, bytes) else token
print(auth_token)
# -
# Make authentication request:
# + pycharm={"name": "#%%\n"}
auth_request = {
"type": "federated/authenticate",
"data": {
"model_name": name,
"model_version": version,
"auth_token": auth_token,
}
}
auth_response = await sendWsMessage(auth_request)
print('Auth response: ', json.dumps(auth_response, indent=2))
# + pycharm={"name": "#%%\n"}
cycle_request = {
"type": "federated/cycle-request",
"data": {
"worker_id": auth_response['data']['worker_id'],
"model": name,
"version": version,
"ping": 1,
"download": 10000,
"upload": 10000,
}
}
cycle_response = await sendWsMessage(cycle_request)
print('Cycle response:', json.dumps(cycle_response, indent=2))
worker_id = auth_response['data']['worker_id']
request_key = cycle_response['data']['request_key']
model_id = cycle_response['data']['model_id']
training_plan_id = cycle_response['data']['plans']['training_plan']
# + [markdown] pycharm={"name": "#%% md\n"}
# Let's download model and plan (both versions) and check they are actually workable.
#
# + pycharm={"name": "#%%\n"}
# Model
req = requests.get(f"http://{gatewayWsUrl}/federated/get-model?worker_id={worker_id}&request_key={request_key}&model_id={model_id}")
model_data = req.content
pb = StatePB()
pb.ParseFromString(req.content)
model_params_downloaded = protobuf.serde._unbufferize(hook.local_worker, pb)
print("Params shapes:", [p.shape for p in model_params_downloaded.tensors()])
# + pycharm={"name": "#%%\n"}
# Plan "list of ops"
req = requests.get(f"http://{gatewayWsUrl}/federated/get-plan?worker_id={worker_id}&request_key={request_key}&plan_id={training_plan_id}&receive_operations_as=list")
pb = PlanPB()
pb.ParseFromString(req.content)
plan_ops = protobuf.serde._unbufferize(hook.local_worker, pb)
print(plan_ops.code)
print(plan_ops.torchscript)
# + pycharm={"name": "#%%\n"}
# Plan "torchscript"
req = requests.get(f"http://{gatewayWsUrl}/federated/get-plan?worker_id={worker_id}&request_key={request_key}&plan_id={training_plan_id}&receive_operations_as=torchscript")
pb = PlanPB()
pb.ParseFromString(req.content)
plan_ts = protobuf.serde._unbufferize(hook.local_worker, pb)
print(plan_ts.code)
print(plan_ts.torchscript.code)
# + pycharm={"name": "#%%\n"}
# Plan "tfjs"
req = requests.get(f"http://{gatewayWsUrl}/federated/get-plan?worker_id={worker_id}&request_key={request_key}&plan_id={training_plan_id}&receive_operations_as=tfjs")
pb = PlanPB()
pb.ParseFromString(req.content)
plan_tfjs = protobuf.serde._unbufferize(hook.local_worker, pb)
print(plan_tfjs.code)
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Step 5a: Train
#
# To train hosted model, use one of the existing FL workers:
# * Python FL Client: see "[Execute Plan with Python FL Client](Execute%20Plan%20with%20Python%20FL%20Client.ipynb)" notebook that
# has example of using python FL worker.
# * [SwiftSyft](https://github.com/OpenMined/SwiftSyft)
# * [KotlinSyft](https://github.com/OpenMined/KotlinSyft)
# * [syft.js](https://github.com/OpenMined/syft.js)
#
#
#
| examples/experimental/FL Training Plan/Host Plan.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.5 64-bit (''tf2'': conda)'
# language: python
# name: python37564bittf2condaf9656480b0924f55a4cf395f6bffedd2
# ---
import numpy as np
import pandas as pd
from matplotlib import pyplot
from numpy import unique
from numpy import argmax
from tensorflow.keras.datasets.mnist import load_data
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import MaxPool2D
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dropout
import time
# +
train = pd.read_csv("../Data/mnist_train.csv", header=None)
test = pd.read_csv("../Data/mnist_test.csv", header=None)
trainX = train[train.columns[1:]].to_numpy().reshape((-1, 28, 28))
trainy = train[train.columns[0]].to_numpy()
testX = test[test.columns[1:]].to_numpy().reshape((-1, 28, 28))
testy = test[test.columns[0]].to_numpy()
# +
print('Train: X=%s, y=%s' % (trainX.shape, trainy.shape))
print('Test: X=%s, y=%s' % (testX.shape, testy.shape))
for i in range(25):
pyplot.subplot(5, 5, i+1)
pyplot.imshow(trainX[i], cmap=pyplot.get_cmap('gray'))
pyplot.show()
# -
trainX = trainX.reshape((trainX.shape[0], trainX.shape[1], trainX.shape[2], 1))
testX = testX.reshape((testX.shape[0], testX.shape[1], testX.shape[2], 1))
in_shape = trainX.shape[1:]
in_shape
n_classes = len(unique(trainy))
n_classes
# Normalize
trainX = trainX.astype('float32') / 255.0
testX = testX.astype('float32') / 255.0
# Create CNN Model
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', input_shape=in_shape))
model.add(MaxPool2D((2, 2), strides=(2,2)))
model.add(Conv2D(32, (2, 2), activation='relu', kernel_initializer='he_uniform', input_shape=in_shape))
model.add(MaxPool2D((2, 2), strides=(2,2)))
model.add(Flatten())
model.add(Dense(500, activation='relu', kernel_initializer='he_uniform'))
# model.add(Dropout(0.5))
model.add(Dense(n_classes, activation='softmax'))
model.summary()
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# %%time
# Train
start_time = time.time()
model.fit(trainX, trainy, epochs=10, batch_size=128, verbose=1)
elapsed_time = time.time() - start_time
print(f"Time: {elapsed_time}")
# Evaluate
loss, acc = model.evaluate(testX, testy, verbose=0)
print('Accuracy: %.3f' % acc)
| TF2/MNIST_TF2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Cell Magics in IPython
# IPython has a system of commands we call 'magics' that provide a mini command language that is orthogonal to the syntax of Python and is extensible by the user with new commands. Magics are meant to be typed interactively, so they use command-line conventions, such as using whitespace for separating arguments, dashes for options and other conventions typical of a command-line environment.
#
# Magics come in two kinds:
#
# * Line magics: these are commands prepended by one `%` character and whose arguments only extend to the end of the current line.
# * Cell magics: these use *two* percent characters as a marker (`%%`), and they receive as argument *both* the current line where they are declared and the whole body of the cell. Note that cell magics can *only* be used as the first line in a cell, and as a general principle they can't be 'stacked' (i.e. you can only use one cell magic per cell). A few of them, because of how they operate, can be stacked, but that is something you will discover on a case by case basis.
#
# The `%lsmagic` magic is used to list all available magics, and it will show both line and cell magics currently defined:
# + jupyter={"outputs_hidden": false}
# %lsmagic
# -
# Since in the introductory section we already covered the most frequently used line magics, we will focus here on the cell magics, which offer a great amount of power.
#
# Let's load matplotlib and numpy so we can use numerics/plotting at will later on.
# + jupyter={"outputs_hidden": false}
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# -
# ## Some simple cell magics
# Timing the execution of code; the 'timeit' magic exists both in line and cell form:
# + jupyter={"outputs_hidden": false}
# %timeit np.linalg.eigvals(np.random.rand(100,100))
# + jupyter={"outputs_hidden": false}
# %%timeit a = np.random.rand(100, 100)
np.linalg.eigvals(a)
# -
# The `%%capture` magic can be used to capture the stdout/err of any block of python code, either to discard it (if it's noise to you) or to store it in a variable for later use:
# + jupyter={"outputs_hidden": false}
# %%capture capt
from __future__ import print_function
import sys
print('Hello stdout')
print('and stderr', file=sys.stderr)
# + jupyter={"outputs_hidden": false}
capt.stdout, capt.stderr
# + jupyter={"outputs_hidden": false}
capt.show()
# -
# The `%%writefile` magic is a very useful tool that writes the cell contents as a named file:
# + jupyter={"outputs_hidden": false}
# %%writefile foo.py
print('Hello world')
# + jupyter={"outputs_hidden": false}
# %run foo
# -
# ## Magics for running code under other interpreters
# IPython has a `%%script` cell magic, which lets you run a cell in
# a subprocess of any interpreter on your system, such as: bash, ruby, perl, zsh, R, etc.
#
# It can even be a script of your own, which expects input on stdin.
# To use it, simply pass a path or shell command to the program you want to run on the `%%script` line,
# and the rest of the cell will be run by that script, and stdout/err from the subprocess are captured and displayed.
# + jupyter={"outputs_hidden": false} magic_args="python2" language="script"
# import sys
# print 'hello from Python %s' % sys.version
# + jupyter={"outputs_hidden": false} magic_args="python3" language="script"
# import sys
# print('hello from Python: %s' % sys.version)
# -
# IPython also creates aliases for a few common interpreters, such as bash, ruby, perl, etc.
#
# These are all equivalent to `%%script <name>`
# + jupyter={"outputs_hidden": false} language="ruby"
# puts "Hello from Ruby #{RUBY_VERSION}"
# + jupyter={"outputs_hidden": false} language="bash"
# echo "hello from $BASH"
# -
# ## Capturing output
# You can also capture stdout/err from these subprocesses into Python variables, instead of letting them go directly to stdout/err
# + jupyter={"outputs_hidden": false} language="bash"
# echo "hi, stdout"
# echo "hello, stderr" >&2
#
# + jupyter={"outputs_hidden": false} magic_args="--out output --err error" language="bash"
# echo "hi, stdout"
# echo "hello, stderr" >&2
# + jupyter={"outputs_hidden": false}
print(error)
print(output)
# -
# ## Background Scripts
# These scripts can be run in the background, by adding the `--bg` flag.
#
# When you do this, output is discarded unless you use the `--out/err`
# flags to store output as above.
# + jupyter={"outputs_hidden": false} magic_args="--bg --out ruby_lines" language="ruby" active=""
# for n in 1...10
# sleep 1
# puts "line #{n}"
# STDOUT.flush
# end
# -
# When you do store output of a background thread, these are the stdout/err *pipes*,
# rather than the text of the output.
# + jupyter={"outputs_hidden": false} active=""
# ruby_lines
# + jupyter={"outputs_hidden": false} active=""
# print(ruby_lines.read().decode('utf8'))
# -
# ## Cleanup
# !rm -f foo.py
| 001-Jupyter/001-Tutorials/003-IPython-in-Depth/examples/IPython Kernel/Cell Magics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
library(caret)
library(data.table)
# +
path = '/home/zongyi/bimbo_data/'
train <- fread(paste0(path, 'test_fs.csv'), select=c('prior_sum','lag_sum'))
# -
train[is.na(train)] <- 0
train <- train[1:1000]
# 'lag1' is not among the columns selected by fread above, so use 'prior_sum' here
c2 <- chisq.test(train$lag_sum, train$prior_sum)
print(c2)
fcor <- cor(train)
fcor
sum(abs(fcor[upper.tri(fcor)]))
highCorr <- sum(abs(fcor[upper.tri(fcor)]) > .995)
highCorr
summary(fcor[upper.tri(fcor)])
| Bimbo/.ipynb_checkpoints/R_correlation-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [Root]
# language: python
# name: Python [Root]
# ---
# HIDDEN
from datascience import *
from prob140 import *
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
# %matplotlib inline
# +
# HIDDEN
def joint_probability(x, y):
    if x == 1 and y == 1:
return 2/8
elif abs(x - y) < 2:
return 1/8
else:
return 0
k = np.arange(3)
joint_table = Table().values('X', k, 'Y', k).probability_function(joint_probability)
joint_dist = joint_table.to_joint()
# -
# ## Conditional Distributions ##
# To understand the relation between two variables you must examine the conditional behavior of each of them given the value of the other. Towards this goal, we will start by examining the simple example of the previous sections and then develop the general theory.
# In our example about heads in three tosses of a coin, where $X$ is the number of heads in the first two tosses and $Y$ the number of heads in the last two tosses, the joint distribution of $X$ and $Y$ and the two marginals are displayed in the table below.
joint_dist.both_marginals()
# Given a particular value $x$ of $X$, that is, given that there were $x$ heads in the first two tosses, we can work out the *conditional distribution* of the number of heads in the last two tosses.
#
# In random variable language, for each value $x$ of $X$, the random variable $Y$ has a conditional distribution given $X = x$. We can get all three of these conditional distributions as follows:
# +
# conditional distribution of Y given each different value of X
joint_dist.conditional_dist('Y', 'X')
# -
# To understand this table, start with the first column. In that column, the given condition is $X = 0$, that is, there were no heads in the first two tosses. Under this condition, $Y$ can't be 2, which is why you see the probability 0 in the top cell. If $X = 0$ then $Y$ can only be 1 or 0, according to whether the third toss was a head or a tail. That explains the probability of 0.5 in each of the remaining two cells.
#
# You can work out all the other probabilities in this way. But you don't have to go back to the original outcomes to figure out these conditional probabilities. Just use the joint distribution table displayed at the start of the example, and the division rule.
#
# For example,
#
# $$
# P(Y = 1 \mid X = 0) = \frac{P(X = 0, Y = 1)}{P(X = 0)} = \frac{0.125}{0.25} = 0.5
# $$
#
# It is easy to see why each column in the table of conditional distributions sums to 1. Each cell in one column is obtained from the joint distribution table by taking the corresponding cell in that table and dividing it by the sum of its column. So the columns in the resulting table sum to 1.
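# To make the division rule concrete, here is a small sketch (added for illustration, using plain NumPy rather than the `prob140` table methods above) that recovers the conditional distributions directly from `joint_probability`: dividing each column of the joint table by its column sum reproduces the probabilities shown by `conditional_dist('Y', 'X')`, up to row ordering.
# +
joint = np.array([[joint_probability(x, y) for x in k] for y in k])  # rows: Y = 0, 1, 2; columns: X = 0, 1, 2
marginal_X = joint.sum(axis=0)                                       # P(X = x) for each column
conditional_Y_given_X = joint / marginal_X                           # divide every column by its column sum
print(conditional_Y_given_X)
print(conditional_Y_given_X.sum(axis=0))                             # each column sums to 1
# -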
# ### The Theory ###
# We will now generalize the calculations we did in the example above.
#
# Let $X$ and $Y$ be two random variables defined on the same space. If $x$ is a possible value of $X$, and $y$ is a possible value of $Y$, then
# $$
# P(Y = y \mid X = x) = \frac{P(X = x, Y = y)}{P(X = x)}
# $$
#
# Therefore, for a fixed value $x^*$ of $X$, the *conditional distribution* of $Y$, given $X = x^*$ is the collection of probabilities
# $$
# P(Y = y \mid X = x^*) = \frac{P(X = x^*, Y = y)}{P(X = x^*)}
# $$
# where $y$ ranges over all the values of $Y$. Keep in mind that $y$ represents values of the variable here. The value $x^*$ is the particular value of $X$ that was observed; it is a constant.
#
# ### The Probabilities in a Conditional Distribution Sum to 1 ###
# In a distribution, the probabilities have to sum to 1. To see that this is true for the conditional distribution defined above, start by using the fundamental rule.
#
# Find $P(X = x^*)$ by partitioning the event $\{ X = x^* \}$ according to the values of $Y$:
#
# $$
# P(X = x^*) = \sum_{\text{all }y} P(X = x^*, Y = y)
# $$
#
# Now let's sum the probabilities in the conditional distribution of $Y$ given $X = x^*$, and see if the sum is 1.
#
# \begin{align*}
# \sum_{\text{all }y} P(Y = y \mid X = x^*) &=
# \sum_{\text{all }y} \frac{P(X = x^*, Y = y)}{P(X = x^*)} \\ \\
# &= \frac{1}{P(X = x^*)} \sum_{\text{all }y} P(X = x^*, Y = y) \\
# &= \frac{1}{P(X = x^*)} \cdot P(X = x^*) \\
# &= 1
# \end{align*}
| notebooks/Chapter_04/03_Conditional_Distributions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import numpy as np
import copy
featuresDir = '/scratch/mohsin/final_features_pca/'
# +
image_names = os.listdir(featuresDir)
image_names.sort()
print(len(image_names))
# -
meanf = None
for i, imageName in enumerate(image_names):
print(i)
features = np.load(featuresDir + imageName)[0,:]
    if meanf is None:
meanf = copy.deepcopy(features)
meanf = ((i * meanf) + features) / (1 + i)
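# normalize() applies a signed power-law rescaling of each feature's deviation from the mean:
# (x - mean) / |x - mean|**k  ==  sign(x - mean) * |x - mean|**(1 - k)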
def normalize(x, mean, k):
return np.divide((x - mean), np.absolute(x - mean) ** k)
np.set_printoptions(precision=3)
np.set_printoptions(suppress=True)
for i, imageName in enumerate(image_names):
if i > 10:
break
print(i)
features = np.load(featuresDir + imageName)[0,:]
features = normalize(features, meanf, 0.8)
print(features)
# +
normalizationPath = '/home/kshitij98/style/final_features_pca/'
np.save(normalizationPath + 'mean.npy', meanf)
| Dimensionality-Reduction/src/Normalization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="nCc3XZEyG3XV"
# Lambda School Data Science
#
# *Unit 2, Sprint 3, Module 1*
#
# ---
#
#
# # Define ML problems
#
# You will use your portfolio project dataset for all assignments this sprint.
#
# ## Assignment
#
# Complete these tasks for your project, and document your decisions.
#
# - [ ] Choose your target. Which column in your tabular dataset will you predict?
# - [ ] Is your problem regression or classification?
# - [ ] How is your target distributed?
# - Classification: How many classes? Are the classes imbalanced?
# - Regression: Is the target right-skewed? If so, you may want to log transform the target.
# - [ ] Choose your evaluation metric(s).
# - Classification: Is your majority class frequency >= 50% and < 70% ? If so, you can just use accuracy if you want. Outside that range, accuracy could be misleading. What evaluation metric will you choose, in addition to or instead of accuracy?
# - Regression: Will you use mean absolute error, root mean squared error, R^2, or other regression metrics?
# - [ ] Choose which observations you will use to train, validate, and test your model.
# - Are some observations outliers? Will you exclude them?
# - Will you do a random split or a time-based split?
# - [ ] Begin to clean and explore your data.
# - [ ] Begin to choose which features, if any, to exclude. Would some features "leak" future information?
#
# If you haven't found a dataset yet, do that today. [Review requirements for your portfolio project](https://lambdaschool.github.io/ds/unit2) and choose your dataset.
#
# Some students worry, ***what if my model isn't “good”?*** Then, [produce a detailed tribute to your wrongness. That is science!](https://twitter.com/nathanwpyle/status/1176860147223867393)
# -
import pandas as pd
import numpy as np
df = pd.read_csv("E:/NotesAssignments/Unit-2/DS-Unit-2-Applied-Modeling/data/project-data/LoL-Ranked-Data.csv")
df.head()
df.set_index('gameId',inplace=True)
df.head()
target = "winner"
# My problem is classification. Team 1 wins or Team 2 wins.
df['winner'].value_counts(normalize=True)
# My target seems to be evenly distributed.
# Since my target is evenly distributed I can simply use accuracy
df.columns
features = ['firstBlood',
'firstTower',
'firstInhibitor',
'firstBaron',
'firstDragon',
'firstRiftHerald',
'gameDuration']
# I will use the features listed above to train my data. And I will use a random split in order to split my data.
df['firstBlood'].value_counts(normalize=True)
df['firstTower'].value_counts(normalize=True)
df['firstInhibitor'].value_counts(normalize=True)
# While exploring the data I found that there are some games where no towers or inhibitors were taken. The destruction of these objectives is a prerequisite for victory in any game. The fact that some games have a winner (when I looked at the target data there were no stalemates: either team 1 wins or team 2 wins) even though no one captured these objectives makes me believe that either 1) the data may be wrong, or 2) the losing team surrendered for some reason before any of these objectives were captured by the opposing team.
#
# I do not believe that any of these features would leak future information, because none of them directly encodes whether the nexus was destroyed.
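# As a quick sanity check of the observation above (added for illustration; it assumes that a value of 0 in `firstTower`/`firstInhibitor` means neither team took that objective, consistent with the value_counts output), we can count how many decided games ended without those objectives:
# +
no_tower = (df['firstTower'] == 0).sum()
no_inhibitor = (df['firstInhibitor'] == 0).sum()
print(f"Games with a winner but no tower taken: {no_tower}")
print(f"Games with a winner but no inhibitor taken: {no_inhibitor}")
# -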
| module1-define-ml-problems/LS_DS_231_assignment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Chemical Dose Controller (CDC) Simplification
# **Team: MIC**
#
# **Team members: <NAME>, <NAME>, <NAME>, <NAME>**
# ---
#
# ## Abbreviation
#
# CDC: chemical dose controller
#
# Chemical feeding rate: the flow rate of chemical dosage
# ---
#
# ## Introduction
#
# In order to achieve easier operation for operators and better performance for water treatment facilities, we plan to substitute the LFOM and slider with a single valve, which allows operators to change the chemical dosage by adjusting the valve opening. In that way, we change the mechanism of the CDC from major loss control to minor loss control.
#
# To achieve minor loss control, we make the minor loss from the expansion through the valve dominate the total head loss. To reduce the major loss, we replace the long, straight, slim dosing tube set with a shorter, larger-diameter tube.
#
# We remove the LFOM and leave only an orifice in the bottom of the entrance tank, which makes the water level proportional to the square of the plant flow rate. Likewise, the head loss through the dosing valve is proportional to the square of the chemical feeding rate. By using an equal-length lever, we ensure that the water elevation change in the entrance tank equals the head loss in the chemical feeding system, so the chemical feeding rate stays proportional to the (nonlinear) entrance tank flow rate.
#
# Our main concern is whether a valve can keep the minor loss dominant. We set up equations and do an error analysis to test our assumptions and build reliability into the system. Finally, we compare the new CDC with the current CDC.
# ---
#
# ## 2-D layout of our system
# 
# ## <center>Figure 1 2-D layout of our design with zero entrance tank flow rate</center>
# 
# ## <center>Figure 2 2-D layout of our design with certain entrance tank flow rate</center>
# ---
#
# ## 3D layout of our system
# 
# 
# 
# 
# 
# ## <center>Figure 3 3-D layout of our design</center>
# ---
#
# ## Theoretical Basis
#
#
# **First**, the flow rate through a horizontal orifice at the bottom of the tank is proportional to the square root of the water elevation, as shown in the formula in Figure 4 (and written out below the figure).
#
# 
#
# ## <center>Figure 4 Entrance tank flow rate</center>
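# For reference (added here for readability; it is the same relation applied in the Calculation section below, with $\Pi_{vc}$ the vena contracta coefficient), the orifice equation of Figure 4 is
#
# $$Q_{EntranceTank}=\Pi_{vc}\,A_{TankOrifice}\,\sqrt{2\,g\,H_{WaterElevation}}$$
#
# so the water elevation scales with the square of the entrance tank flow rate.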
# **Second**, the minor loss coefficient is determined by the cross-sectional area of the incoming flow ($A_{in}$) and of the outgoing flow ($A_{out}$) through the orifice, if we use the velocity after the expansion ($V_{out}$) to calculate the minor loss in the tube:
# $$K_{Minor}=\left(\frac{A_{out}}{A_{in}}-1\right)^{2}$$
# $A_{out}$ is the cross-sectional area of the tube connecting to the lever, and $A_{in}$ is the cross-sectional area of the flow inside the valve after contraction; $A_{in}$ equals the cross-sectional area of the valve orifice multiplied by the vena contracta coefficient.
# $$A_{out}=A_{tube}$$
# $$A_{tube}=\frac{\pi\,D_{tube}^{2}}{4}$$
# $$A_{in}=\Pi_{vc}\,A_{ValveOrifice}$$
# $$A_{in}=\frac{\Pi_{vc}\,\pi\,D_{ValveOrifice}^{2}}{4}$$
# $$K_{Minor}=\left(\frac{A_{tube}}{\Pi_{vc}\,A_{ValveOrifice}}-1\right)^{2}$$
# We assume the vena contracta coefficient at the tank orifice is the same as that in the valve.
# **Third**, according to the formula of minor loss:
# $$H_{MinorLoss}=\frac{K_{Minor}\,V^{2}}{2\,g}$$
#
# We can express the outflow velocity in terms of the head loss and the cross-sectional areas:
# $$V_{out}=\frac{\Pi_{vc}\,A_{ValveOrifice}}{A_{tube}-\Pi_{vc}\,A_{ValveOrifice}}\,\left(2\,g\,H_{MinorLoss}\right)^{0.5}$$
#
# The chemical feeding rate could also be expressed:
# $$Q_{Chemical}=A_{tube}\,V_{out}$$
#
# $$Q_{Chemical}=\frac{\Pi_{vc}\,A_{tube}\,A_{ValveOrifice}}{A_{tube}-\Pi_{vc}\,A_{ValveOrifice}}\,\left(2\,g\,H_{MinorLoss}\right)^{0.5}$$
# **Fourth**, we use an equal-length lever so that the head loss across the tank orifice equals the head loss of the chemical dosing line.
# $$H_{WaterElevation}=H_{MinorLoss}$$
#
# $$\frac{Q_{EntranceTank}}{A_{TankOrifice}}=\Pi_{vc}\,\left(2\,g\,H_{MinorLoss}\right)^{0.5}$$
#
# Then the chemical feeding rate is directly related to the raw water flow rate in the entrance tank:
# $$Q_{Chemical}=\frac{Q_{EntranceTank}\,A_{tube}}{A_{TankOrifice}}\,\frac{A_{ValveOrifice}}{A_{tube}-\Pi_{vc}\,A_{ValveOrifice}}$$
#
# Thus, the chemical feeding rate is proportional to the entrance tank flow rate, and changing the valve orifice area also changes the chemical feeding rate.
#
# At the beginning, we assumed that if the outflow cross-sectional area is far larger than the inflow cross-sectional area, the chemical feeding rate would be linear in the valve orifice area, which in turn would make the chemical concentration after mixing linear in the orifice area. However, we later found that this linear relationship only holds for very small orifice areas and is not applicable over the whole operating range. Thus, we plot the chemical concentration after mixing against the valve orifice diameter instead.
#
# **Fifth**, the chemical concentration after mixing follows from a mass balance:
# $$C_{Stock}\,Q_{Chemical}=\left(Q_{Chemical}+Q_{EntranceTank}\right)C_{Mix}$$
#
# Based on this equation and equations above, we can derive the relationship between chemical concentration after mixing and valve orifice cross sectional area.
# $$C_{Mix}=\frac{{Q_{Chemical}}{C_{Stock}}}{Q_{Chemical}+Q_{EntranceTank}}$$
#
# $$C_{mix}=\frac{C_{Stock}{A_{tube}\,A_{ValveOrifice}}}{{A_{TankOrifice}\,\left(A_{tube}-\Pi_{vc}\,A_{ValveOrifice}\right)}+{A_{tube}\,A_{ValveOrifice}}}$$
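# As a quick numerical illustration of the last two formulas (added for clarity; it uses plain Python without the unit handling of the Calculation section, and plugs in the design values derived there: 13.3 g/L stock, a 1/4 inch tube, a 20 cm tank orifice, and a roughly 0.0924 inch valve orifice), the mixing concentration can be evaluated directly:
# +
import math
def mix_concentration(C_stock, D_valve_orifice, D_tube, D_tank_orifice, vc=0.62):
    """Chemical concentration after mixing, from the derived area-ratio formula."""
    A_vo = math.pi * D_valve_orifice**2 / 4
    A_tube = math.pi * D_tube**2 / 4
    A_tank = math.pi * D_tank_orifice**2 / 4
    ratio = A_tube * A_vo / (A_tank * (A_tube - vc * A_vo))  # Q_Chemical / Q_EntranceTank
    return C_stock * ratio / (1 + ratio)
inch = 0.0254  # meters
# expected to come out near the 2 mg/L design target of the Calculation section
print(mix_concentration(13300, 0.0924 * inch, 0.25 * inch, 0.20))  # mg/L, since C_stock is given in mg/L
# -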
# ---
#
# ## Valve and Reducer
#
# 1. Valve
#
# Our calculations are based on the valve we picked. The producers we considered are Swagelok (https://www.swagelok.com/en) and <NAME> (http://www.gfps.com/country_US/en_US/profile/locations.html).
#
# There are several constraints on our valve.
#
# **First**, to meet AguaClara standards, the valve must not require the use of electricity.
#
# **Second**, our design goal is to keep the minor loss dominant, which requires a relatively low chemical flow rate, so the valve should function normally even at small flow rates and fluid velocities.
#
# **Third**, the valve should be chlorine resistant.
# The major component of our chemical is sodium hypochlorite, which hydrolyzes to form hypochlorous acid. However, product information from valve producers seldom mentions chlorine resistance; it usually covers resistance to sea water and hydrochloric acid. Hypochlorous acid is an oxidizing acid, and PVC is commonly known to resist it well, but we found that some inner parts of PVC valves are not chlorine resistant.
#
# On the other hand, metals usually used for valves, such as alloy 400 and brass, are not reported to have good resistance to oxidizing acids. Titanium resists hypochlorous acid but is too expensive. Eventually, after some research, we found that under constant-flow conditions stainless steel can resist long-term chlorine exposure: 316 stainless steel can tolerate up to 5 ppm chlorine, and data available online indicate that both 304 and 316 stainless steel should resist long-term exposure in most chlorinated fresh waters, in agreement with general experience.
#
# **Fourth**, the valve should provide precise flow control. It should have a large enough usable opening range, and within that range the flow and the degree of opening should have a consistent quantitative relationship; ideally the valve creates a near-linear relationship between handle position and flow rate (or some other predictable relationship). This also means the flow should increase at a steady rate as the valve is operated; any surge in the rate would make operation harder. (In our design, the relationship between fractional opening and the concentration after mixing is approximately linear.)
#
# We chose a needle valve, which is usually applied to small systems. According to our current calculations and our stock tank concentration, the target concentration after mixing is 0-4 mg/L, and the needle valve can cover this desirable range of chlorine dosage.
#
# 
# ## <center>Figure 5 Picture of needle valve</center>
#
# This needle valve (SS-OKF2) is produced by Swagelok; its body material is 316 stainless steel, which is reported to have good resistance to long-term chlorine exposure, so we expect it to resist the chlorine in our chemical dosing system. The size of the valve is shown in the CAD drawing below.
#
# 
# ## <center>Figure 6 Layout of needle valve</center>
#
# 2. Reducer
#
# Our calculations assume the minor loss is dominant, so the system needs to generate more minor loss to reduce the error in the calculation. The total head loss of the system is fixed, so we need a higher minor loss coefficient.
# By using a reducer we can connect larger-diameter tubing to the valve; in our current design we connect a 1/4 inch tube to a 1/8 inch valve, which keeps the minor loss coefficient above an acceptable value even when the valve is fully open. (Without a reducer, a fully open valve and the tube would differ only slightly in cross-sectional area, so little minor loss could be generated.)
#
#
# We found a high-pressure 304 stainless steel pipe fitting on McMaster-Carr (https://www.mcmaster.com/). It is a straight reducer, 1/4 x 1/8 NPT female, made of 304 stainless steel. Its dimensions are shown in the drawing attached below.
#
# 
# ## <center>Figure 7 Layout of reducer</center>
# ---
#
# ## Calculation
# +
from aide_design.play import*
#Below are the items that were imported by the code above so that you know what abbreviations to use in your code.
# Third-party imports
#import numpy as np
#import pandas as pd
#import matplotlib.pyplot as plt
#import matplotlib
# AIDE imports
#import aide_design
#import aide_design.pipedatabase as pipe
#from aide_design.units import unit_registry as u
#from aide_design import physchem as pc
#import aide_design.expert_inputs as exp
#import aide_design.materials_database as mat
#import aide_design.utility as ut
#import aide_design.k_value_of_reductions_utility as k
#import aide_design.pipeline_utility as pipeline
#import warnings
# -
# ## Entrance tank parameters
##First we need to design the parameters of the entrance tank. The depth of entrance tank is designed to be 60 cm high so that operators are
##able to monitor the entrance tank
##The target flow rate is designed to be 60L/s, which is the maximum treating flow rate for current design. However, according to our theory,
##The performance of our system won't be affected by entrance tank flow rate.
Q_EntranceTank = 60 * u.L/u.s
vc = 0.62
#To adjust the water elevation around half meter, we designed the entrance tank diameter to be 20cm
D_TankOrifice = 20 * u.cm
A_TankOrifice = np.pi/4*D_TankOrifice**2
##The water elevation is calculated and it is the head loss for the entrance tank flow through tank orifice.
H_WaterElevation = (Q_EntranceTank/(A_TankOrifice*vc))**2/(2*pc.gravity)
print ('The maximum head loss for a maximum flow rate of',Q_EntranceTank,'is',H_WaterElevation.to(u.cm),'with the orifice diameter of',D_TankOrifice)
# ---
#
# ## Chlorine storage tank parameters
##When typing the design code, we found that the chemical feeding rate should be higher to reduce the effect of major loss inside the tube
##Thus, we decide to dilute the chemical solution from previous design
##Once the chemical flow rate is increased, we may not ignore chemical flow rate when calculating the total flow rate after mixing.
##The stock concentration is higher in our design: first, to achieve a wider range of mixing concentrations; second, to reduce
##the flow rate in case the flow turns from laminar to turbulent and the major loss becomes dominant. Since the maximum mixing concentration
##should be 4 mg/L, we use this to constrain the stock concentration
C_Stock = 13.3 * u.g/u.L
C_required = 2 *u.mg/u.L
##When calculating the chemical flow rate required to achieve a given chemical concentration after mixing, we take chemical flow rate
##into consideration
Q_Chemical = Q_EntranceTank*C_required/(C_Stock-C_required)
print(Q_Chemical.to(u.ml/u.s),'of',C_Stock,'solution is needed to achieve',C_required,'chemical concentration after mixing.')
# ---
#
# ## Drain Time
#
# * Current design CDC parameters can be found in:
# http://designserver.cee.cornell.edu/designs/EtFlocSedFi/7667/60Lps/About.html
#This parameter can give us an intuition about the working frequency of the operator
# Assume the tank is of the same size as what used in previous Agua design
V_stank = 450 *u.L
#Compared with current Agua design drain time
Q_Chlor_Agua = 10.6*u.ml/u.s
# The drain time determines working frequency of operators to fill in the storage tank, in previous Agua design, this time is near 12 hours
Time_drain = V_stank/Q_Chemical
Time_drain_Agua = V_stank/Q_Chlor_Agua
print('Operators have to fill the storage tank of current Agua design after',Time_drain_Agua.to(u.hr))
print('Operators have to fill the storage tank of our design system after',Time_drain.to(u.hr))
print('The drain time of our systems is increased, the working frequency for the operators will be reduced by our system')
# ---
#
# ## Valve Orifice Parameters
##The diameter of the valve orifice at that time is calculated below
##We use a 1/8 inch diameter valve and two reducers to connect 1/4 inch tube to the valve.
D_tube = 1/4 * u.inch
A_tube = np.pi*D_tube**2/4
A_ValveOrifice = A_tube*Q_Chemical/(vc*Q_Chemical+Q_EntranceTank*A_tube/A_TankOrifice)
D_ValveOrifice = (4*A_ValveOrifice/np.pi)**0.5
print('The diameter of valve orifice should be',D_ValveOrifice.to(u.inch),'to achieve',C_required,'concentration after mixing with',D_tube,'diameter of dosing tube.')
# ---
#
# ## Function describing the relationship between orifice diameter and chemical concentration
# +
##To further show the relationship between orifice diameter and chemical concentration after mixing, we defined a function for this relationship
##This function based on the last formula in our theoretical basis to calculate the mixing concentration as valve orifice diameter changes
def orifice(D_ValveOrifice, vc, D_tube, D_TankOrifice, C_Stock):
    A_ValveOrifice = np.pi*D_ValveOrifice**2/4
    A_tube = np.pi*D_tube**2/4
    A_Tank = np.pi*D_TankOrifice**2/4
    # Use the locally computed tank orifice area so the D_TankOrifice argument is actually applied
    ratio = A_tube*A_ValveOrifice/(A_Tank*(A_tube-vc*A_ValveOrifice))
    C_mix = C_Stock*ratio/(1+ratio)
    return C_mix
##Using the function above, we plot the relationship between fraction opening and mixing concentration.
##First we design an array of valve orifice diameter from 0 to 1/8 inch and also the fraction opening from 0 to 1
D_orifice_array = np.zeros(101)*u.inch
Fraction_array = np.zeros(101)
for i in range(0,101):
D_orifice_array[i] = 1/8 * u.inch/100*i
Fraction_array[i] = (D_orifice_array[i]/(1/8*u.inch))**2
##And then we calculate the mixing concentration based on these parameters and the function we developed
C_array = np.zeros(101)*u.mg/u.L
for j in range(0,101):
C_array[j] = (orifice(D_orifice_array[j],vc,D_tube,D_TankOrifice,C_Stock)).to(u.mg/u.L)
plt.plot(Fraction_array,C_array)
plt.xlabel('Fraction Opening')
plt.ylabel('Chemical concentration mg/L')
plt.title('Figure 8 Chemical concentration after mixing vs. valve fraction opening')
plt.show()
# -
# ---
#
# ## Assumptions
# **First**, we assume the head loss in the chemical feed line from the constant-head tank to the lever is dominated by the minor loss caused by the valve, and that the minor loss occurs only at the valve (there is no other minor loss along the tube).
#
# **Second**, we assume the vena contracta is the same for both tube and tank.
#
# **Third**, we assume the surface area of the float is large enough to make the submergence depth change of the float negligible.
#
# **Fourth**, we assume the flow is laminar, so that the friction factor will be relatively small compared with turbulent flow
# ---
#
# ## Error Analysis
# * Two variables can affect the fraction of major loss: water elevation in entrance tank and valve orifice area.
##To analyze the reliability of our system, we take the major loss into consideration and design an array of different total head losses to see
##what fraction of the total head loss the major loss takes and how this fraction changes as the total head loss changes
##If our system is reliable, major loss should be less than 10% of total head loss.
H_array = np.zeros(101)*u.m
for i in range(0,101):
H_array[i]=0.6/100*i*u.m
L = 1 *u.m
##We use the orifice diameter that gives a 2 mg/L concentration after mixing for 60 L/s raw water (head loss 0.48 m)
D_valve = (0.09239* u.inch).to(u.m)
vc = 0.62
A_in = vc*np.pi*D_valve**2/4
K_minor = (A_tube/A_in-1)**2
temp = 25 * u.degC
nu = (pc.viscosity_dynamic(temp))
density = (pc.density_water(temp)).to(u.kg/u.m**3)
##Solve a quadratic equation to find the velocity inside the tube, then calculate the major and minor losses in it
a = K_minor/(2*pc.gravity)
b = (32*nu*L/(density*pc.gravity*D_tube**2)).to(u.s)
ans_array = np.zeros(101)
for i in range(0,101):
ans_array[i]= ((-1*b+(b**2+4*a*H_array[i])**0.5)/(2*a)).magnitude
v_array = np.zeros(101)*u.m/u.s
for i in range(0,101):
v_array[i]=ans_array[i]*u.m/u.s
hf_array = np.zeros(101)*u.m
he_array = np.zeros(101)*u.m
ratio = np.zeros(101)
for i in range(0,101):
hf_array[i] = (b*v_array[i]).to(u.m)
he_array[i] = (a*v_array[i]**2).to(u.m)
ratio[i] = hf_array[i]/H_array[i]*100
plt.plot(H_array,ratio)
plt.title('Figure 9 The fraction of major loss')
plt.xlabel('Water elevation in Entrance Tank / m')
plt.ylabel('Major Loss / Total Headloss percentage')
plt.show()
D_tube
##When calculating the major loss in tube, we assume it to be laminar flow, since turbulent flow will have a higher friction factor
##Since we want the major loss to be small relative to the minor loss, we should constrain the flow to be laminar
##After calculating the velocity, we need to check whether the flow is laminar (<2300) or not
re = np.zeros(101)
for i in range (0,101):
re[i] = density * v_array[i]*D_tube/nu
plt.plot(H_array,re)
plt.title('Figure 10 Reynolds Number for flow in tube vs. Water elevation in Entrance Tank')
plt.xlabel('Water elevation in Entrance Tank / m')
plt.ylabel('Reynolds Number')
plt.show()
# +
##To check how much the performance of our system deviates from the theoretical value, we calculate the mixing concentration under different
##water elevations in the entrance tank and compare it to the theoretical value at this fraction opening, which is 2 mg/L
Q_chem_array = np.zeros(101)*u.ml/u.s
Q_tank_array = np.zeros(101)*u.L/u.s
C_array = np.zeros(101)*u.mg/u.L
C_theoretical_array = np.zeros(101)*u.mg/u.L
for i in range(0,101):
Q_tank_array[i] = vc*A_TankOrifice*(2*pc.gravity*H_array[i])**0.5
Q_chem_array[i] = np.pi*D_tube**2*v_array[i]/4
C_array[i] = (C_Stock*Q_chem_array[i]/(Q_tank_array[i]+Q_chem_array[i])).to(u.mg/u.L)
C_theoretical_array[i] = 2 *u.mg/u.L
plt.plot(H_array,C_array)
plt.plot(H_array,C_theoretical_array)
plt.xlabel('Water elevation in Entrance Tank / m')
plt.ylabel('Chemical Concentration after mixing')
plt.title('Figure 11 Comparison between actual mixing concentration and theoretical mixing concentration')
plt.legend(['Actual mixing concentration','Theoretical mixing concentration'],loc='best')
plt.show()
# -
##We increase the valve orifice diameter to see the change in major loss fraction.
H_array = np.zeros(101)*u.m
for i in range(0,101):
H_array[i]=0.6/100*i*u.m
L = 1 *u.m
##We use an orifice diameter that gives a concentration larger than 2 mg/L after mixing for 60 L/s raw water (head loss 0.48 m)
D_valve = (0.12* u.inch).to(u.m)
vc = 0.62
A_in = vc*np.pi*D_valve**2/4
K_minor = (A_tube/A_in-1)**2
temp = 25 * u.degC
nu = (pc.viscosity_dynamic(temp))
density = (pc.density_water(temp)).to(u.kg/u.m**3)
##Solve a quadratic equation to find the velocity inside the tube, then calculate the major and minor losses in it
a = K_minor/(2*pc.gravity)
b = (32*nu*L/(density*pc.gravity*D_tube**2)).to(u.s)
ans_array = np.zeros(101)
for i in range(0,101):
ans_array[i]= ((-1*b+(b**2+4*a*H_array[i])**0.5)/(2*a)).magnitude
v_array = np.zeros(101)*u.m/u.s
for i in range(0,101):
v_array[i]=ans_array[i]*u.m/u.s
hf_array = np.zeros(101)*u.m
he_array = np.zeros(101)*u.m
ratio = np.zeros(101)
for i in range(0,101):
hf_array[i] = (b*v_array[i]).to(u.m)
he_array[i] = (a*v_array[i]**2).to(u.m)
ratio[i] = hf_array[i]/H_array[i]*100
plt.plot(H_array,ratio)
plt.title('Figure 12 The fraction of major loss with increased valve orifice opening.')
plt.xlabel('Water elevation in Entrance Tank / m')
plt.ylabel('Major Loss / Total Headloss percentage')
plt.show()
##We decrease the valve orifice diameter to see the change in major loss fraction.
H_array = np.zeros(101)*u.m
for i in range(0,101):
H_array[i]=0.6/100*i*u.m
L = 1 *u.m
##We use an orifice diameter that gives a concentration smaller than 2 mg/L after mixing for 60 L/s raw water (head loss 0.48 m)
D_valve = (0.08* u.inch).to(u.m)
vc = 0.62
A_in = vc*np.pi*D_valve**2/4
K_minor = (A_tube/A_in-1)**2
temp = 25 * u.degC
nu = (pc.viscosity_dynamic(temp))
density = (pc.density_water(temp)).to(u.kg/u.m**3)
##Solve a quadratic equation to find the velocity inside the tube, then calculate the major and minor losses in it
a = K_minor/(2*pc.gravity)
b = (32*nu*L/(density*pc.gravity*D_tube**2)).to(u.s)
ans_array = np.zeros(101)
for i in range(0,101):
ans_array[i]= ((-1*b+(b**2+4*a*H_array[i])**0.5)/(2*a)).magnitude
v_array = np.zeros(101)*u.m/u.s
for i in range(0,101):
v_array[i]=ans_array[i]*u.m/u.s
hf_array = np.zeros(101)*u.m
he_array = np.zeros(101)*u.m
ratio = np.zeros(101)
for i in range(0,101):
hf_array[i] = (b*v_array[i]).to(u.m)
he_array[i] = (a*v_array[i]**2).to(u.m)
ratio[i] = hf_array[i]/H_array[i]*100
plt.plot(H_array,ratio)
plt.title('Figure 13 The fraction of major loss with decreased valve orifice opening.')
plt.xlabel('Water elevation in Entrance Tank / m')
plt.ylabel('Major Loss / Total Headloss percentage')
plt.show()
A_tube = D_tube**2 * np.pi/4
V_tube = v_array[100]
print(nu)
Re_tube=(V_tube*D_tube/pc.viscosity_kinematic(temp)).to(u.dimensionless)
print(Re_tube)
hf_array
# ---
#
# ## Profits of our system
# Compared to the previous CDC system,
#
# **First**, our system gets rid of the LFOM, the slider, and the long dosing tubes. With a single valve, construction and operation of the system are simpler. Also, the entrance tank flow rate will be more accurate than in the current design using the LFOM.
#
# **Second**, the drain time of our storage tank is near 14 hours, which means operators need to refill it less frequently. This is because we use a more concentrated solution to reduce the chemical flow rate and prevent the flow from changing from laminar to turbulent.
#
# **Third**, our system provides a wider but still appropriate concentration range after mixing. With the parameters we designed, it offers mixing concentrations from 0 to 4 mg/L of chlorine, reaching the upper limit for chlorine dose (4 mg/L), whereas the current AguaClara design only reaches a maximum of 2 mg/L after mixing. And since there is no slider or LFOM to disturb the result, this system should be more reliable than a system driven by major loss.
#
# **Fourth**, the major loss accounts for less than 10% of the total head loss when the system operates at its design flow rate, so the system is quite reliable.
#
# **Fifth**, our system shows a nearly linear relationship between the fractional opening of the valve and the mixing concentration, which makes operation easier.
# ---
#
# ## Constraints of our system
#
# **First**, this new chemical dose controller cannot be designed for plants with a low maximum flow rate, such as 1 L/s. Since the tank height should stay within 1 meter so that operators can easily monitor the raw water inside it, the diameter of the orifice at the bottom of the tank must shrink sharply as the design flow rate decreases. Given the same dosing tube, valve, and stock concentration, a small change in valve opening can then raise the chemical concentration after mixing to around 100 mg/L; otherwise we would have to dilute the stock with more clean water, which decreases the efficiency of the treatment. Thus, the tank should be designed for high raw water flow rates.
#
# **Second**, when the water depth in the tank drops to a low level, we cannot be sure the flow rate still follows the orifice formula we used, since at low water levels part of the water may stay quiescent rather than flow through the bottom orifice; the new system should therefore avoid very low water levels. Moreover, the major loss percentage rises sharply when the raw water level is too low, so the system should avoid treating raw water less than 10 cm deep.
#
# **Third**, since we need laminar flow to keep the major loss small, the Reynolds number also limits the valve orifice diameter: if we make it larger, the Reynolds number at high raw water flow rates exceeds the laminar limit of about 2300. The friction factor then increases sharply as the flow turns from laminar to turbulent, the chemical flow no longer stays proportional to the plant flow rate, and the performance deviates far from our theoretical expectation.
# ---
#
# ## Future Work
# **First**, our system is currently based only on calculations; at minimum, a working prototype of this model has to be built to confirm that our proposal for the new system is correct.
#
# **Second**, this system could be further adjusted to work with a maximum raw water treatment target of 1 L/s.
#
# **Third**, we need further experiments to draw a scale on the valve knob to help the operator adjust the mixing concentration, and to find the range of plant flow rates over which our new design performs more reliably than the current CDC system.
#
# **Fourth**, the current CDC system suffers from blockage due to calcium carbonate formed as calcium hypochlorite reacts with carbon dioxide, and its performance decreases with time. Our design, although it uses 1/4 inch tube to avoid this problem in the tubing, may still suffer similar blockage inside the valve; since the valve opening is often smaller than the dosing tubes currently used, the problem could be more serious. We need experiments to assess the severity of blockage inside the valve and to find a method to solve it.
#
| Previous Final Projects/2017/Chemical Dose Controller Simplification/Team_MIC_Chemical_Dose_Controller_Simplification_Report.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import humanreadable as hr
print("\n[Examples: humanreadable.Time]")
value = "120 sec"
print("'{}' to msecs -> {}".format(value, hr.Time(value).milliseconds))
print("'{}' to minutes -> {}".format(value, hr.Time(value).minutes))
print("\n[Examples: humanreadable.BitPerSecond]")
value = "1 Gbps"
print("'{}' to Mbps -> {}".format(value, hr.BitPerSecond(value).mega_bps))
print("'{}' to Kbps -> {}".format(value, hr.BitPerSecond(value).kilo_bps))
print("'{}' to Kibps -> {}".format(value, hr.BitPerSecond(value).kibi_bps))
# +
import humanreadable as hr
print(hr.Time("1", default_unit=hr.Time.Unit.SECOND))
# +
from pytablewriter import RstGridTableWriter
writer = RstGridTableWriter()
writer.table_name = "Available units for humanreadable.Time"
writer.headers = ["Unit", "Available specifiers (str)"]
value_matrix = []
for key, values in hr.Time.get_text_units().items():
value_matrix.append([key.name, "/".join(["``{}``".format(value) for value in values])])
writer.value_matrix = value_matrix
writer.write_table()
# +
from pytablewriter import RstGridTableWriter
writer = RstGridTableWriter()
writer.table_name = "Available units for humanreadable.BitPerSecond"
writer.headers = ["Unit", "Available specifiers (str)"]
value_matrix = []
for key, values in hr.BitPerSecond.get_text_units().items():
value_matrix.append([key.name, "/".join(["``{}``".format(value) for value in values])])
writer.value_matrix = value_matrix
writer.write_table()
# -
| examples/humanreadable_examples.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: env
# language: python
# name: env
# ---
# <p style="text-align: right;"> ✅ Put your name here</p>
# # <p style="text-align: center;"> Pre-Class Assignment 23: Background for Quantum Computing </p>
# This notebook starts a unit introducing a different model of computation, a model that plays by the rules of <i>quantum</i> physics rather than <i>classical</i> physics.
#
# I hope you're not intimidated! Unfortunately, quantum physics gets a bad rap of being inherently confusing. This is not at all the case! Quantum physics sounds strange due to some weird consequences that we'll see shortly, but it's actually really easy to <i>do</i>, involving some simple mathematics that you have probably seen before. And it doesn't require you to know any classical physics! (If you do, you'll see why quantum physics is a strange theory.)
#
# Quantum computing is a relatively new field that started in the 1980s and 1990s. Due to recent advances in experimental physics and engineering, we have today some of the world's first quantum computers, and the field has received a lot of attention recently. At the end of this unit, you'll have the opportunity to program a quantum computer!
# ## <p style="text-align: center;"> Itinerary for Quantum Computing Unit </p>
# <table align="center" style="width:50%">
# <tr>
# <td style="text-align:center"><b>Assignment</b></td>
# <td style="text-align:center"><b>Topic</b></td>
# <td style="text-align:center"><b>Description</b></td>
# </tr>
# <tr>
# <td bgcolor="yellow" style="text-align:center">Pre Class 23</td>
# <td bgcolor="yellow" style="text-align:center">Background for Quantum Computing</td>
# <td bgcolor="yellow" style="text-align:center">How Computers Store Information</td>
# </tr>
# <tr>
# <td style="text-align:center">In Class 23</td>
#     <td style="text-align:center">Classical and Quantum Bits</td>
# <td style="text-align:center">Information in Quantum States</td>
# </tr>
# <tr>
# <td style="text-align:center">Pre Class 24</td>
# <td style="text-align:center">Software for Quantum Computing</td>
# <td style="text-align:center">High Level Software and the Circuit Model</td>
# </tr>
# <tr>
# <td style="text-align:center">In Class 24</td>
# <td style="text-align:center">Programming Quantum Computers</td>
# <td style="text-align:center">Manipulating Quantum Bits to Perform Useful Computations</td>
# </tr>
# </table>
# ### <p style="text-align: center;"> Before you start... </p>
# Take ten seconds to answer these survey questions:
from IPython.display import HTML
HTML(
"""
<iframe
src="https://goo.gl/forms/aTOqrX354o9n52r92"
width="80%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
"""
)
# ## <p style="text-align: center;"> Learning Goals for Today's Pre-Class Assignment </p>
# By the end of today's pre-class assignment, you should be able to:
#
# 1. Describe how computers store information using binary digits.
# 1. State the fundamental difference between classical and quantum computers in terms of how they store information.
# 1. Review/learn <b><font color="green">complex numbers</font></b>, <b><font color="red">probability</font></b> distributions, and <b><font color="blue">vectors</font></b> to more deeply understand quantum binary digits.
# # <p style="text-align: center;"> How Computers Store Information </p>
# Watch the following video to learn about <b>binary digits</b>, or <b>bits</b>, the fundamental unit of information for all data in a computer.
"""How computers work: binary & data."""
from IPython.display import YouTubeVideo
YouTubeVideo("USCBCmwMCDA", width=640, height=360)
# <b>Question:</b> What are the possible values of a bit?
# <font size=8 color="#009600">✎</font> **Answer:** Erase the contents of this cell and put your answer here!
# The video mentioned that 1001 in binary is equal to 9 in decimal. You should understand how to convert from binary to decimal ($1001$ means $1 \cdot 2^3 + 0 \cdot 2^2 + 0 \cdot 2^1 + 1 \cdot 2^0 = 9$). There's a cool trick for doing this in Python, shown below.
"""Cool trick! Converting from binary to decimal."""
int("1001", 2)
# Here, the first argument to `int` is what gets converted to a number. The second argument to `int` represents the base of the number system to use (binary = base 2). You can change the first argument to get different resulting numbers and test your understanding of binary.
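# If you want to go the other way (decimal to binary), Python's built-in `bin()` function does it, and `int()` accepts other bases as well. (This is an extra illustration, not part of the assignment.)
# +
"""Extra examples of base conversion (illustration only)."""
print(bin(9))         # '0b1001' -- 9 written in binary, with Python's 0b prefix
print(int("ff", 16))  # 255 -- hexadecimal to decimal
print(int("21", 8))   # 17 -- octal to decimal
# -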
# <b>Question:</b> All data on a computer--including text, images, and sound--is stored in bits. Pick one of these (text, images, or sound) and explain how bits are used to represent this information.
# <font size=8 color="#009600">✎</font> **Answer:** Erase the contents of this cell and put your answer here!
# <b>Question:</b> What do we use to physically represent bits in computers?
# <font size=8 color="#009600">✎</font> **Answer:** Erase the contents of this cell and put your answer here!
# # <p style="text-align: center;"> How Quantum Computers Store Information </p>
# Recall the last statement from the above video on bits and data:
#
# <blockquote> <i> <font size="4"> "If you want to understand how computers work on the inside, it all comes down to these simple ones and zeros and the electrical signals in the circuits behind them." </font> </i> </blockquote>
#
# In the same way, if you want to understand how <b>quantum</b> computers work, it all comes down to how information is stored.
#
# <blockquote> <i> <font size="4"> Quantum computers store information in <b>quantum bits</b>, or <b>qubits</b> (pronounced "CUE bits") for short. </font> </i> </blockquote>
#
# Watch the following short video to get introduced to qubits.
"""Introduction to qubits."""
from IPython.display import YouTubeVideo
YouTubeVideo("KBpYK3i3kDs",width=640,height=360)
# # <p style="text-align: center;"> Understanding Qubits: Three Key Concepts </p>
# To understand a qubit, we only have to understand three concepts.
#
# 1. <b><font color="green">Complex numbers.</font></b>
# 1. <b><font color="red">Probability.</font></b>
# 1. <b><font color="blue">Vectors.</font></b>
#
# Watch the next three videos to see each concept in turn, and complete the exercises to test your understanding.
#
# The goal of these concepts is to understand a qubit at a deeper level. Each may seem unrelated, but everything will tie together at the end of the notebook.
"""Imports for the notebook."""
import numpy as np
import matplotlib.pyplot as plt
# ## <p style="text-align: center;"> Concept #1: <font color="green">Complex Numbers</font> </p>
# Watch the following video on complex numbers.
"""Complex numbers."""
from IPython.display import YouTubeVideo
YouTubeVideo("3AmdT0CsLbk",width=640,height=360)
# ### <p style="text-align: center;"> <font color="green">Video Recap</font> </p>
# * The <b><font color="green">imaginary unit</font></b>, which we'll denote $i$, is defined by the property that $i^2 = -1$. (<b>Note:</b> In Python, `j` is used for the imaginary unit.)
#
# * A <b><font color="green">complex number</font></b> has the form
#
# \begin{equation}
# \alpha = a + b i
# \end{equation}
#
# where $a$ and $b$ are real numbers. (The symbol $\alpha$ is the Greek letter alpha. We'll use Greek letters for complex numbers to not confuse them with real numbers.)
#
# * The <b><font color="green">addition of two complex numbers</font></b> is defined by
#
# \begin{equation}
# \alpha + \beta = (a + b i) + (c + d i) := (a + c) + (b + d)i .
# \end{equation}
#
# * We define the <b><font color="green">complex conjugate</font></b> of a complex number $\alpha = a + bi$ to be
#
# \begin{equation}
# \alpha^* := a - bi .
# \end{equation}
#
# (That is, we flip the sign of the imaginary part.)
#
# * The <b><font color="green">modulus squared</font></b> of $\alpha$ is defined to be the product of itself with its complex conjugate:
#
# \begin{equation}
# |\alpha|^2 := \alpha^* \alpha = a^2 + b^2
# \end{equation}
#
# As you might guess, the <b><font color="green">modulus</font></b> is just the square root of the modulus squared.
# ### <p style="text-align: center;"> <font color="green">Exercise: Working with Complex Numbers</font> </p>
# <font size=8 color="#009600">✎</font> **Do this:** Run the cell below to see how to perform some operations on complex numbers in Python.
# +
"""Working with complex numbers in Python."""
# define two complex numbers
alpha = 1 + 2j # note: j is used for the imaginary unit in Python
beta = 3 - 4j
print("alpha =", alpha)
print("beta =", beta)
# print out the type of alpha
print("\ntype(alpha) =", type(alpha))
# print out the real and imaginary part of alpha
print("\nThe real part of alpha is", alpha.real)
print("The imaginary part of alpha is", alpha.imag)
# print out the sum of alpha and beta
print("\nalpha + beta =", alpha + beta)
# print out the complex conjugate of alpha and beta
print("\nalpha* =", alpha.conjugate())
print("beta* =", beta.conjugate())
# -
# <font size=8 color="#009600">✎</font> **Do this:** Write a function called `modulus_squared` that inputs a complex number $\alpha$ and returns its modulus squared $|\alpha|^2 = \alpha^* \alpha$.
#
# <b>Important:</b> Make sure your function returns a `float`, not a `complex` number. You can do this by using the `real` part of the modulus squared.
"""Put code for implementing your function here!"""
def modulus_squared(alpha):
pass
"""ANSWER."""
def modulus_squared(alpha):
# one way
return (alpha.conjugate() * alpha).real
# another way
return abs(alpha)**2
# The next cell contains test cases for your function. If your function is correct, this cell will execute without error. (Note: `assert EXPRESSION` throws an error if the `EXPRESSION` is `False`. Otherwise, nothing happens. For this reason, it's often used to test code.)
"""Test cases: run this cell to ensure your function is correct."""
assert np.isclose(modulus_squared(3+4j), 25.0)
assert np.isclose(modulus_squared(1), 1.0)
assert np.isclose(modulus_squared(1j), 1.0)
assert np.isclose(modulus_squared(-3 - 4j), 25.0)
# ## <p style="text-align: center;"> Concept #2: <font color="red">Probability</font> </p>
# Watch the following video on probability distributions.
"""Probability."""
from IPython.display import YouTubeVideo
YouTubeVideo("rfmmhXzi5lk",width=640,height=360)
# ### <p style="text-align: center;"> <font color="red">Video Recap</font> </p>
# A <b><font color="red">probability distribution</font></b> is a list of numbers $p_1, ..., p_n$ that satisfy the following conditions:
#
# * Each probability is non-negative.
#
# \begin{equation}
# p_i \ge 0
# \end{equation}
#
# * The sum over all probabilites is equal to one.
#
# \begin{equation}
# \sum_{i = 1}^{n} p_i = 1 .
# \end{equation}
# ### <p style="text-align: center;"> <font color="red">Exercise: Working with Probabilities</font> </p>
# **Question:** Could the following list of numbers be a probability distribution? Why or why not?
"""Potential probability distribution."""
distribution = np.array([0.1, 0.3, 0.2, 0.2, 0.1, 0.2])
# <font size=8 color="#009600">✎</font> **Answer:** Erase the contents of this cell and put your answer here!
# **Question:** Write a function, called `is_valid`, that inputs a numpy array and returns `True` if the list of numbers defines a valid probability distribution, else returns `False`.
"""Put code for implementing your function here!"""
def is_valid(array):
pass
"""ANSWER."""
def is_valid(array):
    if any(array < 0) or not np.isclose(np.sum(array), 1.0):  # tolerance for floating-point sums
return False
return True
# Run the next cell to test your function. If your function is correct, no errors should be thrown.
"""Run this cell to test your function."""
assert is_valid(np.array([0.5, 0.3, 0.2]))
assert not is_valid(np.array([0.2, 0.4, 0.2]))
assert not is_valid(np.array([1.0, -1.0, 1.0]))
# ## <p style="text-align: center;"> Concept #3: <font color="blue">Linear Algebra & Vectors </font> </p>
# Watch the following video on vectors.
"""Linear algebra and vectors."""
from IPython.display import YouTubeVideo
YouTubeVideo("klDm1eC1gxg",width=640,height=360)
# ### <p style="text-align: center;"> <font color="blue">Video Recap</font> </p>
# * A <b><font color="blue">vector</font></b> is the formal mathematical term for a list of numbers. (You may understand vectors as objects with size and direction, which is an equally valid definition. For the purposes of quantum computing, it's more convenient to think of vectors as just lists of numbers.)
#
# * An example of a vector is
#
# \begin{equation}
# |0\rangle := \left[ \begin{matrix}
# 1 \\
# 0 \\
# \end{matrix} \right],
# \end{equation}
#
# and another example of a vector is
#
# \begin{equation}
# |1\rangle := \left[ \begin{matrix}
# 0 \\
# 1 \\
# \end{matrix} \right]
# \end{equation}
#
# * The angled-bracket notation $|\rangle$ denotes that an object is a vector. The number inside of the angled brackets is a label for which vector it is. (You'll see why we label the vectors 0 and 1 in the next In Class Assignment. In principle, though, any symbol could be used to label the vector.)
#
# * <font color="blue"><b>Vector addition</b></font> is defined component-wise. For example,
#
# \begin{equation}
# |0\rangle + |1\rangle = \left[ \begin{matrix}
# 1 \\
# 0 \\
# \end{matrix} \right] +
# \left[ \begin{matrix}
# 0 \\
# 1 \\
# \end{matrix} \right]
# =
# \left[ \begin{matrix}
# 1 + 0 \\
# 0 + 1 \\
# \end{matrix} \right]
# =
# \left[ \begin{matrix}
# 1 \\
# 1 \\
# \end{matrix} \right]
# \end{equation}
#
# * We can also take <font color="blue"><b>scalar multiples</b></font> of vectors, for example
#
# \begin{equation}
# \alpha |0\rangle = \alpha \left[ \begin{matrix}
# 1 \\
# 0 \\
# \end{matrix} \right]
# =
# \left[ \begin{matrix}
# \alpha \cdot 1 \\
# \alpha \cdot 0 \\
# \end{matrix} \right]
# =
# \left[ \begin{matrix}
# \alpha \\
# 0 \\
# \end{matrix} \right].
# \end{equation}
#
# In general, we multiply each component of the vector by the number $\alpha$.
#
# * This allows us to write <b>superpositions</b>, which are scalar multiples and sums of vectors. That is, equations of the form
#
# \begin{equation}
# \alpha |0\rangle + \beta |1\rangle
# \end{equation}
#
# * In Python, Numpy arrays handle vector operations for us.
# ### <p style="text-align: center;"> <font color="blue">Exercise: Working with Vectors</font> </p>
# The following cell shows how we use Numpy arrays to work with vectors in Python.
# +
"""Using numpy to perform vector operations."""
# the |0> == zero vector and |1> == one vector from above
zero = np.array([1, 0], dtype=np.complex64)
one = np.array([0, 1], dtype=np.complex64)
# print out the vectors
print("|0> =", zero)
print("|1> =", one)
# some complex numbers
alpha = 0.5 + 0.5j
beta = 1 - 2j
# -
# <font size=8 color="#009600">✎</font> **Do this:** Run the following code cell to see how Numpy arrays handle vector operations for us. Complete the last portion, labeled `TODO`.
# +
"""Run this cell. Complete the last portion."""
# print out the sum of zero and one
print("|0> + |1> =", zero + one)
# compute and print out alpha * |0>
print("alpha |0> =", alpha * zero)
# compute and print out beta * |1>
print("beta |1> =", beta * one)
# TODO: print out the superposition alpha |0> + beta |1>
# -
"""ANSWER."""
# TODO: print out the superposition alpha |0> + beta |1>
print("alpha |0> + beta |1> =", alpha * zero + beta * one)
# <b>Question:</b> Is this output of the cell above what you expect based on the definition of vector addition and scalar multiples of vectors?
# <font size=8 color="#009600">✎</font> **Answer:** Erase the contents of this cell and put your answer here!
# # <p style="text-align: center;"> Tying Together the Concepts </p>
# When we introduced a qubit, we said it could be the state $|0\rangle$, $|1\rangle$, or superpositions of $|0\rangle$ and $|1\rangle$. We can now fully understand this statement.
#
# A <b>superposition</b> is a sum of scalar multiples of vectors. So, the most general state of a <b>qubit</b> can be written
#
# \begin{equation}
# |\psi\rangle = \alpha |0\rangle + \beta |1\rangle
# =
# \left[ \begin{matrix}
# \alpha \\
# \beta \\
# \end{matrix} \right]
# \end{equation}
#
# where $|\alpha|^2 + |\beta|^2 = 1$.
#
# That is, a <b>qubit</b> is a <b><font color="blue">vector</font></b> of <b><font color="green">complex numbers</font></b>. These complex numbers determine the <b><font color="red">probability</font></b> of measuring a particular state, as we'll discuss in upcoming assignments.
#
# Unlike bits, which are only 0 or 1, qubits can exist in superposition states. This is the first idea that there is "more" processing power with qubits (quantum computers) than with bits (classical computers).
#
#
# However, this isn't the entire story. <i>Teaser of what's to come:</i> When we measure a qubit, we record either 0 or 1. The probability of recording 0 depends on $\alpha$, the coefficient of $|0\rangle$. In particular,
#
# \begin{equation}
# p(\text{measuring 0}) = |\alpha|^2
# \end{equation}
#
# Similarly for measuring 1:
#
# \begin{equation}
# p(\text{measuring 1}) = |\beta|^2
# \end{equation}
#
# This is why we require $|\alpha|^2 + |\beta|^2 = 1$ for qubits. The next in-class assignment will explore measurements further and give you more practice working with qubits.
#
# (Brief remark for those interested: A qubit is an example of a wavefunction in quantum physics. A wavefunction is a mathematical description of a quantum system. In the discrete case (like a qubit), it consists of a vector of complex numbers which determine the probability of measuring particular states.)
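# To make this concrete, here is a small sketch (an illustration only, not part of the graded assignment) that builds a valid qubit state as a numpy vector and checks the measurement probabilities described above.
# +
"""Sketch: a qubit as a vector of complex numbers (illustration only)."""
import numpy as np
zero = np.array([1, 0], dtype=np.complex64)
one = np.array([0, 1], dtype=np.complex64)
# choose amplitudes satisfying |alpha|^2 + |beta|^2 = 1
alpha = 1 / np.sqrt(2)
beta = 1j / np.sqrt(2)
psi = alpha * zero + beta * one
p0 = abs(psi[0]) ** 2  # probability of measuring 0
p1 = abs(psi[1]) ** 2  # probability of measuring 1
print("p(0) =", p0, " p(1) =", p1, " sum =", p0 + p1)
# -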
# # <p style="text-align: center;"> Assignment Wrapup </p>
# ## <p style="text-align: center;"> Survey </p>
from IPython.display import HTML
HTML(
"""
<iframe
src="https://goo.gl/forms/n00m87at8mHLAbZN2"
width="80%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
"""
)
# ## <p style="text-align: center;"> Congrats, You're Finished! </p>
# Now, you just need to submit this assignment by uploading it to the course <a href="https://d2l.msu.edu/">Desire2Learn</a> web page for today's submission folder. (Don't forget to add your name in the first cell.)
# <p style="text-align: right;"><b>© Copyright 2019, Michigan State University Board of Trustees.</b></p>
| quantum/Day-23_Pre-Class_QuantumComputingBackground-INSTRUCTOR.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Wide Format (Untidy)
# The **wide format** (or the **untidy format**) is a matrix where each row is an individual, and each column is an observation. In this case, the heatmap makes a visual representation of the matrix: each square of the heatmap represents a cell. The color of the cell changes according to its value.
# In order to draw a heatmap with a wide format dataset, you can use the `heatmap()` function of seaborn.
# +
# library
import seaborn as sns
import pandas as pd
import numpy as np
# Create a dataset
df = pd.DataFrame(np.random.random((5,5)), columns=["a","b","c","d","e"])
# Default heatmap: just a visualization of this square matrix
sns.heatmap(df)
# -
# ## Correlation Matrix (Square)
# Suppose you measured **several variables** for **n individuals**. A common task is to check whether some variables are **correlated**. You can easily calculate the correlation between each pair of variables and plot it as a **heatmap**, which lets you discover which variables are related to each other.
#
# Unlike the previous example, you pass a correlation matrix as input instead of wide-format data.
# +
# library
import seaborn as sns
import pandas as pd
import numpy as np
# Create a dataset
df = pd.DataFrame(np.random.random((100,5)), columns=["a","b","c","d","e"])
# Calculate correlation between each pair of variable
corr_matrix=df.corr()
# plot it
sns.heatmap(corr_matrix, cmap='PuOr')
# -
# Note that in this case, both correlations (i.e. from a to b and from b to a) will appear in the heatmap. You might want to plot only one half of the heatmap using the `mask` argument, as in this example:
# +
# library
import seaborn as sns
import pandas as pd
import numpy as np
np.random.seed(0)
# Create a dataset
df = pd.DataFrame(np.random.random((100,5)), columns=["a","b","c","d","e"])
# Calculate correlation between each pair of variable
corr_matrix=df.corr()
# Can be great to plot only a half matrix
# Generate a mask for the upper triangle
mask = np.zeros_like(corr_matrix)
mask[np.triu_indices_from(mask)] = True
# Draw the heatmap with the mask
sns.heatmap(corr_matrix, mask=mask, square=True)
# -
# ## Long Format (Tidy)
# In the **tidy** or **long** format, each line represents an observation. You have 3 columns: individual, variable name, and value (x, y and z). You can plot a heatmap from this kind of data as follow:
# +
# library
import seaborn as sns
import pandas as pd
import numpy as np
# Create long format
people = np.repeat(("A","B","C","D","E"),5)
feature = list(range(1,6))*5
value = np.random.random(25)
df = pd.DataFrame({'feature': feature, 'people': people, 'value': value })
# Turn long format into a wide format
df_wide = df.pivot_table( index='people', columns='feature', values='value')
# plot it
sns.heatmap(df_wide)
| src/notebooks/90-heatmaps-with-various-input-format.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Interactive maps in an IPython notebook?
# with Folium (Leaflet.js + OpenStreetMap)
#
# http://python-visualization.github.io/folium/
# https://github.com/python-visualization/folium
# ## Getting started
import folium
# list of available tile layers:
# http://python-visualization.github.io/folium/module/map.html#tilelayer
# +
map_osm = folium.Map(location=[45.0136, 6.6750], zoom_start=12 )
# , tiles='Stamen Terrain')
folium.map.Marker(location=[45.0136, 6.675], popup='Portland, OR',
icon=folium.Icon(color='blue', icon_color='white',icon='fa-globe', prefix='fa') ).add_to(map_osm)
lines = [[45.0136, 6.675], [45.0238, 6.66588]]
folium.features.PolyLine(lines).add_to(map_osm)
# -
map_osm
| test_map_with_folium.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <center>
# <img src="http://image.yes24.com/goods/87039632/800x0" width="200" height="200"><br>
# </center>
#
#
# - Github: [yoonkt200](https://github.com/yoonkt200/python-data-analysis)
#
# # Chapter 02: First Steps in Text Mining
#
# - Key concepts of this chapter
#     - Collect data by web crawling
#     - Learn methods for extracting keywords
#     - Analyze the relationships between keywords
#     - Visualize the results of the text analysis
#
# ### 2.1 Collecting base data by web crawling
#
# In this section we collect the text data of the Namuwiki "Recent Changes" pages by web crawling and then analyze how often the keywords appearing in the data occur. This lets us find out which keywords are currently the "hottest" on Namuwiki. Web crawling (also called web scraping) means visiting web pages on the internet and automatically collecting the material on those pages. Here we will do the crawling with Python.
#
# #### Examining the structure of the target page
#
# The first step of crawling is to launch a web browser such as Internet Explorer or Chrome and examine the structure of the page to be crawled.
# - First, open the browser's 'developer tools' (shortcut: Ctrl + Shift + I).
# - Collect the URL information of the list.
#
# #### Using web-crawling libraries
#
# In Python you can build a web crawler with the BeautifulSoup and requests libraries. requests fetches an HTML document from a given URL, and the BeautifulSoup module extracts data from that HTML document. Before using these modules, open a terminal (cmd) or the Anaconda prompt and install the three Python modules below.
#
# +
# #!pip3 install lxml beautifulsoup4 requests
# # !pip3 install lxml
# # !pip3 install bs4
# # !pip3 install html5lib
# +
# -*- coding: utf-8 -*-
# %matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
import requests
from bs4 import BeautifulSoup
import re
import lxml
# -
# In the following code, the requests.get() function fetches the HTML document at the URL, which is then converted into a soup object of the BeautifulSoup() class. The find() and find_all() functions are then used to retrieve the data belonging to a particular HTML tag or a particular HTML class.
# - Getting the page list
# +
# Define the URL of the site to crawl
source_url = "https://namu.wiki/RecentChanges"
# Crawl based on the site's HTML structure.
req = requests.get(source_url)
html = req.content
soup = BeautifulSoup(html, 'html.parser')
contents_table = soup.find(name="table")
table_body = contents_table.find(name="tbody")
table_rows = table_body.find_all(name="tr")
# Extract the href attribute of each a tag into a list to build the list of pages to crawl.
page_url_base = "https://namu.wiki"
page_urls = []
for index in range(0, len(table_rows)):
first_td = table_rows[index].find_all('td')[0]
td_url = first_td.find_all('a')
if len(td_url) > 0:
page_url = page_url_base + td_url[0].get('href')
page_urls.append(page_url)
# Remove duplicate URLs.
page_urls = list(set(page_urls))
for page in page_urls[:5]:
print(page)
# -
# The code above narrows down the HTML hierarchy in the order table -> tbody -> tr -> td -> a, based on the structure we examined with the developer tools. Once the target tag is reached, get('href') extracts the tag's attribute value. The get() function extracts a particular attribute held by the tag.
#
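# As a minimal, self-contained sketch (separate from the book's crawling code), the difference between get() and the text attribute of a tag looks like this:
# +
from bs4 import BeautifulSoup
sample_html = '<a href="/w/Example">Example page</a>'
tag = BeautifulSoup(sample_html, 'html.parser').find('a')
print(tag.get('href'))  # extracts the value of the href attribute: /w/Example
print(tag.text)         # extracts only the text inside the tag: Example page
# -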
# ### Collecting the text information
#
# As before, click the pointer icon in the developer tools and then click the 'title', the 'category', and the 'body' of a document to inspect the HTML structure. The whole document is contained in an 'article' tag. The title lives in an 'h1' tag, the category part inside a 'ul' tag, and the body inside a 'div' tag with the class 'wiki-paragraph'. The following code crawls the text of one of the recently changed pages. The difference from the previous step is that the text attribute is used instead of the get() function, so only the text of each tag is extracted.
# - Crawling based on the page URL information
# +
# Crawl one of the recently changed documents.
req = requests.get(page_urls[0])
html = req.content
soup = BeautifulSoup(html, 'html.parser')
contents_table = soup.find(name="article")
title = contents_table.find_all('h1')[0]
category = contents_table.find_all('ul')[0]
content_paragraphs = contents_table.find_all(name="div", attrs={"class":"wiki-paragraph"})
content_corpus_list = []
# Print the crawled document information
for paragraphs in content_paragraphs:
content_corpus_list.append(paragraphs.text)
content_corpus = "".join(content_corpus_list)
print(title.text)
print("\n")
print(category.text)
print("\n")
print(content_corpus)
# -
# ### 2.2 Analyzing keywords of the Namuwiki "Recent Changes" pages
#
# Now that the data for the analysis is ready, let's get into text mining properly.
#
# #### Step 1 Crawling: fetching the web data
#
# Crawl the web data in the same way as in the previous step, but this time fetch the data of every URL. Run the following code to store the URLs of the recently changed Namuwiki pages in a variable called page_urls.
#
# - Web crawling with BeautifulSoup
# +
# -*- coding:utf-8 -*-
# %matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import requests
from bs4 import BeautifulSoup
import re
# +
# Define the URL of the site to crawl
source_url = "https://namu.wiki/RecentChanges"
# Crawl based on the site's HTML structure.
req = requests.get(source_url)
html = req.content
soup = BeautifulSoup(html, 'html.parser')
contents_table = soup.find(name="table")
table_body = contents_table.find(name="tbody")
table_rows = table_body.find_all(name="tr")
# +
# Extract the href attribute of each a tag into a list to build the list of pages to crawl.
page_url_base = "https://namu.wiki"
page_urls = []
for index in range(0, len(table_rows)):
first_td = table_rows[index].find_all('td')[0]
td_url = first_td.find_all('a')
if len(td_url) > 0:
page_url = page_url_base + td_url[0].get('href')
if 'png' not in page_url:
page_urls.append(page_url)
# Remove duplicate URLs
page_urls = list(set(page_urls))
# -
# Now let's access these addresses once more and fetch the text data appearing in the body, title, and category of each document.
# - Crawling the Namuwiki recent-changes data
# +
# Prepare to turn the crawled data into a data frame.
columns = ['title', 'category', 'content_text']
df = pd.DataFrame(columns=columns)
# Build a data frame with the title, category, and body of each page.
for page_url in page_urls:
    # Crawl based on the site's HTML structure.
req = requests.get(page_url)
html = req.content
soup = BeautifulSoup(html, 'html.parser')
contents_table = soup.find(name="article")
title = contents_table.find_all('h1')[0]
    # Handle the case where there is no category information.
if len(contents_table.find_all('ul')) > 0:
category = contents_table.find_all('ul')[0]
else:
category = None
content_paragraphs = contents_table.find_all(name="div", attrs={"class":"wiki-paragraph"})
content_corpus_list = []
    # Extract the page title after removing newline characters; if it is missing, substitute an empty string.
if title is not None:
row_title = title.text.replace("\n", " ")
else:
row_title = ""
    # Extract the page body after removing newline characters; if it is missing, substitute an empty string.
if content_paragraphs is not None:
for paragraphs in content_paragraphs:
if paragraphs is not None:
content_corpus_list.append(paragraphs.text.replace("\n", " "))
else:
content_corpus_list.append("")
else:
content_corpus_list.append("")
    # Extract the page category after removing the word "분류" (category) and newline characters; if it is missing, substitute an empty string.
if category is not None:
row_category = category.text.replace("\n", " ")
else:
row_category = ""
    # Store all the information in a single data frame.
row = [row_title, row_category, "".join(content_corpus_list)]
series = pd.Series(row, index=df.columns)
df = df.append(series, ignore_index=True)
# -
df.head()
# The result above shows the text data fetched from every URL and converted into a data frame. The unnecessary strings '\n' and '분류' that appear in the data are removed with the replace() function during the crawling step.
#
# ### Step 2 Extraction: extracting keyword information
#
# The next step is to extract keyword information from the collected text data, which requires **text preprocessing**. Text preprocessing includes steps such as removing special characters and foreign-language characters, and the details vary by language and situation. For example, in text mining for spam-mail classification, special characters or foreign words can be important hints for the analysis, so they are usually kept. On the other hand, when the goal is to extract 'words', as in keyword analysis, often only the characters of one specific language are kept.
#
# In Python, regular expressions are available through the 're' module. A regular expression is a way of describing a set of strings that follow a particular pattern. If you define a regular expression for Hangul with re.compile('[^ ㄱ-ㅣ 가-힣]+'), as in the code below, you can extract only the Korean characters from the target text data.
#
# - Preprocessing the text data
#
# Text-cleaning function: remove every character that is not Hangul.
def text_cleaning(text):
    hangul = re.compile('[^ ㄱ-ㅣ 가-힣]+')  # regular expression matching Hangul characters
result = hangul.sub('',text)
return result
print(text_cleaning(df['content_text'][0]))
# To apply the preprocessing to all of the data, use the apply() function. The following code applies apply() to the three features title, category, and content_text. Printing the result with head() shows that every character other than Hangul has been removed.
# - Applying the preprocessing to all of the data
# Apply the data preprocessing to each feature.
df['title'] = df['title'].apply(lambda x: text_cleaning(x))
df['category'] = df['category'].apply(lambda x: text_cleaning(x))
df['content_text'] = df['content_text'].apply(lambda x: text_cleaning(x))
df.head()
# The next step is to extract keywords and run a frequency analysis on them. What does extracting keywords mean here? In a narrow sense it means extracting strings at the level of **nouns or morphemes**. To do this we first build a **corpus**, which is literally a big bundle of text data. In this example we create three corpora (a title corpus, a category corpus, and a body corpus) so that keywords can be analyzed at the title, category, and body level. In the code below each text feature is pulled out with tolist() and joined into a corpus with join(). The output shown is the title corpus.
# - Building the corpora
# Build a corpus for each feature.
title_corpus = "".join(df['title'].tolist())
category_corpus = "".join(df['category'].tolist())
content_corpus = "".join(df['content_text'].tolist())
print(title_corpus)
# - Source: 이것이 데이터 분석이다 with 파이썬 (This Is Data Analysis with Python)
| Chapter_2.1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: dan_traff
# language: python
# name: dan_traff
# ---
# # Transfer Learning on a network, where roads are clustered into classes
import time
import math
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import ipdb
import os
import tensorflow as tf
from tensorflow.keras.models import load_model, Model
from tensorflow.keras import backend as K
import tensorflow.keras as keras
from tensorflow.keras.layers import Layer
import dan_models
import dan_utils
from sklearn.manifold import TSNE
physical_devices = tf.config.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)
tf.compat.v1.enable_eager_execution()
tf.executing_eagerly()
# # Load data
class_set = [2, 3, 4]
randseed = 25
res = 11
v, v_class, id_402, part1, part2, seg, det_list_class, near_road_set \
= dan_utils.load_data(class_set, res, randseed)
class_color_set = ['b', 'g', 'y', 'black', 'r']
# +
region = 4
try:
v_class[region].insert(2, 'lat', None)
v_class[region].insert(3, 'long', None)
except:
None
for i in range(len(v_class[region])):
id_ = v_class[region].iloc[i, 0]
lat = id_402.loc[id_402['id']==id_, 'lat'].values[0]
long = id_402.loc[id_402['id']==id_, 'long'].values[0]
v_class[region].iloc[i, 2] = lat
v_class[region].iloc[i, 3] = long
v_class[region].to_csv('../data/region_data/q_reg_full_%i.csv'%region)
# -
# ### Visulization
def plot_dets(det_list_class_i, if_save):
for i in range(len(id_402)):
det_id = id_402.loc[i, 'id']
cls_402 = id_402.loc[i, 'class_i']
try:
cls_det = part1[part1['det'] == det_id]['0'].values[0]
if cls_402 != cls_det:
part1.loc[part1['det'] == det_id, '0'] = cls_402
print(i)
except:
cls_det = part2[part2['det'] == det_id]['0'].values[0]
if cls_402 != cls_det:
part2.loc[part2['det'] == det_id, '0'] = cls_402
print(i)
fig = plt.figure(figsize=[40, 15], dpi=75)
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
for i in range(len(det_list_class_i)):
det_id = det_list_class_i[i]
x = id_402.loc[id_402['id']==det_id, 'lat']
y = id_402.loc[id_402['id']==det_id, 'long']
# ipdb.set_trace()
if det_id in part1['det'].values:
ax1.plot(x, y, marker='+', color='red', markersize=10, markerfacecolor='none')
ax1.text(x-0.005, y, det_id, rotation=45)
elif det_id in part2['det'].values:
ax2.plot(x, y, marker='o', color='red', markersize=10, markerfacecolor='none')
ax2.text(x-0.005, y, det_id, rotation=45)
plt.show()
if if_save:
fig.savefig('../network_classification/img/%i_res%i_class_%i.png'%(randseed, res, class_i_))
print(1)
plt.close()
return
# ind, class
# 0 , blue
# 1 , green
# 2 , yellow <--
# 3 , black <--
# 4 , red <--
class_i_ = 2
plot_dets(det_list_class[class_i_], if_save=0)
# ## Evaluation of 2 datasets
def get_NSk(set1, set2):
# designated for v_class1 and 2
set1_v_mean = set1.iloc[:, 2:-1].T.mean().T
set2_v_mean = set2.iloc[:, 2:-1].T.mean().T
var1 = set1_v_mean.std()**2
var2 = set2_v_mean.std()**2
u1 = set1_v_mean.mean()
u2 = set2_v_mean.mean()
return 2*var1 / (var1 + var2 + (u1 - u2)**2)
# +
NSk_set = np.array([])
for i in class_set:
for j in class_set:
if i!=j:
NSk = get_NSk(v_class[i], v_class[j])
NSk_set = np.append(NSk_set, NSk)
print(NSk_set.mean())
# -
# # The original source code follows (training)
# # Input classes here
# +
# ind, class
# 0 , blue
# 1 , green
# 2 , yellow <--
# 3 , black <--
# 4 , red <--
class_src = 2
v_class1 = v_class[class_src] # source
near_road1 = np.array(near_road_set[class_src])
class_tar = 4
v_class2 = v_class[class_tar] # target
near_road2 = np.array(near_road_set[class_tar])
num_links = v_class1.shape[0]
# +
near_road_src = near_road1
flow_src = v_class1.iloc[:, 2:-1]
prop = 1 # proportion of training data
from_day = 1
to_day = 24
image_train_source, image_test_source, day_train_source, day_test_source, label_train_source, label_test_source\
= dan_utils.sliding_window(
flow_src, near_road_src, from_day, to_day, prop, num_links
)
# +
near_road_tar = near_road2
flow_tar = v_class2.iloc[:, 2:-1]
prop = 3/10
from_day = 22
to_day = 31
image_train_target, image_test_target, day_train_target, day_test_target, label_train_target, label_test_target\
= dan_utils.sliding_window(
flow_tar, near_road_tar, from_day, to_day, prop, num_links
)
dup_mul = image_train_source.shape[0]//image_train_target.shape[0]
dup_r = image_train_source.shape[0]%image_train_target.shape[0]
image_train_target, day_train_target, label_train_target = \
np.concatenate((np.tile(image_train_target, [dup_mul, 1, 1, 1]), image_train_target[:dup_r, :, :, :]), axis=0),\
np.concatenate((np.tile(day_train_target, [dup_mul, 1, 1]), day_train_target[:dup_r, :, :]), axis=0),\
np.concatenate((np.tile(label_train_target, [dup_mul, 1, 1]), label_train_target[:dup_r, :, :]), axis=0),
# -
print(image_train_target.shape)
print(image_test_target.shape)
print(day_train_target.shape)
print(day_test_target.shape)
print(label_train_target.shape)
print(label_test_target.shape)
# + tags=["outputPrepend"]
t_input = image_train_source.shape[2]
t_pre = label_train_source.shape[2]
k = image_train_source.shape[1]
# Build the model
input_data = keras.Input(shape=(k,t_input,num_links), name='input_data')
input_HA = keras.Input(shape=(num_links, t_pre), name='input_HA')
finish_model = dan_models.build_model(input_data, input_HA)
# -
# Load the pretrained weights
finish_model.load_weights('../model/source_%s.h5'%class_color_set[class_src])
# Model prediction
model_pre = finish_model.predict([image_test_target, day_test_target])
# Save the prediction results (intermediate-layer data)
dan_utils.save_np(model_pre.reshape(model_pre.shape[0], -1), '../model/middle_res/%i_res%i_modelpre_%s_%s.csv'%(randseed, res, class_color_set[class_src], class_color_set[class_tar]))
# +
# Prediction accuracy of transfer without fine-tuning
m = 5
nrmse_mean = dan_utils.nrmse_loss_func(model_pre, label_test_target, m)
mape_mean = dan_utils.mape_loss_func(model_pre, label_test_target, m)
smape_mean = dan_utils.smape_loss_func(model_pre, label_test_target, m)
mae_mean = dan_utils.mae_loss_func(model_pre, label_test_target, m)
print('nrmse = ' + str(nrmse_mean) + '\n' + 'mape = ' + str(mape_mean) + '\n' + 'smape = ' + str(smape_mean) + '\n' + 'mae = ' + str(mae_mean))
# +
import scipy.stats
def norm_data(data):
min_ = min(data)
max_ = max(data)
    normalized_data = (data - min_) / (max_ - min_)  # min-max normalization: scale data into [0, 1]
return normalized_data
def js_divergence(set1, set2):
p = np.array(set1.iloc[:, 2:-1].T.mean().T)
q = np.array(set2.iloc[:, 2:-1].T.mean().T)
M=(p+q)/2
return 0.5*scipy.stats.entropy(p, M)+0.5*scipy.stats.entropy(q, M)
# return scipy.stats.entropy(p, q) # kl divergence
# -
def get_img_num():
return len(next(iter(os.walk('../model/dan_tsne_img_middle_res/')))[2])
def save_tsne_data(source, target):
N = get_img_num()/2 + 1
ipdb.set_trace()
np.savetxt('source.csv', source, delimiter=',')
np.savetxt('target.csv', target, delimiter=',')
def get_tsne_fig(source, target):
ipdb.set_trace()
pca_tsne = TSNE(n_components=2, random_state=25)
Xs_2D_1 = pca_tsne.fit_transform(source)
Xt_2D_1 = pca_tsne.fit_transform(target)
Xs_2D_1_df = pd.DataFrame(Xs_2D_1, columns=['x1', 'x2'])
Xs_2D_1_df['$X_S/X_T$'] = '$X_S$'
Xt_2D_1_df = pd.DataFrame(Xt_2D_1, columns=['x1', 'x2'])
Xt_2D_1_df['$X_S/X_T$'] = '$X_T$'
X_1 = pd.concat([Xs_2D_1_df, Xt_2D_1_df], axis=0)
X_1.index = range(len(X_1))
fig1 = sns.jointplot(data=X_1, x="x1", y='x2', hue="$X_S/X_T$", kind="kde", levels=5)
fig2 = sns.jointplot(data=X_1, x="x1", y='x2', hue="$X_S/X_T$")
N = get_img_num()/2 + 1
fig1.savefig('../model/dan_tsne_img_middle_res/%i_res%i_countour_%s_%s_shape1=%i_%i.png'\
%(randseed, res, class_color_set[class_src], class_color_set[class_tar], source.shape[1], N))
fig2.savefig('../model/dan_tsne_img_middle_res/%i_res%i_scatter_%s_%s_shape1=%i_%i.png'\
%(randseed, res, class_color_set[class_src], class_color_set[class_tar], target.shape[1], N))
# +
def cal_L2_dist(total):
# ipdb.set_trace()
total_cpu = total
len_ = total_cpu.shape[0]
L2_distance = np.zeros([len_, len_])
for i in range(total_cpu.shape[1]):
total0 = np.broadcast_to(np.expand_dims(total_cpu[:, i], axis=0), (int(total_cpu.shape[0]), int(total_cpu.shape[0])))
total1 = np.broadcast_to(np.expand_dims(total_cpu[:, i], axis=1), (int(total_cpu.shape[0]), int(total_cpu.shape[0])))
# total0 = total_cpu[:, i].unsqueeze(0).expand(int(total_cpu.size(0)), int(total_cpu.size(0)))
# total1 = total_cpu[:, i].unsqueeze(1).expand(int(total_cpu.size(0)), int(total_cpu.size(0)))
L2_dist = (total0 - total1)**2
L2_distance += L2_dist
# ipdb.set_trace()
return L2_distance
def guassian_kernel(source, target, kernel_mul=2.0, kernel_num=5, fix_sigma=None):
#source = source.cpu()
#target = target.cpu()
# ipdb.set_trace()
n_samples = int(source.shape[0]*source.shape[1])+int(target.shape[0]*target.shape[1]) # number of samples
total = np.concatenate([source, target], axis=0)
L2_distance = cal_L2_dist(total)
if fix_sigma:
bandwidth = fix_sigma
else:
        bandwidth = np.sum(L2_distance.data) / (n_samples**2-n_samples)  # possible source of problems
bandwidth /= kernel_mul ** (kernel_num // 2)
bandwidth_list = [bandwidth * (kernel_mul**i) for i in range(kernel_num)]
kernel_val = [np.exp(-L2_distance / bandwidth_temp) for bandwidth_temp in bandwidth_list]
return sum(kernel_val) #/len(kernel_val)
def mmd_rbf_accelerate(source, target, kernel_mul=2.0, kernel_num=5, fix_sigma=None):
# ipdb.set_trace()
print(source.shape)
print(target.shape)
batch_size = int(source.size)
kernels = guassian_kernel(source, target,
kernel_mul=kernel_mul, kernel_num=kernel_num, fix_sigma=fix_sigma)
loss = 0
for i in range(batch_size):
s1, s2 = i, (i+1) % batch_size
t1, t2 = s1 + batch_size, s2 + batch_size
loss += kernels[s1, s2] + kernels[t1, t2]
loss -= kernels[s1, t2] + kernels[s2, t1]
# ipdb.set_trace()
return loss / float(batch_size)
def mmd_rbf_noaccelerate(source, target, kernel_mul=2.0, kernel_num=5, fix_sigma=None):
# ipdb.set_trace()
# save_tsne_data(source, target)
batch_size = int(source.shape[0]) # ?
kernels = guassian_kernel(source, target,
kernel_mul=kernel_mul, kernel_num=kernel_num, fix_sigma=fix_sigma)
XX = kernels[:batch_size, :batch_size]
YY = kernels[batch_size:, batch_size:]
XY = kernels[:batch_size, batch_size:]
YX = kernels[batch_size:, :batch_size]
# ipdb.set_trace()
loss = np.mean(XX + YY - XY - YX)
return loss
# +
middle1 = Model(inputs=[input_data, input_HA], outputs=finish_model.get_layer('dense_1').output)
middle2 = Model(inputs=[input_data, input_HA], outputs=finish_model.get_layer('dense_2').output)
middle_result_source1 = middle1([image_train_source, day_train_source])
middle_result_target1 = middle1([image_train_target, day_train_target])
middle_result_source2 = middle2([image_train_source, day_train_source])
middle_result_target2 = middle2([image_train_target, day_train_target])
# save intermidiate results
# dan_utils.save_np(middle_result_source1, '../model/middle_res/%i_res%i_middle_result_source1_%s_%s.csv'\
# %(randseed, res, class_color_set[class_src], class_color_set[class_tar]))
# dan_utils.save_np(middle_result_target1, '../model/middle_res/%i_res%i_middle_result_target1_%s_%s.csv'\
# %(randseed, res, class_color_set[class_src], class_color_set[class_tar]))
# dan_utils.save_np(middle_result_source2, '../model/middle_res/%i_res%i_middle_result_source2_%s_%s.csv'\
# %(randseed, res, class_color_set[class_src], class_color_set[class_tar]))
# dan_utils.save_np(middle_result_target2, '../model/middle_res/%i_res%i_middle_result_target2_%s_%s.csv'\
# %(randseed, res, class_color_set[class_src], class_color_set[class_tar]))
def new_loss(output_final, label_train_target):
lamb = js_divergence(v_class1.iloc[:, 2:-1], v_class2.iloc[:, 2:-1])
# lamb = 0
loss0 = K.mean(K.square(output_final - label_train_target), axis=-1)
# ipdb.set_trace()
loss1 = mmd_rbf_noaccelerate(middle_result_source1, middle_result_target1)
loss2 = mmd_rbf_noaccelerate(middle_result_source2, middle_result_target2)
# loss2 = lamb * ( mmd(middle_result_source1, middle_result_target1) + mmd(middle_result_source2, middle_result_target2) )
# loss2 = 0.001 * mmd(middle_result_source2, middle_result_target2)
# print('Lambda is %.4f'%lamb)
print(middle_result_source1.shape)
print(middle_result_target1.shape)
overall_loss = loss0 + lamb* (loss1 + loss2)
return overall_loss
# -
finish_model.compile(optimizer='adam', loss=new_loss)
# +
# middle_result_source1 = middle1([image_train_source, day_train_source])
# middle_result_target1 = middle1([image_train_target, day_train_target])
# get_tsne_fig(middle_result_source1, middle_result_target1)
# + tags=["outputPrepend"]
finish_model.fit([image_train_target, day_train_target], label_train_target, epochs=300, batch_size=4620,
validation_data=([image_test_target,day_test_target], label_test_target))
# -
model_pre = finish_model.predict([image_test_target, day_test_target])
# +
# Prediction accuracy of transfer with DAN
nrmse_mean = dan_utils.nrmse_loss_func(model_pre, label_test_target, m)
mape_mean = dan_utils.mape_loss_func(model_pre, label_test_target, m)
smape_mean = dan_utils.smape_loss_func(model_pre, label_test_target, m)
mae_mean = dan_utils.mae_loss_func(model_pre, label_test_target, m)
print('nrmse = ' + str(nrmse_mean) + '\n' + 'mape = ' + str(mape_mean) + '\n' + 'smape = ' + str(smape_mean) + '\n' + 'mae = ' + str(mae_mean))
# -
# Save the model
finish_model.save_weights('../model/transfer_DAN_%s_%s_mape=%.5f_nrmse=%.5f.h5'%(class_color_set[class_src], class_color_set[class_tar], mape_mean, nrmse_mean))
# +
mape_list = []
for i in range(num_links):
a1 = dan_utils.mape_loss_func(model_pre[:,i,:], label_test_target[:,i,:], m)
mape_list.append(a1)
mape_pd = pd.Series(mape_list)
mape_pd.sort_values()
# -
plt.plot(model_pre[:, 0, 0])
plt.plot(label_test_target[:, 0, 0])
mape_set = []
for i in range(25):
for j in range(3):
plt.figure()
plt.plot(model_pre[:, i, j])
plt.plot(label_test_target[:, i, j])
mape = dan_utils.mape_loss_func(model_pre[:, i, j], label_test_target[:, i, j], m)
mape_set.append(mape)
plt.title('%i%i,mape=%.3f'%(i, j, mape))
| learning_DAN/transfer_DAN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#default_exp tables
# +
#export
import json
from ssda_nlp.utility import *
# +
#no_test
with open("volume_records/166470.json", encoding="utf-8") as jsonfile:
data = json.load(jsonfile)
# +
#no_test
male = 0
female = 0
unsure = 0
for person in data["people"]:
is_owner = False
if person["relationships"] != None:
for relationship in person["relationships"]:
if relationship["relationship_type"] == "enslaver":
is_owner = True
if is_owner:
continue
first_name = person["name"].split(' ')[0]
if determine_sex(first_name) == "male":
male += 1
elif determine_sex(first_name) == "female":
female += 1
else:
unsure += 1
with open("gender_no_owners.csv", 'w', encoding="utf-8") as outfile:
outfile.write("male,female,unsure\n")
outfile.write(str(male) + ',' + str(female) + ',' + str(unsure))
# +
#no_test
ages = ["unsure"]
counts = [0]
for person in data["people"]:
if person["age"] == None:
counts[0] += 1
elif person["age"] in ages:
counts[ages.index(person["age"])] += 1
else:
ages.append(person["age"])
counts.append(1)
with open("ages.csv", 'w', encoding="utf-8") as outfile:
for age in ages:
if ages.index(age) == (len(ages) - 1):
outfile.write(age + '\n')
else:
outfile.write(age + ',')
for i in range(len(counts)):
if i == (len(counts) - 1):
outfile.write(str(counts[i]))
else:
outfile.write(str(counts[i]) + ',')
# +
#no_test
ethnonyms = ["unsure"]
counts = [0]
for person in data["people"]:
if person["ethnicities"] == None:
counts[0] += 1
elif person["ethnicities"] in ethnonyms:
counts[ethnonyms.index(person["ethnicities"])] += 1
else:
ethnonyms.append(person["ethnicities"])
counts.append(1)
with open("ethnonyms.csv", 'w', encoding="utf-8") as outfile:
for ethnonym in ethnonyms:
if ethnonyms.index(ethnonym) == (len(ethnonyms) - 1):
outfile.write(ethnonym + '\n')
else:
outfile.write(ethnonym + ',')
for i in range(len(counts)):
if i == (len(counts) - 1):
outfile.write(str(counts[i]))
else:
outfile.write(str(counts[i]) + ',')
# +
#no_test
origins = ["unsure"]
counts = [0]
for person in data["people"]:
if person["origin"] == None:
counts[0] += 1
elif person["origin"] in origins:
counts[origins.index(person["origin"])] += 1
else:
origins.append(person["origin"])
counts.append(1)
with open("origins.csv", 'w', encoding="utf-8") as outfile:
for i in range(len(origins)):
for place in data["places"]:
if place["id"] == origins[i]:
origins[i] = place["location"]
break
if i == (len(origins) - 1):
outfile.write('"' + origins[i] + '"\n')
else:
outfile.write('"' + origins[i] + '",')
for i in range(len(counts)):
if i == (len(counts) - 1):
outfile.write(str(counts[i]))
else:
outfile.write(str(counts[i]) + ',')
# +
#no_test
godparents = 0
no_godparents = 0
for event in data["events"]:
if event["type"] == "birth":
continue
has_godparent = False
for person in data["people"]:
if person["id"] == event["principal"]:
if person["relationships"] == None:
break
for relationship in person["relationships"]:
if relationship["relationship_type"] == "godparent":
has_godparent = True
break
break
if has_godparent:
godparents += 1
else:
no_godparents += 1
with open("godparents.csv", 'w', encoding="utf-8") as outfile:
outfile.write("godparents,no godparents\n")
outfile.write(str(godparents) + ',' + str(no_godparents))
# +
#no_test
enslaved = 0
unsure = 0
for person in data["people"]:
if person["status"] == "enslaved":
enslaved += 1
else:
unsure += 1
with open("status.csv", 'w', encoding="utf-8") as outfile:
outfile.write("enslaved,unsure\n")
outfile.write(str(enslaved) + ',' + str(unsure))
# -
| 73-table-output.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/konung-yaropolk/abf_passive_param/blob/main/Passive_Param.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="TnE5w05Vp_LZ"
# !pip install pyabf
# + id="OVONehmggeJD"
# #!/usr/bin/env python3
# To run script install libraries using command:
# pip install pyabf
import numpy as np
import matplotlib.pyplot as plt
import pyabf
import pyabf.tools.memtest
from statistics import mean
from math import sqrt
# The files to be processed must be uploaded to the project folder
# List of file names without the extension, quoted, separated by commas
FILE_LIST = [
'filename_1',
'filename_2',
'filename_3',
]
SHOW_STATS = True
SHOW_GRAPH = True
def main(filename):
print('\n\n' + '-' * 70, '\n')
    # Catch the error if a file is missing
try:
        # Open the abf file
abf = pyabf.ABF(filename + '.abf')
memtest = pyabf.tools.memtest.Memtest(abf)
except ValueError:
print(filename + '.abf','not found!\n\n')
else:
print(filename+'.abf\n\n')
if SHOW_STATS:
print('Average on', abf.sweepCount,'sweeps:\n')
print('Ra, MOhm: ', round(mean(memtest.Ra.values), 2))
print('Rm, MOhm: ', round(mean(memtest.Rm.values), 2))
print('Cm, pF: ', round(mean(memtest.CmStep.values), 2))
print('Ih, pA: ', round(mean(memtest.Ih.values), 2))
print('\n\nStandard error mean on', abf.sweepCount,'sweeps:\n')
print('Ra: ', round(np.std(memtest.Ra.values) /sqrt(abf.sweepCount), 2))
print('Rm: ', round(np.std(memtest.Rm.values) /sqrt(abf.sweepCount), 2))
print('Cm: ', round(np.std(memtest.CmStep.values) /sqrt(abf.sweepCount),2))
print('Ih: ', round(np.std(memtest.Ih.values) /sqrt(abf.sweepCount), 2))
print('\n\n')
if SHOW_GRAPH:
            # Create a new figure
fig = plt.figure(figsize=(8, 6))
            # Plot the access resistance (Ra) values
ax3 = fig.add_subplot(221)
ax3.grid(alpha=.2)
ax3.plot(list(range(1, abf.sweepCount +1)), memtest.Ra.values,
".", color='black', alpha=.7, mew=0)
ax3.set_title(memtest.Ra.name)
ax3.set_ylabel(memtest.Ra.units)
            # Plot the membrane resistance (Rm) values
ax2 = fig.add_subplot(222)
ax2.grid(alpha=.2)
ax2.plot(list(range(1, abf.sweepCount +1)), memtest.Rm.values,
".", color='black', alpha=.7, mew=0)
ax2.set_title(memtest.Rm.name)
ax2.set_ylabel(memtest.Rm.units)
            # Plot the membrane capacitance (Cm) values
ax4 = fig.add_subplot(223)
ax4.grid(alpha=.2)
ax4.plot(list(range(1, abf.sweepCount +1)), memtest.CmStep.values,
".", color='black', alpha=.7, mew=0)
ax4.set_title(memtest.CmStep.name)
ax4.set_ylabel(memtest.CmStep.units)
            # Plot the holding current (Ih) values
ax1 = fig.add_subplot(224)
ax1.grid(alpha=.2)
ax1.plot(list(range(1, abf.sweepCount +1)), memtest.Ih.values,
".", color='black', alpha=.7, mew=0)
ax1.set_title(memtest.Ih.name)
ax1.set_ylabel(memtest.Ih.units)
            # Label the x axis of each subplot
for ax in [ax1, ax2, ax3, ax4]:
ax.margins(0, .9)
ax.set_xlabel("Sweep number")
for tagTime in abf.tagTimesMin:
ax.axvline(tagTime, color='k', ls='--')
            # Show the figure
plt.tight_layout()
fig.patch.set_facecolor('white')
plt.suptitle(filename+'.abf')
plt.show()
print('\n\n\n')
for filename in FILE_LIST:
main(filename)
| Passive_Param.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Project
import math
import itertools
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import networkx as nx
from scipy import sparse
import scipy.sparse.linalg
from pyunlocbox import functions, solvers
# ## <NAME>
# +
credits = pd.read_csv('../../data/tmdb_5000_credits.csv')
credits = credits[credits.cast != '[]']
movies = pd.read_csv('../../data/tmdb_5000_movies.csv')
movies.drop(['homepage', 'keywords','original_language','overview','release_date','spoken_languages', \
'status','title','tagline','vote_count'\
], \
axis=1, \
inplace=True \
)
# -
credits.drop(['title', 'crew'], axis=1, inplace=True)
credits['cast_id'] = credits['cast'].apply(lambda row: list(set(pd.read_json(row)['id'])))
#credits['cast_name'] = credits['cast'].apply(lambda row: list(set(pd.read_json(row)['name'])))
#credits['gender'] = credits['cast'].apply(lambda row: list(set(pd.read_json(row)['gender'])))
# +
frames = pd.DataFrame()
new_df = pd.DataFrame()
for idx, film in credits.iterrows():
cast_df = pd.DataFrame(eval(credits['cast'][idx]))
cast_df['credits'] = idx
cast_df = cast_df.drop(['character','order', 'credit_id', 'cast_id'],axis = 1)
frames = [new_df, cast_df]
new_df = pd.concat(frames, join = 'outer', ignore_index=True)
# -
discount_old = credits['cast_id'].apply(pd.Series).stack().value_counts()
discount_old = list(discount_old[discount_old > 4].index.astype(int))
#discount_old[:10]
nodes_df = new_df['credits'].groupby([new_df.gender, new_df.id, new_df.name]).apply(list).reset_index()
nodes_df = nodes_df[nodes_df['gender'].isin(['1','2'])]
discount_1 = nodes_df['id'].tolist()
discount = [x for x in discount_old if x in discount_1]
#nodes_df = nodes_df[nodes_df.id.isin(discount)]
#nodes_df.drop(columns=['credits'], inplace=True)
#nodes_df = nodes_df[nodes_df['gender'].isin(['1','2'])]
print('Old Values of the Discount')
print(discount_old[:10])
print(len(discount_old))
print('New Values of the Discount')
print(discount[:10])
print(len(discount))
# +
credits['cast_id'] = credits['cast_id'].apply(lambda x: [y for y in x if y in discount])
credits['edges'] = credits['cast_id'].apply(lambda x: list(itertools.combinations(x, 2)))
edges = list(credits['edges'].apply(pd.Series).stack())
edges[0:5]
edges_df = pd.DataFrame(edges)
# -
#Normally the number of edges was:
print('Normally the number of edges was:')
print(edges_df)
edges_df = edges_df.merge(nodes_df, left_on = 0, right_on='id', how='inner').drop(columns=['name','credits'])
edges_df = edges_df.merge(nodes_df, left_on = 1, right_on='id', how='inner').drop(columns=['name','credits'])
edges_df.head()
# flag edges whose two endpoints share the same gender
edges_df['same_gender'] = (edges_df['gender_x'] == edges_df['gender_y']).astype(int)
edges_df = edges_df.drop(columns=['gender_x','id_x','gender_y','id_y'])
edges_df =edges_df[edges_df['same_gender'] == 1]
edges_df = edges_df.drop(columns=['same_gender'])
edges_df = edges_df.reset_index(drop=True)
len(edges_df)
edges_df.head()
# +
discarded_movies = set()
for idx, movie in credits.iterrows():
if len(movie['edges']) == 0:
discarded_movies.add(movie['movie_id'])
print(len(discarded_movies))
# -
credits = credits[~credits['movie_id'].isin(discarded_movies)]
credits.head()
movies['profit'] = movies['revenue']-movies['budget']
movies_credits = movies.merge(credits, left_on='id', right_on='movie_id', how='inner').drop(columns=['movie_id'])
movies_credits = movies_credits[movies_credits.genres != '[]']
movies_credits['genre_id'] = movies_credits['genres'].apply(lambda row: list(pd.read_json(row)['id']))
movies_credits['genre_name'] = movies_credits['genres'].apply(lambda row: list(pd.read_json(row)['name']))
genre = movies_credits[['cast_id', 'genre_id', 'genre_name']]
genre.loc[:, 'genre_id_disc'] = genre['genre_id'].apply(lambda x: x[0])
genre.loc[:, 'genre_name_disc'] = genre['genre_name'].apply(lambda x: x[0])
genre_df = pd.DataFrame(genre.cast_id.tolist(), index=genre.genre_name_disc).stack().reset_index(name='cast_id')[['cast_id','genre_name_disc']]
most_freq_genre = genre_df.groupby(['cast_id']).agg(lambda x:x.value_counts().index[0])
profit_df = pd.DataFrame(movies_credits.cast_id.tolist(), index=movies_credits.profit).stack().reset_index(name='cast_id')[['cast_id','profit']]
profit_df['cast_id'] = profit_df.cast_id.astype(int)
profit_df = profit_df.groupby('cast_id', as_index=False).mean()
profit_df.set_index('cast_id', inplace=True)
profit_df.head()
profit_df = ((profit_df['profit']/(10**7)).round(0))*(10**7)
profit_df = profit_df.to_frame()
ranking_df = pd.DataFrame(movies_credits.cast_id.tolist(), index=movies_credits.vote_average).stack().reset_index(name='cast_id')[['cast_id','vote_average']]
ranking_df['cast_id'] = ranking_df.cast_id.astype(int)
ranking_df = ranking_df.groupby('cast_id', as_index=False).mean()
ranking_df.set_index('cast_id', inplace=True)
ranking_df.head()
ranking_df = round(ranking_df['vote_average'] * 2) / 2
ranking_df = ranking_df.to_frame()
actors = ranking_df.merge(most_freq_genre, on='cast_id', how='inner')
actors = actors.merge(profit_df, on='cast_id', how='inner')
actors = actors.reset_index()
actors.head()
#nodes_df = new_df['credits'].groupby([new_df.gender, new_df.id, new_df.name]).apply(list).reset_index()
nodes_df = nodes_df[nodes_df.id.isin(discount)]
nodes_df.drop(columns=['credits'], inplace=True)
#nodes_df = nodes_df[nodes_df['gender'].isin(['1','2'])]
actors = actors.merge(nodes_df, left_on = 'cast_id', right_on='id', how='inner').drop(columns=['cast_id'])
actors[actors['name']=='<NAME>']
actors.sort_values(by='profit', ascending=False)
# +
#features = nodes_df.set_index('id').drop('name', axis=1)
#features.head()
# -
discount_df = pd.DataFrame(discount)
features = discount_df.merge(actors, left_on = 0, right_on='id', how='inner').drop(columns=[0])
features.head()
# ## Doing the Adjacency again
# Because we took out some genders and our size went from 3766 to 3500
edges = edges_df.values.tolist()
len(edges)
# +
adj = pd.DataFrame(np.zeros(shape=(len(discount),len(discount))), columns=discount, index=discount)
# keep only edges whose endpoints are both in `discount`; building a new list avoids
# removing items from `edges` while iterating over it (which skips elements)
kept_edges = []
for e1, e2 in edges:
    if e1 in discount and e2 in discount:
        adj.at[e1, e2] += 1
        adj.at[e2, e1] += 1
        kept_edges.append([e1, e2])
edges = kept_edges
adj.head()
# +
#One outlier: it's <NAME>, ID=90596, index number 3415
# +
adjacency = adj.values
adj_max = adjacency.max()
adjacency = adjacency / adj_max   # normalize edge weights to [0, 1]
adjacency = pd.DataFrame(adjacency)
# -
adjacency.head()
# If we need a non-weighted (binary) adjacency matrix:
adjacency_non_weighted = np.copy(adjacency)
adjacency_non_weighted[adjacency_non_weighted > 0] = 1
adjacency_non_weighted = np.asmatrix(adjacency_non_weighted)
graph = nx.from_numpy_array(adjacency_non_weighted)
node_props = features.to_dict()
for key in node_props:
nx.set_node_attributes(graph, node_props[key], key)
graph.nodes[0]
nx.draw_spring(graph)
nx.write_gexf(graph, 'CoAppGenderAdjGephiFile.gexf')
adjacency.to_csv("CoAppGenderAdjacency.csv")
features.to_csv("CoAppGenderFeatures.csv")
edges_df.to_csv("CoAppGenderEdges.csv")
| adjacencies/CoAppearance-gender/Adj-CoApp-gender.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# If we want to do the same thing again and again, we use a function so we don't have to write the same code repeatedly. Once we define a function we can use it in multiple places just by writing its name, so it is easier to code and avoids rewriting the same logic.
# One more thing: when we call a function, Python runs all the code inside it, and as long as we keep calling it, it behaves the same way every time. Let's see an example:
def thing():
print('chair')
print('table')
thing()
print('window')
thing()
print('bed')
thing()
print('wall')
big = max('my name is <NAME>')
print(big)
tiny = min('this is my computer')
print(tiny)
#the minimum character is the space, so the answer is a space
#we can convert int to float and float to int by using the built-in functions float() and int()
x = float(45)/100
print(x)
i = 343
type(i)
f = float(i)
print(f)
type(f)
x = 3+6 * float(54)/ 9 - 43
print(x)
type(x)
# We can also convert a string to a float or an int. As we know, we can't multiply, add, or subtract a string with a number, like 'haris and jawad' + 3; that is nonsense and returns nothing except an error, so we have to convert the string to an int or a float first. Let's go with an example:
# One more thing: input() is a function, but it always gives us a string, so we have to convert the result to the type we need.
sval = '455'
type(sval)
ival = int(sval)
print(ival + 4)
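# For example (just a small illustration of the input() point above; the prompt text is made up):
num = input('enter a number: ')   # input() always returns a string, e.g. '7'
num = int(num)                    # convert it to an int before doing math
print(num + 3)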
# +
x = 68
print('haris')
def mani():
print('i live in rawalpindi and im studying in federal urdu university')
print ('jawad')
mani()
x = x - 3
print (x)
# -
# An argument is a value we pass into a function as its input when we call the function.
# We use arguments so we can direct the function to do different kinds of work when we call it at different times.
big = max('im haris')
print (big)
# A parameter is a variable which we use in the function definition. It is a "handle" that allows the code in the function to access the arguments for a particular function invocation.
# +
def words(latters):
if latters == ('hr'):
print('mani')
elif latters == ('hs'):
print('<NAME>')
else:
print('random numbers')
words('ks')
words('hr')
words('hs')
words('hhf')
# -
# Code inside a function does not run until we call the function.
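# For example (greet is just a small made-up function to show this):
def greet():
    print('hello')   # this line does not run yet; Python only remembers it
greet()   # now the function is called, so 'hello' is printed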
# The return statement does two things: 1) it stops the function and jumps back to where it was called, and 2) it gives back a value as the result.
def prectice():
return "hay"
print (prectice(),'haris')
print (prectice(), 'mani')
# +
def word(ltr):
if ltr == 'xy':
return "oh"
elif ltr == 'ab':
return "how are you"
else:
return "i am joking"
print(word('mani'), 'haris')
print(word('xy'), 'jawad')
print(word('sb'), 'kashif')
print(word('ab'), 'fadi')
# -
def addtwo(a, b):
added = a+b
return added
x = addtwo(4,9)
print(x)
| Code/3 Functions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] nbgrader={}
# # Interact Exercise 5
# + [markdown] nbgrader={}
# ## Imports
# + [markdown] nbgrader={}
# Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
# + deletable=false nbgrader={"checksum": "6cff4e8e53b15273846c3aecaea84a3d", "solution": true}
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# -
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display, SVG
# + [markdown] nbgrader={}
# ## Interact with SVG display
# + [markdown] nbgrader={}
# [SVG](http://en.wikipedia.org/wiki/Scalable_Vector_Graphics) is a simple way of drawing vector graphics in the browser. Here is a simple example of how SVG can be used to draw a circle in the Notebook:
# + nbgrader={}
s = """
<svg width="100" height="100">
<circle cx="50" cy="50" r="20" fill="aquamarine" />
</svg>
"""
# + nbgrader={}
SVG(s)
# + [markdown] nbgrader={}
# Write a function named `draw_circle` that draws a circle using SVG. Your function should take the parameters of the circle as function arguments and have defaults as shown. You will have to write the raw SVG code as a Python string and then use the `IPython.display.SVG` object and `IPython.display.display` function.
# + nbgrader={"checksum": "ff346dfaabec3ce8812bb0d03cf3951b", "solution": true}
def draw_circle(width=100, height=100, cx=25, cy=25, r=5, fill='red'):
"""Draw an SVG circle.
Parameters
----------
width : int
The width of the svg drawing area in px.
height : int
The height of the svg drawing area in px.
cx : int
The x position of the center of the circle in px.
cy : int
The y position of the center of the circle in px.
r : int
The radius of the circle in px.
fill : str
The fill color of the circle.
"""
# YOUR CODE HERE
p = """
<svg width="%d" height="%d">
<circle cx="%d" cy="%d" r="%d" fill="%s" />
</svg>
"""
svg = p % (width, height, cx, cy, r, fill)
display(SVG(svg))
# + nbgrader={}
draw_circle(cx=10, cy=10, r=10, fill='blue')
# + deletable=false nbgrader={"checksum": "6d760b87a2567cb9b9c7a9e2825cacfa", "grade": true, "grade_id": "interactex05a", "points": 4}
assert True # leave this to grade the draw_circle function
# + [markdown] nbgrader={}
# Use `interactive` to build a user interface for exploring the `draw_circle` function:
#
# * `width`: a fixed value of 300px
# * `height`: a fixed value of 300px
# * `cx`/`cy`: a slider in the range [0,300]
# * `r`: a slider in the range [0,50]
# * `fill`: a text area in which you can type a color's name
#
# Save the return value of `interactive` to a variable named `w`.
# + deletable=false nbgrader={"checksum": "6cff4e8e53b15273846c3aecaea84a3d", "solution": true}
# YOUR CODE HERE
w = interactive(draw_circle, width=fixed(300), height=fixed(300), cx = (0,300), cy=(0,300), r = (0,50), fill= 'red');
# -
w.children[0].min
# + deletable=false nbgrader={"checksum": "5993721946f31406b1b7aac42ddd5ce4", "grade": true, "grade_id": "interactex05b", "points": 4}
c = w.children
assert c[0].min==0 and c[0].max==300
assert c[1].min==0 and c[1].max==300
assert c[2].min==0 and c[2].max==50
assert c[3].value=='red'
# + [markdown] nbgrader={}
# Use the `display` function to show the widgets created by `interactive`:
# + deletable=false nbgrader={"checksum": "6cff4e8e53b15273846c3aecaea84a3d", "solution": true}
# YOUR CODE HERE
w
# + deletable=false nbgrader={"checksum": "eeb509517655f5e40f0bbf0ae8705e72", "grade": true, "grade_id": "interactex05c", "points": 2}
assert True # leave this to grade the display of the widget
# + [markdown] nbgrader={}
# Play with the sliders to change the circles parameters interactively.
| assignments/assignment06/InteractEx05.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # CHARACTER LEVEL GENERATION
#
# In this notebook we build a character-level RNN using Gated Recurrent Units (GRU).
# For subject matter we chose President Trump's tweets, because we felt his style was distinctive. We also train our model on Shakespeare's corpus of text. With this project we wanted two deliverables.
#
# 1. To create our own RNN text generation model using either LSTM or GRU frameworks.
# 2. To build a webpage that would display a real tweet next to a fake one, allowing the user to guess which one was real.
#
# This notebook is focused on building the model.
#
# +
import pandas as pd
import numpy as np
import tensorflow as tf
import random
import sys
import pickle
import csv
import os
import matplotlib.pyplot as plt
import re
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM, RNN, Softmax, Flatten, Dropout, Input
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.callbacks import LambdaCallback
from tensorflow.keras.models import load_model
from sklearn.model_selection import train_test_split
tf.enable_eager_execution()
# +
# data = pd.read_csv("../Load_Tweets/data/tweet_data.csv") # this will break if this file is moved!
# data.head()
data = pd.read_csv("../Load_Tweets/data/Full Trump Archive.csv")
data.head()
# +
# data['TEXT'][100]
data['text'][100]
# -
data['text'].dtypes
# +
# data['TEXT'].apply(lambda x: len(x)).describe()
data['text'].count()
# +
# Put all the tweets into one string
# tweet_txt = data['TEXT'][:].str.cat(sep=' ')
tweet_txt = data['text'][:].str.cat(sep=' ')
print('{} : total characters in our dataset'.format(len(tweet_txt)))
# +
# Get all the unique characters used, and make a character mapping.
# Here we set Global Variables that are used throughout the code.
# with open('../Load_Tweets/data/ArtOfTheDeal.txt') as f:
# book_txt = f.read()
# tweet_txt = tweet_txt + book_txt
# path_to_file = tf.keras.utils.get_file(
# 'shakespeare.txt',
# 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
# # Read, then decode for py2 compat.
# tweet_txt = open(path_to_file, 'rb').read().decode(encoding='utf-8')
chars = list(set(tweet_txt))
chars.sort()
char_to_index = dict((c, i) for i, c in enumerate(chars))
index_to_char = np.array(chars)
print("Number of unique characters: ", len(chars))
maxlen = 30 # 141 was the original choice, because the average length of a tweet in our data is 141 characters.
# -
tweet_int = np.array([char_to_index[char] for char in tweet_txt])
tweet_int[:20]
seq_length = 100
examples_per_epoch = len(tweet_txt)//seq_length
char_dataset = tf.data.Dataset.from_tensor_slices(tweet_int)
for i in char_dataset.take(5):
print(index_to_char[i.numpy()])
# +
sequences = char_dataset.batch(seq_length+1, drop_remainder=True)
for item in sequences.take(5):
print(repr(''.join(index_to_char[item.numpy()])))
# +
# Here we actually build the training data.
def split_input_target(chunk):
input_text = chunk[:-1]
target_text = chunk[1:]
return input_text, target_text
dataset = sequences.map(split_input_target)
# -
for input_example, target_example in dataset.take(1):
print ('Input data: ', repr(''.join(index_to_char[input_example.numpy()])))
print ('Target data:', repr(''.join(index_to_char[target_example.numpy()])))
# +
# Batch size
BATCH_SIZE = 64
steps_per_epoch = examples_per_epoch//BATCH_SIZE
# Buffer size
BUFFER_SIZE = 10000
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
dataset
# +
# Here we build the model using the Keras Sequential API.
if tf.test.is_gpu_available():
rnn = tf.keras.layers.CuDNNGRU
print("We are on the GPU!!!")
else:
import functools
rnn = functools.partial(
tf.keras.layers.GRU, recurrent_activation='sigmoid')
# tf.keras.layers.LSTM
def build_model(vocab_size, embedding_dim, rnn_units, batch_size):
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim,
batch_input_shape=[batch_size, None]),
rnn(rnn_units,
return_sequences=True,
recurrent_initializer='glorot_uniform',
# bias_regularizer=tf.keras.regularizers.l1(l=0.01),
stateful=True),
tf.keras.layers.Dropout(rate=0.2, noise_shape=(batch_size, 1, rnn_units)),
rnn(rnn_units,
return_sequences=True,
recurrent_initializer='glorot_uniform',
# bias_regularizer=tf.keras.regularizers.l1(l=0.01),
stateful=True),
tf.keras.layers.Dense(vocab_size)
])
return model
# +
vocab_size = len(chars)
embedding_dim = 256
rnn_units = 1024
batch_size=BATCH_SIZE
model = build_model(
vocab_size = vocab_size,
embedding_dim=embedding_dim,
rnn_units=rnn_units,
batch_size=batch_size)
model.summary()
# +
def loss(labels, logits):
return tf.keras.losses.sparse_categorical_crossentropy(labels, logits, from_logits=True)
model.compile(
optimizer = tf.train.AdamOptimizer(),
loss = loss
)
# -
# +
# Directory where the checkpoints will be saved
checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
checkpoint_callback=tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_prefix,
save_weights_only=True)
EPOCHS = 30
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0.0001, patience=5, restore_best_weights=True)
history = model.fit(
dataset.repeat(),
validation_data=dataset,
validation_steps=30,
epochs=EPOCHS,
steps_per_epoch=steps_per_epoch,
callbacks=[checkpoint_callback])
# +
# model tests based on 15 epochs.
# First try model: 3 GRU layers, 2 dropouts, third GRU has 2^9 nodes, down from 2^10 in between.
# Loss: 1.1043
# Second try model: 2 GRU layers, 1 dropout in between.
# Loss: 0.9172 on 50 epochs early stopped at 38 with a loss of 0.8059
# Third try model: 3 GRU layers, 2 dropouts, in between.
# Loss: 1.0251
# First try model: 3 GRU layers, 1 dropouts, third GRU has 2^9 nodes, down from 2^10 in between.
# Loss: 0.9321 after 50 epochs we got a loss of 0.8640
# +
model_g = build_model(vocab_size, embedding_dim, rnn_units, batch_size=1)
model_g.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
model_g.build(tf.TensorShape([1, None]))
model_g.summary()
# +
model_g = build_model(vocab_size, embedding_dim, rnn_units, batch_size=1)
model_g.load_weights('../Saved_models/third_model_weights.h5')
model_g.build(tf.TensorShape([1, None]))
model_g.summary()
# -
def generate_text(model, start_string, length):
# Evaluation step (generating text using the learned model)
# Number of characters to generate
num_generate = length
# Converting our start string to numbers (vectorizing)
input_eval = [char_to_index[s] for s in start_string]
input_eval = tf.expand_dims(input_eval, 0)
# Empty string to store our results
text_generated = []
# Low temperatures results in more predictable text.
# Higher temperatures results in more surprising text.
# Experiment to find the best setting.
temperature = 0.95
# Here batch size == 1
model.reset_states()
for i in range(num_generate):
predictions = model(input_eval)
# remove the batch dimension
predictions = tf.squeeze(predictions, 0)
        # using a multinomial distribution to predict the character returned by the model
predictions = predictions / temperature
predicted_id = tf.multinomial(predictions, num_samples=1)[-1,0].numpy()
        # We pass the predicted character as the next input to the model
        # along with the previous hidden state
input_eval = tf.expand_dims([predicted_id], 0)
text_generated.append(index_to_char[predicted_id])
return (start_string + ''.join(text_generated))
# +
l = 500
print(generate_text(model_g, start_string=u" ", length=l))
# +
l = 200
print(generate_text(model_g,
start_string=og_tweets['TEXT'][14][:30].replace("\n", ""),
length=l))
# -
# #### """ HERE I AM GENERATING A LIST OF TWEETS THAT END APPROPRIATELY. """
og_tweets = pd.read_csv("../Load_Tweets/data/original_tweets.csv")
og_tweets.head()
og_tweets['TEXT'][0][:100].replace("\n", "")
# +
# make fake tweets
# +
# make the fake tweets look good
# +
# store everything and pickle it
# +
#
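# A minimal sketch of how the stub cells above could be filled in (the variable
# names and output file here are assumptions, not part of the original notebook):
# generate candidate tweets with `generate_text`, trim each one at its last
# sentence-ending punctuation so it ends cleanly, then pickle the list.
# +
fake_tweets = []
for _ in range(5):
    raw = generate_text(model_g, start_string=u" ", length=280)
    cut = max(raw.rfind('.'), raw.rfind('!'), raw.rfind('?'))   # last sentence boundary
    fake_tweets.append(raw[:cut + 1] if cut > 0 else raw)
with open('fake_tweets.pkl', 'wb') as f:   # hypothetical output path
    pickle.dump(fake_tweets, f)
# -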
# +
# Here we save the model
model.save('../Saved_models/third_model.h5')
# -
model.save_weights('../Saved_models/third_model_weights.h5')
""" HERE I AM DOING SOME MODEL TESTING """
model = load_model('../Saved_models/first_char_model.h5')
cross_entropy_loss, accuracy = model.evaluate(X, y, batch_size=128)
df = pd.read_csv('../Load_Tweets/data/original_tweets.csv')
# +
# building out support for real URLs; also need to update model.data to accommodate the new model.
link_txt = df['TEXT'][:].str.cat(sep=' ')
re.findall("(https\S*)", link_txt)
# -
| Model/Jackson_model-2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # [LEGALST-190] Lab 3/13: Parsing XML Data
# This lab will cover parsing XML and attribute lookup, XPath, and web scraping.
#
# *Estimated Time: 45 Minutes*
#
# ### Topics Covered:
# - XML syntax
# - locating content with XPATH
# - Web scraping
#
# ### Table of Contents
# [The Data](#section data)<br>
# 1 - [XML Syntax](#section 1)<br>
# 2 - [Using XPath and ElementTree to parse XML](#section 2)<br>
# 3 - [Web Scraping](#section 3)<br>
# 4 - [Putting it all in a dataframe](#section 4)<br>
#
# **Dependencies:**
import pandas as pd
import xml.etree.cElementTree as ET #XML Parser
from lxml import etree #ElementTree and lxml allow us to parse the XML file.
import requests #make request to server
import time #pause loop
# ----
# ## The Data<a id='section data'></a>
#
# In this notebook, you'll be working with XML files from the Old Bailey API (https://www.oldbaileyonline.org/obapi/). These files contain the proceedings of all trials from 1674 to 1913. For this lab, we'll go through the trials from 1754-1756 and 1824-1826. XML (eXtensible Markup Language) provides a hierarchical representation of data contained within different tags and nodes. We'll go over XML syntax later. We will learn how to parse through these XML files from Old Bailey and grab information from sections of an XML file.
#
# ---
# ## Section 1: XML Syntax<a id='section 1'></a>
#
# First, we'll go over the syntax of a XML file. The basic unit of XML code is called an "element" or "node" and has a start and ending tag. The tags for each element look something like this:
#
# <p style="text-align: center;"> `<exampletag>some text</exampletag>` </p>
#
# Run the next cell to look at the XML file of one of the cases from the OldBailey API!
# Don't worry about the code for now, we'll go through it later.
example = requests.get('https://www.oldbaileyonline.org/obapi/text?div=t17031013-13')
print(example.text)
# The `interp` tags at the beginning of the file are elements that don't have any plain text content. Note that elements may possibly be empty and not contain any text (i.e. `interp` elements mentioned earlier). If the element is empty, the tag may follow a format that looks similar to `<exampletag/>`, which is equivalent to `<exampletag></exampletag>`.
#
# Elements may also contain other elements, which we call "children". Most children are indented, but the indents aren't necessary in XML and are used for clarity to show nesting. For example, if we go down to `<persName id="t17540116-4-defend46" type="defendantName">` , we see that the `rs` tag is a child of `persName`. We will explore about children in XML more in the next section.
#
# Lastly, elements may have attributes, which are in the format `<exampletag name_of_attribute="somevalue">`. Attributes are designed to store data related to a specific elements. Attributes **must** follow the quotes format (`name = "value"`). As you can tell, in this XML file, attributes are everywhere!
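# To make these pieces concrete, here is a tiny made-up XML snippet (not from Old Bailey), parsed with the `ET` module imported above; it shows an attribute, a child element, and plain text content.
toy = ET.fromstring('<trial year="1754"><persName>Jane Doe</persName></trial>')
print(toy.tag)                     # trial
print(toy.attrib['year'])          # 1754 -- an attribute
print(toy.find('persName').text)   # Jane Doe -- text content of a child element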
# -----
# **Question 1.1:** What was the verdict of this case? Was there a punsihment and if so, what was it? List both and state whether you found it as plain text content or as an attribute.
# *Write your response here*
#ANSWER
# Verdict: guilty (found as plain text content)
# Punishment: brandingOnCheek (found as an attribute)
# ----
# ## Section 2: Using XPath and `ElementTree` to parse XML<a id='section 2'></a>
#
# Now that we know what the syntax and structure of an XML file, let's figure out how to parse through one! We are going to load the same file from the first section and use XPath (XML Path Language) to navigate through elements in this file.
#
# XPath is designed to locate content in an XML file and uses a ["tree" structure](https://www.researchgate.net/profile/Roger_Moussalli/publication/257631377/figure/fig8/AS:297441854279689@1447927072768/Example-XML-Document-and-XML-Path-Queries-a-Example-XML-Document-b-XML-Tree.png) to extract specific chunks. XPath expressions are made up of "location steps" which are separated by forward slashes.
#
# First, we need to import the file into an ElementTree instance. The ElementTree format will allow us to go through each element, sorting through tags so we can extract the data we want.
xml_file = 'data/old-bailey-example.xml'
tree = ET.ElementTree(file=xml_file)
tree
# We're going to start working from the root of the tree as XML files have a tree structure. Let's load the root of our tree.
root = tree.getroot()
print(root)
# Now that we have the root, we can now start working down the tree! With the root, we can find each child of the root by printing the tags. This will also help us for future reference, if we every want to go through other children in the XML file.
#get child tags from root
for child in root:
print(child.tag)
# Now that we have a list of children to work with let's select one using `.find`. Using `.find` requires an XPath expression which will navigate through the hierarchical structure of XML and help us keep track of the path we are taking through this file.
choose_p = root.find('p')
for child in choose_p:
print(child.tag)
# This isn't very helpful, since we're still left with a bunch of tags, and on top of that, we have a lot of repeating tags and names. Let's choose `placeName` as our next tag and see what happens. Notice that in our XPath expression, we are using forward slashes to navigate to the next child.
place_name = root.find('p/placeName')
for child in place_name:
print(child.tag)
# Nothing was printed, so it looks like we hit the end! Let's use `.text` to examine the data in this element, following the `.find` path we used to get here.
print(root.find('p/placeName').text)
#alternatively, print(place_name.text)
# Looking back at the file from earlier, we found where the defendant was from. Let's see another feature of XPath we can utilize if, for instance, we know all of the possible children in the XML file.
#
# With XPath, a single forward slash moves to a direct child. So in our expression earlier, by following `p/placeName`, we located any `placeName` element that is a child of `p`. Another way to navigate using XPath is a period followed by a double forward slash (`.//`), which searches anywhere down the tree from your current element. So, if we start at the root and want to find any element with the tag `placeName`, we can do the following:
print(root.find('.//placeName').text)
# **Question 2.1:** What happens if you don't have the period before the double slash? What happens if you change the starting element or use the whole XML file?
# *Write your answer here*
# **Question 2.2:** Find the defendant's name by traversing through the correct elements. You can check your answer in the printed XML file from [section 1](#section 1).
#
# **Tip:** `print` your final expression so that it looks pretty!
print(...)
#SOLUTION
print(tree.find('p/persName').text)
# ***WARNING*:** If you want to use `//` to search for elements with a specific tag, you need to add a period (`.//`), since the node you're currently at is most likely not the absolute root element (the whole tree). If you want to try it out yourself, `root.find('//placeName')` should give you an error but `root.find('.//placeName')` should give you what you want.
# ----
# Luckily, we can use `.getiterator()`, a really helpful method from ElementTree. It creates an object which will let us iterate through all elements in the file. Using this method is powerful, as we can print each element name utilizing `.tag` or see the data for each element with `.text` and `.attrib`.
#
# We can use `.getiterator()` on `tree`, our ElementTree instance. We call it in the form:
#
# <p style="text-align: center;"> `tree.getiterator(tag=None)` </p>
#
# If you don't specify what tag you want, it'll go through the first element it comes across in `tree` and then through its children and their children, etc. If you only want elements with a specific tag name, like `placeName`, you can pass it as the argument.
#
# Let's see how helpful `.getiterator()` can be! We'll call it on tree and print out the tag and attribute of each element.
iterator = tree.getiterator()
for element in iterator:
print(element.tag)
print(element.attrib)
print()
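# The same iterator can also be restricted to a single tag by passing the tag name as the argument, for example:
for element in tree.getiterator('placeName'):
    print(element.text)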
# **Question 2.3:** Using the iterator and the tag information above, find the names of the defendant and the plaintiff by getting the text out of each element. You can either use a conditional to specify a tag and use `.tag` for each element, or specify a tag in `.getiterator()`.
#
# ***Note:*** Because of the formatting in the XML file, you should only get the plaintiff's first name.
for ... in ...:
if ...:
# +
#SOLUTION
for element in iterator:
if element.tag == 'persName':
print(element.text)
#<NAME>
#Catherine (no last name)
# -
# What are their names? *Write their names in this cell*
# **Question 2.4:** How do you think we can use `.attrib` to find their names? You don't have to code anything, just explain how you could do it using `.attrib`.
# *Write your response here*
# **Question 2.5:** Use `.getiterator()` again, and a new method, `.itertext()`, to get the entire text of the proceeding. The `.itertext()` method returns all inner text from every child.
#
# **Hint:** Find the tag that will return you the entire text of the trial and a way to join all the text from the file together.
#
# <sub>***Note:*** The text in these XML files are a little wonky, so if the printed text doesn't look formatted well, it's ok.</sub>
for ... in ...:
...
#SOLUTION
for element in iterator:
if element.tag == "p":
print(''.join(list(element.itertext())))
# **Question 2.6:** Since the textual data is pretty messy in the XML files of these proceedings, where do you think the data you need might be held and how might you go about extracting this data?
# *Write your response here*
# ----
# ## Section 3: Web Scraping<a id='section 3'></a>
# We learned how to parse one XML file. The Old Bailey API has a total of **197751** cases. Fortunately, we are only going to use the ones from 1754-1756 and 1824-1826, but that still only narrows the number of cases down to 6506!
#
# Don't worry though, you're not going to manually download each case yourself. This is where web scraping comes into play. With web scraping, we can automate data collection to get all 6506 cases.
#
# Before we start scraping, we need to know how `requests` works. The `requests` library gets (`.get`!) you a response object from a web server and will automatically decode the content from the server, from which you can use `.text` to see the document! Requests through the Old Bailey API will return an XML file, which we can then write as a file and save.
#
# Let's take a look at all of the terms we can use to choose the specific cases we want. We use `.json()` here since the parameters are stored as a JSON object.
requests.get('http://www.oldbaileyonline.org/obapi/terms').json()
# If you wanted to explore the full list in your web browser, click [this link](https://www.oldbaileyonline.org/obapi/terms).
#
# Now that you've had a chance to look through some of the terms, let's see how to grab the specific XML files.
#
# Clicking the URL below returns a JSON object with the number of IDs and the frequency of each term, in which every trial contains the term "sheffield" and the offence category "deception" from June 14th, 1847 onward. Also, each trial ID that satisfies the terms is returned; the count parameter in this case returns 10 trial IDs, but if left unspecified, the API will return a maximum count of 1000 IDs.
#
# https://www.oldbaileyonline.org/obapi/ob?term0=trialtext_sheffield&term1=offcat_deception&term2=fromdate_18470614&breakdown=offsubcat&count=10&start=0
#
# Although the terms for time are listed as numbers, the format for the term is
# `fromdate_(starting date)` and `todate_(ending date)` without the parentheses.
# **Question 3.1:** Use requests.get(...) to get all trial IDs between the years 1754 and 1756 and return them as a JSON object.
trials = ...
trials
#SOLUTION
trials = requests.get('https://www.oldbaileyonline.org/obapi/ob?term0=fromdate_17540116&term1=todate_17561208&&start=0').json()
trials
# Now, let's pick some trials from `trials['hits']`, so we have a list of IDs we can work with.
#
# **Question 3.2:** Select the first 10 trials by slicing the list that we retrieved in the previous cell.
first_10 = ...
first_10
#SOLUTION
first_10 = trials['hits'][:10]
first_10
# Using the trial IDs from the previous cell, we are going to format the URL in a way so that we can get the XML file for each trial. In order to get the XML file using the Old Bailey API, we must follow this URL format:
#
# <p style="text-align: center;">`http://www.oldbaileyonline.org/obapi/text?div=(enter trial ID here without parenthesis)` </p>
#
# For example, http://www.oldbaileyonline.org/obapi/text?div=t16740429-1 gives you the link to the XML file of the first proceeding in the database.
#
#
# **Question 3.3:** Get the XML file of the first trial in first_10. A successful `.get` request returns `<Response [200]>`.
...
#SOLUTION
url = 'http://www.oldbaileyonline.org/obapi/text?div={}'.format(first_10[0])
response = requests.get(url)
response
# Run the next cell to see the XML format of the text!
print(response.text)
# We can save the XML file:
trial_number = 't17540116-11' #trial ID (make sure its a string)
with open('data/old-bailey/old-bailey-' + trial_number + '.xml', 'w') as file:
file.write(response.text)
# ### Challenge: Scraping all trials from 1754 - 1756
#
# Now that you know how to find the trial IDs for certain parameters as well as get an XML file using `requests.get(some_url)`, iterate through each ID in the list of trials (use `trials['hits']` for the list of IDs) we got from 1754-1756 earlier. You can choose how many trials you want to save.
# Skeleton -- fill in the missing pieces:
for trial in ...:
#format URL
#get text from URL
#save the file **store in data/old-bailey/file_name
#one second pause so servers aren't overloaded
time.sleep(1)
#SOLUTION
for trial in trials['hits'][:30]:
#format URL
url = 'http://www.oldbaileyonline.org/obapi/text?div={}'.format(trial)
print(url)
#get text from URL
text = requests.get(url).text
#save the file
with open('data/old-bailey/old-bailey-' + trial + '.xml', 'w') as file:
file.write(text)
#one second pause so servers aren't overloaded
time.sleep(1)
# You can check if you saved the XML files by executing the cell below!
# !ls data/old-bailey/
# This cell will show you the XML file.
# !cat data/old-bailey/old-bailey-t17540116-1.xml
# ----
# ## Section 4: Putting it all in a dataframe<a id='section 4'></a>
#
# Now that we have a bunch of XML files and know how to parse through them to extract data, let's put the data from the XML files into a dataframe. As you probably saw earlier from printing the text of the court proceeding, the text was incredibly messy. Feel free to process the text yourself, but specifically for this last section, we'll use the data from each attribute to put in our dataframe.
#
# **Question 4.1:** Complete the body of a function `table_of_cases`, which returns a dataframe with the "type" of data as a column label and the value from that attribute in that column. Make sure to account for cases that either won't have as many attributes as others (e.g. there are two defendants in one trial, but only one in the other). The body of the function is structured for you.
#
# **Tips:** Open up different trials to see all "type" keys in the attributes. Which tag contains the attributes with information you can use? And how will you handle "type" keys that show up repeatedly (e.g. surname, given, etc.) so that you don't overwrite the value already stored in an existing column?
def table_of_cases(xml_file_name):
#load file
file = ET.ElementTree(...)
#create an iterator object
iterate = ...
#create empty dataframe
table = ...
#create a possible index for repeating "types"
i = 1
for ... in ...:
if element.tag == ...:
#get attrib
t = ...
#get value of type
val = [...]
#labels of columns in table
label = list(...)
#change possible index to string
num = ...
#Implement conditional clauses to check if we already have
#the "type" as a column label. If there is, how
#can we make a unique label for the repeating column name?
if ... not in ...:
...
#conditional clause 2
elif ... not in ...:
...
#conditional clause 3
elif ... in ...:
...
return table
#SOLUTION
def table_of_cases(xml_file_name):
file = ET.ElementTree(file = xml_file_name)
iterate = file.getiterator()
i = 1
table = pd.DataFrame()
for element in iterate:
if element.tag == "interp":
t = element.attrib['type']
val = [element.attrib['value']]
labels = list(table.columns.values)
num = str(i)
if t not in labels:
table[t] = val
elif t+num not in labels:
table[t+num] = val
elif t+num in labels:
num = str(i+1)
table[t+num] = val
return table
# **Question 4.2:** Now, use `table_of_cases` to load the attribute data from each XML file that you scraped. Load a blank dataframe so you can append the table of information after each call. Use the argument `ignore_index = True` in `.append` so that the indices will be formatted correctly.
#
# **Note:** Use the same file name format used when scraping these files and load from the correct directory, or else you won't be able to load the data.
table = ...
for ...:
raw_data = ... #leave it as file name
data_to_table = ....
table = ...
table
#SOLUTION
table = pd.DataFrame()
for i in trials['hits'][:30]:
raw_data = 'data/old-bailey/old-bailey-'+ i +'.xml'
data = table_of_cases(raw_data)
table = table.append(data, ignore_index=True)
table
# That's it! Now you know how to parse through XML files using XPath and web scrape using the `requests` library!
# ## Bibliography
#
# - All files from Old Bailey API - https://www.oldbaileyonline.org/obapi/
# - ElementTree information adapted from <NAME>. (2013, April). Python 101 – Intro to XML Parsing with ElementTree.
# https://www.blog.pythonlibrary.org/2013/04/30/python-101-intro-to-xml-parsing-with-elementtree/
#
# - Web Scraping code adapted from MEDST-250 Notebook developed by <NAME>.
# https://github.com/ds-modules/MEDST-250/tree/master/04%20-%20XML_Day_1
#
# - Image source from https://www.researchgate.net/publication/257631377_Efficient_XML_Path_Filtering_Using_GPUs
#
# ----
# Notebook developed by: <NAME>
#
# Data Science Modules: http://data.berkeley.edu/education/modules
| labs/3-13/3-13_Parsing_XML_Data_solutions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# <center><h1>Computational Statistics:</h1></center>
# <center><h1>Simulating Outcomes</h1></center>
# + [markdown] slideshow={"slide_type": "slide"}
# Preview
# -------
#
# - Use Python's `random` module to simulate the probabilities of common events, specifically dice rolling
# + slideshow={"slide_type": "notes"}
reset -fs
# + slideshow={"slide_type": "slide"}
# Python's random module generates pseudo-random numbers
# from random import <tab>
# + slideshow={"slide_type": "fragment"}
# Import a function that chooses among discrete outcomes
from random import choice
# + slideshow={"slide_type": "slide"}
# Roll a die 🎲
choice([1, 2, 3, 4, 5, 6])
# + [markdown] slideshow={"slide_type": "fragment"}
# How could we roll a 4-sided die? or a 20-sided die?
# + slideshow={"slide_type": "slide"}
# Let's create a variable for number of sides
from typing import List
def faces(n_sides: int) -> List[int]:
"Enumerate the number of faces for a die"
return list(range(1, n_sides+1))
# + slideshow={"slide_type": "notes"}
# Traditional die: A cube
n_sides = 6
faces = list(range(1, n_sides+1))
choice(faces)
# + [markdown] slideshow={"slide_type": "slide"}
# That is a lot of typing. Let's make a function:
# + slideshow={"slide_type": "fragment"}
# + slideshow={"slide_type": "notes"}
from typing import List
def faces(n_sides: int) -> List[int]:
"Enumerate the faces of a die"
return list(range(1, n_sides+1))
# + slideshow={"slide_type": "fragment"}
# faces?
# + slideshow={"slide_type": "slide"}
# Roll a 20-sided die
choice(faces(n_sides=20))
# + [markdown] slideshow={"slide_type": "slide"}
# What if we want to roll more than one die?
# + slideshow={"slide_type": "fragment"}
from random import choices
# choices?
# + [markdown] slideshow={"slide_type": "fragment"}
# Let's check out the documentation for [random.choices](https://docs.python.org/3/library/random.html)
# + slideshow={"slide_type": "slide"}
# Let's roll our die a couple of times
choices(population=faces(n_sides=6),
k=10)
# + slideshow={"slide_type": "slide"}
# How could we roll a 20-sided die 1,000 times?
rolls = choices(population=faces(n_sides=20),
                k=1_000)
rolls
# + slideshow={"slide_type": "slide"}
# What if we want to cheat by having a weighted die?
# + slideshow={"slide_type": "notes"}
rolls = choices(population=faces(n_sides=4),
weights=[30, 30, 20, 10],
k=25)
print(rolls)
# + [markdown] slideshow={"slide_type": "fragment"}
# Those raw numbers are hard to interpret. Let's organize them …
# + [markdown] slideshow={"slide_type": "slide"}
# Data Scientists ❤️ counting
# ------
#
# Data Science is mostly about counting.
# + slideshow={"slide_type": "fragment"}
# Let's count the outcomes of rolls …
from collections import Counter
rolls_counts = Counter(rolls)
# + slideshow={"slide_type": "notes"}
from collections import Counter
Counter(rolls)
# + slideshow={"slide_type": "slide"}
# How could we order the results?
# + slideshow={"slide_type": "notes"}
rolls_counts = Counter(rolls)
# Sort by faces
sorted(rolls_counts.items(), key=lambda x: x[0])
# Sort by counts
sorted(rolls_counts.items(), key=lambda x: x[1])
# Store the sorted results as a dictionary
rolls_counts = dict(sorted(rolls_counts.items(), key=lambda x: x[0]))
# + [markdown] slideshow={"slide_type": "slide"}
# <center><h2>Fun Fact: My dog is named "Lambda" 🐶</h2></center>
# <center><img src="images/lambda_dog.jpg" width="70%"/></center>
# + [markdown] slideshow={"slide_type": "slide"}
# <center><h2>Visualization is an important technique for Data Science</h2></center>
# + [markdown] slideshow={"slide_type": "fragment"}
# Python's visualization ecosystem is kind of a mess. It is a bazaar, not a cathedral. There are many options; some people say too many options.
# + [markdown] slideshow={"slide_type": "fragment"}
# `matplotlib` is the default.
# + slideshow={"slide_type": "slide"}
import matplotlib.pyplot as plt
# %matplotlib inline
# + slideshow={"slide_type": "slide"}
# How should we plot these?
rolls_counts
# + slideshow={"slide_type": "fragment"}
# Let's plot the value
plt.bar(x=rolls_counts.keys(),
height=rolls_counts.values());
# + [markdown] slideshow={"slide_type": "slide"}
# <center><h2>Any questions?</h2></center>
# + slideshow={"slide_type": "slide"}
# Let's create a function for rolling a pair of dice 🎲 🎲
roll_2_dice = (lambda: choices(population=faces(n_sides=6), k=2))
# + slideshow={"slide_type": "fragment"}
roll_2_dice()
# + slideshow={"slide_type": "fragment"}
# How could we add up the two dice?
[sum(roll_2_dice()) for _ in range(10_000)]
# + slideshow={"slide_type": "notes"}
sum(roll_2_dice())
# + slideshow={"slide_type": "slide"}
# How would we simulate rolling a pair many times and track the outcomes?
# + slideshow={"slide_type": "notes"}
rolls = [sum(roll_2_dice()) for _ in range(500_000)]
# + slideshow={"slide_type": "slide"}
# Let's count those outcomes
rolls_counts = Counter(rolls)
# + slideshow={"slide_type": "slide"}
# Plot the outcome of simulating many dice rolling
labels, values = zip(*rolls_counts.items()) # Unpack dict
plt.bar(x=labels,
height=values);
# + [markdown] slideshow={"slide_type": "slide"}
# Let's explore the properties of those counts
# ----
# + slideshow={"slide_type": "slide"}
# What is the most frequency outcome?
# + slideshow={"slide_type": "fragment"}
rolls_counts.most_common(n=3)
# + slideshow={"slide_type": "fragment"}
# If you roll two dice, how likely is it that your sum is greater than 7
# + slideshow={"slide_type": "fragment"}
sum(v for k, v in rolls_counts.items() if k > 7) / sum(rolls_counts.values())
# + slideshow={"slide_type": "fragment"}
# The analytical solution: 15 ways of rolling greater than 7 out of a possible 36
15/36
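# + [markdown] slideshow={"slide_type": "notes"}
# As a quick sanity check on that count, we can enumerate all 36 ordered outcomes of two fair dice (reusing the `faces` helper from above) and count the sums above 7.
# + slideshow={"slide_type": "notes"}
sum(1 for a in faces(n_sides=6) for b in faces(n_sides=6) if a + b > 7)   # 15 ordered pairs sum to more than 7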
# + slideshow={"slide_type": "slide"}
# How could we sample from the empirical distribution?
# + slideshow={"slide_type": "fragment"}
# rolls_counts.elements?
# + slideshow={"slide_type": "fragment"}
from random import sample
sample(population=list(rolls_counts.elements()), k=10)
# + [markdown] slideshow={"slide_type": "slide"}
# Review
# ------
#
# - Using Python, we can:
# - Define discrete outcomes
# - Simulate the results of those outcomes
# - Organize and present the results
# + [markdown] slideshow={"slide_type": "fragment"}
# - We have used most of the common tools of Data Science:
# - Probability
# - Counting
# - Sorting
# - Visualization
#
# + [markdown] slideshow={"slide_type": "slide"}
#
# + [markdown] slideshow={"slide_type": "notes"}
# References
# ----
#
# - [Python's Standard Library Examples for random module](https://docs.python.org/3/library/random.html#examples-and-recipes)
# - <NAME>'s _Modern Python: Big Ideas and Little Code in Python_
# - [Video](https://www.amazon.com/Lesson-Implementing-k-means-Unsupervised-Learning/dp/B0782H9R1B)
# - [Code](https://github.com/rhettinger/modernpython)
#
#
| 2_simulating_outcomes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Sentence segmentation
# ## Imports
import os
import sys
import nltk
from nltk.tokenize import sent_tokenize
# ## Input and output files
infile = "../data/all.txt"
outfile = "../data/sents.txt"
# ## Sentence segmentation of the full corpus and creation of a new file
# **Important**: to process the full corpus, set `LIMIT = None`
LIMIT = 1000000
# + tags=[]
with open(outfile, 'w', encoding="utf-8") as output:
with open(infile, encoding="utf-8", errors="backslashreplace") as f:
content = f.readlines()
content = content[:LIMIT] if LIMIT is not None else content
n_lines = len(content)
for i, line in enumerate(content):
if i % 10000 == 0:
print(f'processing line {i}/{n_lines}')
sentences = sent_tokenize(line)
for sent in sentences:
output.write(sent + "\n")
print("Done")
# -
| module4/s3_sentence_tokenizer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Requirements
#
# - install node2vec code and add executable to your $PATH (code: https://snap.stanford.edu/node2vec)
# - compile GED code (graph embedding divergence),
# the base implementation of the framework in C (code included, also in https://github.com/ftheberge/Comparing_Graph_Embeddings)
# - new package to install: 'pip install graphrole'
# - adjust location of data and code in next cell
#
# +
## the data directory
datadir='../Datasets/'
## location of the GED code
GED='../GED/GED'
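## Optional sanity check for the requirements above (just an assumption about a typical local setup; skip if not needed)
import os, shutil
assert os.path.isdir(datadir), 'data directory not found -- adjust datadir above'
assert os.path.isfile(GED), 'GED binary not found -- compile it and adjust the GED path above'
assert shutil.which('node2vec') is not None, 'node2vec executable not found in $PATH'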
# + slideshow={"slide_type": "slide"}
import igraph as ig
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
from sklearn.linear_model import LinearRegression
from collections import Counter
import os
import umap
import pickle
import partition_igraph
import subprocess
import scipy.sparse.linalg as lg
from sklearn.cluster import KMeans, DBSCAN
from sklearn.model_selection import train_test_split
from sklearn.metrics import adjusted_mutual_info_score as AMI
from graphrole import RecursiveFeatureExtractor, RoleExtractor
from sklearn.metrics import accuracy_score, roc_auc_score, roc_curve, confusion_matrix
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import calinski_harabasz_score as CHS
## node and edge colors
cls_edges = 'gainsboro'
cls = ['silver','dimgray','black']
# -
# # A few useful functions
# +
def binary_operator(u, v, op='had'):
if op=='had':
return u * v
if op=='l1':
return np.abs(u - v)
if op=='l2':
return (u - v) ** 2
if op=='avg':
return (u + v) / 2.0
def readEmbedding(fn="_embed", N2K=None):
D = pd.read_csv(fn, sep=' ', skiprows=1, header=None)
D = D.dropna(axis=1)
if N2K!=None:
x = [N2K[i] for i in D[0]]
D[0] = x
D = D.sort_values(by=0)
Y = np.array(D.iloc[:,1:])
return Y
## Read embedding from file in node2vec format
## For visualization: use UMAP if dim > 2
def embed2layout(fn="_embed"):
D = pd.read_csv(fn, sep=' ', skiprows=1, header=None)
D = D.dropna(axis=1)
D = D.sort_values(by=0)
Y = np.array(D.iloc[:,1:])
if Y.shape[1]>2:
Y = umap.UMAP().fit_transform(Y)
ly = []
for v in range(Y.shape[0]):
ly.append((Y[v][0],Y[v][1]))
return ly
## Computing JS divergence with GED code given edgelist, communities and embedding
def JS(edge_file, comm_file, embed_file):
x = GED+' -g '+edge_file+' -c '+comm_file+' -e '+embed_file
s = subprocess.run(x, shell=True, stdout=subprocess.PIPE)
x = s.stdout.decode().split(' ')
div = float(x[1])
return(div)
## Hope with various Sim
def Hope(g, sim='katz', dim=2, verbose=False, beta=.01, alpha=.5):
if g.is_directed() == False:
dim = dim*2
A = np.array(g.get_adjacency().data)
beta = beta
alpha = alpha
dim = dim
n = g.vcount()
## Katz
if sim == 'katz':
M_g = np.eye(n) - beta * A
M_l = beta * A
## Adamic-Adar
if sim == 'aa':
M_g = np.eye(n)
D = np.diag(g.degree())
M_l = np.dot(np.dot(A,D),A)
## Common neighbors
if sim == 'cn':
M_g = np.eye(n)
M_l = np.dot(A,A)
## rooted page rank
if sim == 'rpr':
P = []
for i in range(n):
s = np.sum(A[i])
P.append([x/s for x in A[i]])
P = np.array(P)
M_g = np.eye(n)-alpha*P
M_l = (1-alpha)*np.eye(n)
S = np.dot(np.linalg.inv(M_g), M_l)
u, s, vt = lg.svds(S, k=dim // 2)
X1 = np.dot(u, np.diag(np.sqrt(s)))
X2 = np.dot(vt.T, np.diag(np.sqrt(s)))
X = np.concatenate((X1, X2), axis=1)
p_d_p_t = np.dot(u, np.dot(np.diag(s), vt))
eig_err = np.linalg.norm(p_d_p_t - S)
if verbose:
print('SVD error (low rank): %f' % eig_err)
if g.is_directed() == False:
d = dim//2
return X[:,:d]
else:
return X
## save to disk to compute divergence
def saveEmbedding(X, g, fn='_embed'):
with open(fn,'w') as f:
f.write(str(X.shape[0]) + " " + str(X.shape[1])+'\n')
for i in range(X.shape[0]):
f.write(g.vs[i]['name']+' ')
for j in range(X.shape[1]):
f.write(str(X[i][j])+' ')
f.write('\n')
## Laplacian eigenmaps
def LE(g, dim=2):
L_sym = np.array(g.laplacian(normalized=True))
w, v = lg.eigs(L_sym, k=dim + 1, which='SM')
idx = np.argsort(w) # sort eigenvalues
w = w[idx]
v = v[:, idx]
X = v[:, 1:]
return X.real
def bmatrix(a):
"""Returns a LaTeX bmatrix
:a: numpy array
:returns: LaTeX bmatrix as a string
"""
if len(a.shape) > 2:
raise ValueError('bmatrix can at most display two dimensions')
lines = str(a).replace('[', '').replace(']', '').splitlines()
rv = [r'\begin{bmatrix}']
rv += [' ' + ' & '.join(l.split()) + r'\\' for l in lines]
rv += [r'\end{bmatrix}']
return '\n'.join(rv)
# +
## To produce LaTeX from a DataFrame
#df = df.round(decimals=3)
#print(df.to_latex(index=False))
#print(df.to_latex(index=True))
# -
# # Figure 6.1
## To illustrate random walks
g = ig.Graph.Erdos_Renyi(n=4,p=0,directed=True)
g.vs['label'] = ['A','B','C','D']
g.vs['color'] = 'white'
g.add_edges([(0,1),(1,2),(1,3),(2,1),(3,2)])
ig.plot(g,'tiny.eps',bbox=(0,0,300,200),vertex_label_size=10)
# # Prepare or load datasets
#
# * g: small ABCD graph (100 nodes), mainly for visualization and quick examples
# * G: large ABCD graph (1000 nodes), for experiments
# * z: Zachary graph, for visualization
# ## 1. Small ABCD graph
# + active=""
# ## ABCD graph -- small enough for viz
# ## We used the following parameters:
# n = "100" # number of vertices in graph
# t1 = "3" # power-law exponent for degree distribution
# d_min = "5" # minimum degree
# d_max = "15" # maximum degree
# d_max_iter = "1000" # maximum number of iterations for sampling degrees
# t2 = "2" # power-law exponent for cluster size distribution
# c_min = "25" # minimum cluster size
# c_max = "50" # maximum cluster size
# c_max_iter = "1000" # maximum number of iterations for sampling cluster sizes
# xi = "0.2" # fraction of edges to fall in background graph
# isCL = "false" # if "false" use configuration model, if "true" use Chung-Lu
# degreefile = "degrees.dat" # name of file that contains vertex degrees
# communitysizesfile = "comm_sizes.dat" # name of file that contains community sizes
# communityfile = "abcd_100_comm.dat" # name of file that contains assignments of vertices to communities
# networkfile = "abcd_100.dat" # name of file that contains edges of the generated graph
#
# +
## read graph and communities
g = ig.Graph.Read_Ncol(datadir+'ABCD/abcd_100.dat',directed=False)
c = np.loadtxt(datadir+'ABCD/abcd_100_comms.dat',dtype='uint16',usecols=(1))
g.vs['comm'] = [c[int(x['name'])-1] for x in g.vs]
## print a few stats
print(g.vcount(),'vertices,',g.ecount(),'edges,','avg degree',np.mean(g.degree()),'communities',max(g.vs['comm']))
## ground truth
gt = {k:(v-1) for k,v in enumerate(g.vs['comm'])}
## map between int(name) to key
n2k = {int(v):k for k,v in enumerate(g.vs['name'])}
## define the colors and node sizes here
g.vs['size'] = 7
g.es['color'] = cls_edges
g.vs['color'] = [cls[i-1] for i in g.vs['comm']]
ig.plot(g, 'abcd.eps', bbox=(0,0,300,200))
# -
# ## 2. Larger ABCD graph
# + active=""
# ## ABCD graph -- larger for experiments
# ## We used the following parameters:
# n = "1000" # number of vertices in graph
# t1 = "3" # power-law exponent for degree distribution
# d_min = "10" # minimum degree
# d_max = "100" # maximum degree
# d_max_iter = "1000" # maximum number of iterations for sampling degrees
# t2 = "2" # power-law exponent for cluster size distribution
# c_min = "50" # minimum cluster size
# c_max = "150" # maximum cluster size
# c_max_iter = "1000" # maximum number of iterations for sampling cluster sizes
# xi = "0.6" # fraction of edges to fall in background graph
# isCL = "false" # if "false" use configuration model, if "true" use Chung-Lu
# degreefile = "degrees.dat" # name of file that contains vertex degrees
# communitysizesfile = "comm_sizes.dat" # name of file that contains community sizes
# communityfile = "abcd_1000_comm.dat" # name of file that contains assignments of vertices to communities
# networkfile = "abcd_1000.dat" # name of file that contains edges of the generated graph
#
# -
## read graph and communities
G = ig.Graph.Read_Ncol(datadir+'ABCD/abcd_1000.dat',directed=False)
c = np.loadtxt(datadir+'ABCD/abcd_1000_comms.dat',dtype='uint16',usecols=(1))
G.vs['comm'] = [c[int(x['name'])-1] for x in G.vs]
## print a few stats
print(G.vcount(),'vertices,',G.ecount(),'edges,','avg degree',np.mean(G.degree()),'communities',max(G.vs['comm']))
## ground truth
GT = {k:(v-1) for k,v in enumerate(G.vs['comm'])}
## map between int(name) to key
N2K = {int(v):k for k,v in enumerate(G.vs['name'])}
## define the colors and node sizes here
cls_edges = 'gainsboro'
G.vs['size'] = 5
G.es['color'] = cls_edges
G.vs['color'] = 'black'
ig.plot(G, bbox=(0,0,400,300)) ## communities are far from obvious in 2d layout!
# ## 3. Zachary (karate) graph
#
z = ig.Graph.Famous('zachary')
z.vs['size'] = 7
z.vs['name'] = [str(i) for i in range(z.vcount())]
z.es['color'] = cls_edges
z.vs['comm'] = [0,0,0,0,0,0,0,0,1,1,0,0,0,0,1,1,0,0,1,0,1,0,1,1,1,1,1,1,1,1,1,1,1,1]
z.vs['color'] = [cls[i*2] for i in z.vs['comm']]
ig.plot(z, 'zachary.eps', bbox=(0,0,300,200))
# # Show various 2d layouts using small Zachary graph
ly = z.layout('kk')
ig.plot(z, 'layout_kk.eps', layout=ly, bbox=(0,0,300,200))
ly = z.layout('fr')
ig.plot(z, 'layout_fr.eps', layout=ly, bbox=(0,0,300,200))
ly = z.layout('mds')
ig.plot(z, 'layout_mds.eps', layout=ly, bbox=(0,0,300,200))
ly = z.layout('circle')
ig.plot(z, 'layout_circle.eps', layout=ly, bbox=(0,0,300,200))
ly = z.layout('grid')
ig.plot(z, 'layout_grid.eps', layout=ly, bbox=(0,0,300,200))
ly = z.layout('sugiyama')
ig.plot(z, 'layout_tree.eps', layout=ly, bbox=(0,0,300,200))
# # Perform several embeddings -- Zachary graph
# * node2vec from source code
# * HOPE with different similarities
# * Laplacian Eigenmaps
# * visualize some good and bad results
#
# We use the framework to compute "graph embedding divergence" (GED.c)
# +
L = []
## Hope
for dim in [2,4,8,16]:
for sim in ['katz','aa','cn','rpr']:
X = Hope(z,sim=sim,dim=dim)
saveEmbedding(X,z)
jsd = JS(datadir+'Zachary/zachary.edgelist',datadir+'Zachary/zachary.ecg','_embed')
L.append([dim,'hope',sim,jsd])
## LE
for dim in [2,4,8,16]:
X = LE(z,dim=dim)
saveEmbedding(X,z)
jsd = JS(datadir+'Zachary/zachary.edgelist',datadir+'Zachary/zachary.ecg','_embed')
L.append([dim,'le',' ',jsd])
## node2vec is in my path
for dim in [2,4,8,16]:
for (p,q) in [(1,0),(0,1),(1,1)]:
x = 'node2vec -i:'+datadir+'Zachary/zachary.edgelist -o:_embed -d:'+str(dim)+' -p:'+str(p)+' -q:'+str(q)
r = os.system(x)
jsd = JS(datadir+'Zachary/zachary.edgelist',datadir+'Zachary/zachary.ecg','_embed')
L.append([dim,'n2v',str(p)+' '+str(q),jsd])
# -
D = pd.DataFrame(L,columns=['dim','algo','param','jsd'])
D = D.sort_values(by='jsd',axis=0)
D.head()
# +
## re-run and plot top result
dim, algo, param, div = D.iloc[0]
if algo=='n2v':
s = param.split()
p = float(s[0])
q = float(s[1])
x = 'node2vec -i:'+datadir+'Zachary/zachary.edgelist -o:_embed -d:'+str(dim)+' -p:'+str(p)+' -q:'+str(q)
r = os.system(x)
elif algo=='hope':
X = Hope(z,sim=param,dim=dim)
saveEmbedding(X,z)
else:
X = LE(z,dim=dim)
saveEmbedding(X,z)
l = embed2layout()
z.vs['ly'] = [l[int(v['name'])] for v in z.vs]
ig.plot(z, 'zac_high.eps', layout=z.vs['ly'], bbox=(0,0,300,200))
# -
D.tail()
# +
## plot bottom one
dim, algo, param, div = D.iloc[-1]
if algo=='n2v':
s = param.split()
p = float(s[0])
q = float(s[1])
x = 'node2vec -i:'+datadir+'Zachary/zachary.edgelist -o:_embed -d:'+str(dim)+' -p:'+str(p)+' -q:'+str(q)
r = os.system(x)
elif algo=='hope':
X = Hope(z,sim=param,dim=dim)
saveEmbedding(X,z)
else:
X = LE(z,dim=dim)
saveEmbedding(X,z)
l = embed2layout()
z.vs['ly'] = [l[int(v['name'])] for v in z.vs]
ig.plot(z, 'zac_low.eps', layout=z.vs['ly'], bbox=(0,0,300,200))
# -
# # Perform several embeddings -- small ABCD graph
# * node2vec from source code
# * HOPE different similarities
# * Laplacian Eigenmaps
# * visualize some good and bad results
# +
L = []
DIM = [2,4,8,16,24,32]
## Hope
for dim in DIM:
for sim in ['katz','aa','cn','rpr']:
X = Hope(g,sim=sim,dim=dim)
saveEmbedding(X,g)
jsd = JS(datadir+'ABCD/abcd_100.dat',datadir+'ABCD/abcd_100.ecg','_embed')
L.append([dim,'hope',sim,jsd])
## LE
for dim in DIM:
X = LE(g,dim=dim)
saveEmbedding(X,g)
jsd = JS(datadir+'ABCD/abcd_100.dat',datadir+'ABCD/abcd_100.ecg','_embed')
L.append([dim,'le',' ',jsd])
## node2vec is in my path
for dim in DIM:
for (p,q) in [(1,0),(1,.5),(0,1),(.5,1),(1,1)]:
x = 'node2vec -i:'+datadir+'ABCD/abcd_100.dat -o:_embed -d:'+str(dim)+' -p:'+str(p)+' -q:'+str(q)
r = os.system(x)
jsd = JS(datadir+'ABCD/abcd_100.dat',datadir+'ABCD/abcd_100.ecg','_embed')
L.append([dim,'n2v',str(p)+' '+str(q),jsd])
# -
D = pd.DataFrame(L,columns=['dim','algo','param','jsd'])
D = D.sort_values(by='jsd',axis=0)
D.head()
# +
## re-run top one and plot
dim, algo, param, div = D.iloc[0]
if algo=='n2v':
s = param.split()
p = float(s[0])
q = float(s[1])
x = 'node2vec -i:'+datadir+'ABCD/abcd_100.dat -o:_embed -d:'+str(dim)+' -p:'+str(p)+' -q:'+str(q)
r = os.system(x)
elif algo=='hope':
X = Hope(g,sim=param,dim=dim)
saveEmbedding(X,g)
else:
X = LE(g,dim=dim)
saveEmbedding(X,g)
l = embed2layout()
g.vs['ly'] = [l[int(v['name'])-1] for v in g.vs]
ig.plot(g, layout=g.vs['ly'], bbox=(0,0,300,200))
# -
D.tail()
# +
## bottom one(s)
dim, algo, param, div = D.iloc[-1]
if algo=='n2v':
s = param.split()
p = float(s[0])
q = float(s[1])
    x = 'node2vec -i:'+datadir+'ABCD/abcd_100.dat -o:_embed -d:'+str(dim)+' -p:'+str(p)+' -q:'+str(q)
r = os.system(x)
elif algo=='hope':
X = Hope(g,sim=param,dim=dim)
saveEmbedding(X,g)
else:
X = LE(g,dim=dim)
saveEmbedding(X,g)
l = embed2layout()
g.vs['ly'] = [l[int(v['name'])-1] for v in g.vs]
ig.plot(g, layout=g.vs['ly'], bbox=(0,0,300,200))
# -
# # Large ABCD graph -- find a good embedding with the framework
# * we only look at 16 configurations with HOPE for now
# * we'll consider more in the large classification experiment later
# +
# %%time
## this is slower - we try 16 combinations with HOPE
## store the best one in abcd_1000_embed_best
L = []
jsd_best = 1
DIM = [16,32,48,64]
## Hope
for dim in DIM:
for sim in ['katz','aa','cn','rpr']:
X = Hope(G, sim=sim, dim=dim)
saveEmbedding(X,G)
jsd = JS(datadir+'ABCD/abcd_1000.dat',datadir+'ABCD/abcd_1000.ecg','_embed')
L.append([dim,'hope',sim,jsd])
if jsd<jsd_best:
jsd_best=jsd
os.system('cp _embed abcd_1000_embed_best')
# -
## the best embedding was saved to abcd_1000_embed_best
D = pd.DataFrame(L,columns=['dim','algo','param','jsd'])
D = D.sort_values(by='jsd',axis=0)
D.head(1)
# # Classification on larger ABCD graph
#
# * we use a random forest model on embedded space
# * we split the data as train and test
# * the goal is to recover the communities for each node
#
## used saved embedding
X = readEmbedding(fn="abcd_1000_embed_best")
y = G.vs['comm']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.75, random_state=0)
# +
# Create the model with 100 trees
model = RandomForestClassifier(n_estimators=100,
bootstrap = True,
max_features = 'sqrt')
# Fit on training data
model.fit(X_train, y_train)
# Class predictions on test data
y_pred = model.predict(X_test)
# -
## Confusion matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
## percent correct
print('\naccuracy:',sum(cm.diagonal())/sum(sum(cm)))
# +
## For LaTeX file
#print(bmatrix(cm)+'\n')
# -
## compare with random classifier -- assuming we know only the number of classes
y_pred = [x+1 for x in np.random.choice(12,size=len(y_test),replace=True)]
cm = confusion_matrix(y_test, y_pred)
# print(cm)
## percent correct
print('\nAccuracy:',sum(cm.diagonal())/sum(sum(cm)))
## compare with random classifier -- using class proportions in training data
ctr = Counter(y_train)
x = [ctr[i+1] for i in range(12)]
s = np.sum(x)
p = [i/s for i in x]
y_pred = [x+1 for x in np.random.choice(12,size=len(y_test),replace=True,p=p)]
cm = confusion_matrix(y_test, y_pred)
# print(cm)
## percent correct
print('\nAccuracy:',sum(cm.diagonal())/sum(sum(cm)))
# # Clustering
# * we run graph clustering (Louvain, ECG)
# * we compare with clustering in the embedded vector space using the same embedding
# * we use k-means (various k) and DBSCAN
# * recall there are 12 ground-truth communities
X = readEmbedding(fn="abcd_1000_embed_best")
# +
L = []
K = [6,9,12,15,24] ## for k-means
REP = 30
for i in range(REP):
## kmeans
for k in K:
cl = KMeans(n_clusters=k).fit(X)
d = {k:v for k,v in enumerate(cl.labels_)}
scr = CHS(X,cl.labels_)
ami = AMI(list(GT.values()),list(d.values()))
L.append(['km'+str(k),scr,ami])
## ECG
ec = G.community_ecg().membership
scr = G.modularity(ec)
ami = AMI(list(GT.values()),ec)
L.append(['ecg',scr,ami])
## Louvain -- permute as this is not done in igraph
p = np.random.permutation(G.vcount()).tolist()
GG = G.permute_vertices(p)
l = GG.community_multilevel().membership
ll = [-1]*len(l)
for i in range(len(l)):
ll[i] = l[p[i]]
scr = G.modularity(ll)
ami = AMI(list(GT.values()),ll)
L.append(['ml',scr,ami])
# +
## results with best score for 3 algorithms
D = pd.DataFrame(L,columns=['algo','scr','ami'])
x = list(D[[x.startswith('km') for x in D['algo']]].sort_values(by='scr',ascending=False)['ami'])[0]
print('K-Means:',x)
x = list(D[D['algo']=='ml'].sort_values(by='scr',ascending=False)['ami'])[0]
print('Louvain:',x)
x = list(D[D['algo']=='ecg'].sort_values(by='scr',ascending=False)['ami'])[0]
print('ECG:',x)
# +
## boxplot AMI results
A = []
algo = ['km6','km9','km12','km15','km24','ml','ecg']
for a in algo:
A.append(D[D['algo']==a]['ami'])
B = pd.DataFrame(np.transpose(A),
columns=['k-means(6)','k-means(9)','k-means(12)','k-means(15)',
'k-means(24)','Louvain','ECG'])
B.boxplot(rot=30,figsize=(7,5))
plt.ylabel('Adjusted Mutual Information (AMI)');
#plt.savefig('embed_cluster.eps')
# +
## DBSCAN -- best results -- we tried different epsilon and dim
## test via calinski_harabasz_score or silhouette_score or davies_bouldin_score
## best result with min_samples = 8
top = 0
for dim in [4,8,16,24,32,40,48,64]:
for ms in [8]:
        U = umap.UMAP(n_components=dim).fit_transform(X)  ## reduce to 'dim' dimensions before DBSCAN
for e in np.arange(.40,.50,.0025):
cl = DBSCAN(eps=e, min_samples=ms ).fit(U)
labels = cl.labels_
s = CHS(U,labels) ## score
if s>top:
top=s
e_top=e
d_top=dim
m_top=ms
#print(d_top,e_top)
U = umap.UMAP(n_components=d_top).fit_transform(X)
cl = DBSCAN(eps=e_top, min_samples=ms).fit(U)
b = [x>-1 for x in cl.labels_]
l = list(GT.values())
v = [l[i] for i in range(len(l)) if b[i]]
print('AMI without outliers:',AMI(v,cl.labels_[b]))
# -
print('AMI with outliers:',AMI(list(GT.values()),cl.labels_))
# # Link prediction
#
# * we drop 10% of the edges and re-compute the embedding (same parameters as the best one)
# * we train a logistic regression model
# * we use validation set to pick best operator
# * we apply final model to test set
#
# Link/edge embeddings for the positive and negative edge samples are obtained
# by applying a binary operator on the embeddings of the source and target nodes
# of each sampled edge. We consider 4 different operators and select the best one via validation; a sketch of these operators is given below.
#
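# A minimal sketch of the four operators (Hadamard, L1, L2, average). The actual `binary_operator`
# helper used in the following cells is defined earlier in the notebook; the function below has a
# different name and is for illustration only.
# +
def binary_operator_sketch(u, v, op='had'):
    ## edge feature built element-wise from the embeddings u and v of the two endpoints
    if op == 'had':
        return u * v               ## Hadamard (element-wise) product
    if op == 'l1':
        return np.abs(u - v)       ## element-wise L1 distance
    if op == 'l2':
        return (u - v)**2          ## element-wise squared (L2) distance
    if op == 'avg':
        return (u + v) / 2         ## element-wise average
    raise ValueError('unknown operator: ' + op)
# -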
# +
## pick 10% edges at random, save new graph as Gp
test_size = int(np.round(.1*G.ecount()))
test_eid = np.random.choice(G.ecount(),size=test_size,replace=False)
Gp = G.copy()
Gp.delete_edges(test_eid)
## compute embedding on Gp
X = Hope(Gp,sim='rpr',dim=64)
# -
## validation round in Gp to select operator
for op in ['had','l1','l2','avg']:
## all edges (positive cases)
F = []
for e in Gp.es:
F.append(binary_operator(X[e.tuple[0]],X[e.tuple[1]],op=op))
size = len(F)
f = [1]*size
## features for node pairs without edges
ctr = 0
while ctr < size:
e = np.random.choice(Gp.vcount(),size=2,replace=False)
if Gp.get_eid(e[0],e[1],directed=False,error=False) == -1:
F.append(binary_operator(X[e[0]],X[e[1]],op=op))
ctr += 1
F = np.array(F)
f.extend([0]*size)
X_train, X_test, y_train, y_test = train_test_split(F, f, test_size=0.1, random_state=0)
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
y_pred = logreg.predict(X_test)
print('Accuracy of logistic regression classifier with',op,'on validation set: {:.2f}'.format(logreg.score(X_test, y_test)))
# +
## Train model with best operator ('l1' or 'l2' here, but this may vary)
op = 'l1'
F = []
for e in Gp.es:
F.append(binary_operator(X[e.tuple[0]],X[e.tuple[1]],op=op))
size = len(F)
f = [1]*size
## features for node pairs without edges
ctr = 0
while ctr < size:
e = np.random.choice(Gp.vcount(),size=2,replace=False)
if Gp.get_eid(e[0],e[1],directed=False,error=False) == -1:
F.append(binary_operator(X[e[0]],X[e[1]],op=op))
ctr += 1
F = np.array(F)
f.extend([0]*size)
logreg = LogisticRegression()
logreg.fit(F,f)
## prepare test set -- dropped edges from G and random pairs
## all edges (positive cases)
op = 'l1'
X_test = []
for i in test_eid:
e = G.es[i]
X_test.append(binary_operator(X[e.tuple[0]],X[e.tuple[1]],op=op))
size = len(X_test)
y_test = [1]*size
ctr = 0
while ctr < size:
e = np.random.choice(G.vcount(),size=2,replace=False)
if G.get_eid(e[0],e[1],directed=False,error=False) == -1:
X_test.append(binary_operator(X[e[0]],X[e[1]],op=op))
ctr += 1
X_test = np.array(X_test)
y_test.extend([0]*size)
y_pred = logreg.predict(X_test)
print('Accuracy of logistic regression classifier with',op,'on test set: {:.2f}'.format(logreg.score(X_test, y_test)))
confusion_matrix(y_test, y_pred)
# -
logit_roc_auc = roc_auc_score(y_test, logreg.predict(X_test))
fpr, tpr, thresholds = roc_curve(y_test, logreg.predict_proba(X_test)[:,1])
plt.figure()
plt.plot(fpr, tpr, color='gray',label='Logistic Regression (area = %0.2f)' % logit_roc_auc)
plt.plot([0, 1], [0, 1],'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('')
plt.legend(loc="lower right")
plt.savefig('embed_link.eps')
plt.show()
# ## Larger study -- use accuracy for picking embedding
#
# - we use a training-validation-test split
# - this can take a long time to run -- a pickle file with the results is included in the data directory
# - to re-run from scratch, uncomment the next cell
# + active=""
# # train/val/test, split the id's in proportion 25/25/50
# ids = [i for i in range(G.vcount())]
# id_trainval, id_test = train_test_split(ids, test_size=.5) ## split test
# id_train, id_val = train_test_split(id_trainval, test_size=.5) ## split train/val
#
# y_all = G.vs['comm']
# y_train = [y_all[i] for i in id_train]
# y_trainval = [y_all[i] for i in id_trainval]
# y_val = [y_all[i] for i in id_val]
# y_test = [y_all[i] for i in id_test]
#
# ## loop over several algos, parameters
# L = []
#
# ## LE
# for dim in [2,4,8,16,24,32,48]:
# X = LE(G, dim=dim)
# X_train = X[id_train,:]
# X_val = X[id_val,:]
# saveEmbedding(X,G)
# jsd = JS('abcd_1000.dat','_ecg','_embed')
#
# # Create the model with 100 trees
# model = RandomForestClassifier(n_estimators=100,
# bootstrap = True,
# max_features = 'sqrt')
# # Fit on training data
# model.fit(X_train, y_train)
#
# # Actual class predictions
# y_pred = model.predict(X_val)
# scr = accuracy_score(y_val,y_pred)
# L.append([dim,'le',0,jsd,scr])
#
# ## HOPE
# for dim in [2,4,8,16,24,32,48]:
# for sim in ['katz','aa','cn','rpr']:
# X = Hope(G,sim=sim,dim=dim)
# X_train = X[id_train,:]
# X_val = X[id_val,:]
# saveEmbedding(X,G)
# jsd = JS('abcd_1000.dat','_ecg','_embed')
#
# # Create the model with 100 trees
# model = RandomForestClassifier(n_estimators=100,
# bootstrap = True,
# max_features = 'sqrt')
# # Fit on training data
# model.fit(X_train, y_train)
#
# # Actual class predictions
# y_pred = model.predict(X_val)
# scr = accuracy_score(y_val,y_pred)
# L.append([dim,'hope',sim,jsd,scr])
#
# ## node2vec
# ## node2vec is in my path
# for dim in [2,4,8,16,24,32,48]:
# for (p,q) in [(1,0),(1,.5),(0,1),(.5,1),(1,1)]:
# x = 'node2vec -i:'+datadir+'ABCD/abcd_1000.dat -o:_embed -d:'+str(dim)+' -p:'+str(p)+' -q:'+str(q)
# r = os.system(x)
# X = readEmbedding(N2K=N2K)
# jsd = JS('abcd_1000.dat','_ecg','_embed')
# X_train = X[id_train,:]
# X_val = X[id_val,:]
# # Create the model with 100 trees
# model = RandomForestClassifier(n_estimators=100,
# bootstrap = True,
# max_features = 'sqrt')
#
# # Fit on training data
# model.fit(X_train, y_train)
#
# # Actual class predictions
# y_pred = model.predict(X_val)
# scr = accuracy_score(y_val,y_pred)
# L.append([dim,'n2v',str(p)+' '+str(q),jsd,scr])
#
# +
# save L and train/val/test ids
#pickle.dump( (id_train,id_val,id_trainval,id_test,L), open( "abcd_1000_embeddings.pkl", "wb" ) )
## load L and train/val/test ids
id_train,id_val,id_trainval,id_test,L = pickle.load(open(datadir+"ABCD/abcd_1000_embeddings.pkl","rb"))
y_all = G.vs['comm']
y_train = [y_all[i] for i in id_train]
y_trainval = [y_all[i] for i in id_trainval]
y_val = [y_all[i] for i in id_val]
y_test = [y_all[i] for i in id_test]
# -
R = pd.DataFrame(L,columns=['dim','algo','param','div','acc'])
from scipy.stats import kendalltau as tau
print(tau(R['div'],R['acc']))
## sort by Divergence on validation set
R = R.sort_values(by='div',axis=0,ascending=True)
size = R.shape[0]
R['rank_div'] = np.arange(1,size+1,1)
R.head()
## sort by Accuracy on validation set
R = R.sort_values(by='acc',axis=0,ascending=False)
size = R.shape[0]
R['rank_acc'] = np.arange(1,size+1,1)
R.head()
## quite a range of accuracy on the validation set!
R.tail()
# +
## retrain and score in order of the validation set's accuracy
## and apply to the test set.
top_acc = []
for i in range(size):
dim, algo, param, div, acc, rk_a, rk_d = R.iloc[i]
if algo=='n2v':
s = param.split()
p = float(s[0])
q = float(s[1])
x = 'node2vec -i:'+datadir+'ABCD/abcd_1000.dat -o:_embed -d:'+str(dim)+' -p:'+str(p)+' -q:'+str(q)
r = os.system(x)
X = readEmbedding(N2K=N2K)
if algo=='hope':
X = Hope(G,sim=param,dim=dim)
if algo=='le':
X = LE(G, dim=dim)
X_trainval = X[id_trainval,:]
X_test = X[id_test,:]
# Create the model with 100 trees
model = RandomForestClassifier(n_estimators=100,
bootstrap = True,
max_features = 'sqrt')
# Fit on training data
model.fit(X_trainval, y_trainval)
# Actual class predictions
y_pred = model.predict(X_test)
scr = accuracy_score(y_test,y_pred)
top_acc.append(scr)
R['test'] = top_acc
print('mean accuracy over all models on test set:',np.mean(R['test']))
# -
R = R.sort_values(by='test',axis=0,ascending=False)
R['rank_test'] = np.arange(1,size+1,1)
R.head()
# +
## top results on test set w.r.t. divergence on validation set
R = R.sort_values(by='div',axis=0,ascending=True)
top_div = R['test'][:10]
## top results on test set w.r.t. accuracy on validation set
R = R.sort_values(by='acc',axis=0,ascending=False)
top_acc = R['test'][:10]
# -
## put the two top-10 result sets in a DataFrame and boxplot test set accuracy
B = pd.DataFrame(np.transpose(np.array([top_acc,top_div])),
columns=['Top-10 validation set accuracy','Top-10 divergence score'])
B.boxplot(rot=0,figsize=(7,5))
plt.ylabel('Test set accuracy',fontsize=14);
#plt.savefig('embed_classify.eps')
plt.plot(R['rank_acc'],R['test'],'.',color='black')
plt.xlabel('Rank',fontsize=14)
plt.ylabel('Test set accuracy',fontsize=14);
#plt.savefig('rank_accuracy.eps');
plt.plot(R['rank_div'],R['test'],'.',color='black')
plt.xlabel('Rank',fontsize=14)
plt.ylabel('Test set accuracy',fontsize=14);
#plt.savefig('rank_divergence.eps');
## random classifier baseline -- using class proportions in y_trainval, accuracy on the test set
ctr = Counter(y_trainval)
x = [ctr[i+1] for i in range(12)]
s = np.sum(x)
p = [i/s for i in x]
y_pred = [x+1 for x in np.random.choice(12,size=len(y_test),replace=True,p=p)]
cm = confusion_matrix(y_test, y_pred)
print('\nRandom classifier accuracy on test set:',sum(cm.diagonal())/sum(sum(cm)))
# ## ReFeX: illustrate roles on the Zachary graph
#
# We use the 'graphrole' package
#
# extract features
feature_extractor = RecursiveFeatureExtractor(z, max_generations=5)
features = feature_extractor.extract_features()
print(f'\nFeatures extracted from {feature_extractor.generation_count} recursive generations:')
features.head(10)
# assign node roles in a dictionary
role_extractor = RoleExtractor(n_roles=3)
role_extractor.extract_role_factors(features)
node_roles = role_extractor.roles
role_extractor.role_percentage.head()
# +
#import seaborn as sns
# build color palette for plotting
unique_roles = sorted(set(node_roles.values()))
#color_map = sns.color_palette('Paired', n_colors=len(unique_roles))
# map roles to colors
role_colors = {role: cls[i] for i, role in enumerate(unique_roles)}
# store colors for all nodes in G
z.vs()['color'] = [role_colors[node_roles[node]] for node in range(z.vcount())]
## Plot with node labels
z.vs()['size'] = 10
#z.vs()['label'] = [v.index for v in z.vs()]
z.vs()['label_size'] = 0
ig.plot(z, 'refex.eps', bbox=(0,0,300,300))
| Notebooks/Chapter_6.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2 (Tensorflow)
# language: python
# name: tensorflow
# ---
# +
# A Convolutional Network implementation example using TensorFlow library
# This example is using the MNIST database of handwritten digits
# Author: <NAME>
# +
import tensorflow as tf
# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/home/eweill/data", one_hot=True)
# +
# Parameters
learning_rate = 0.001
training_iters = 200000
batch_size = 128
display_step = 10
# Network Parameters
n_input = 784 # MNIST data input (img shape: 28*28)
n_classes = 10 # MNIST total classes (0-9 digits)
dropout = 0.75 # Dropout, probability to keep units
# tf Graph input
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])
keep_prob = tf.placeholder(tf.float32) # Dropout (keep probability)
# +
# Create some wrappers for simplicity
def conv2d(x, W, b, strides=1):
# Conv2D wrapper, with bias and relu activation
x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
x = tf.nn.bias_add(x, b)
return tf.nn.relu(x)
def maxpool2d(x, k=2):
# MaxPool2D wrapper
return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1],
padding='SAME')
# Create model
def conv_net(x, weights, biases, dropout):
# Reshape input picture
x = tf.reshape(x, shape=[-1, 28, 28, 1])
# Convolution Layer
conv1 = conv2d(x, weights['wc1'], biases['bc1'])
# Max Pooling (down-sampling)
conv1 = maxpool2d(conv1, k=2)
# Convolution Layer
conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
# Max Pooling (down-sampling)
conv2 = maxpool2d(conv2, k=2)
# Fully connected layer
# Reshape conv2 output to fit fully connected layer input
fc1 = tf.reshape(conv2, [-1, weights['wd1'].get_shape().as_list()[0]])
fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
fc1 = tf.nn.relu(fc1)
# Apply Dropout
fc1 = tf.nn.dropout(fc1, dropout)
# Output, class prediction
out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
return out
# +
# Store layer weights and biases
weights = {
# 5x5 conv, 1 input, 32 outputs
'wc1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
# 5x5 conv, 32 inputs, 64 outputs
'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
# fully connected, 7*7*64 inputs, 1024 outputs
'wd1': tf.Variable(tf.random_normal([7*7*64, 1024])),
# 1024 inputs, 10 outputs (class prediction)
'out': tf.Variable(tf.random_normal([1024, n_classes]))
}
biases = {
'bc1': tf.Variable(tf.random_normal([32])),
'bc2': tf.Variable(tf.random_normal([64])),
'bd1': tf.Variable(tf.random_normal([1024])),
'out': tf.Variable(tf.random_normal([n_classes]))
}
# Construct model
pred = conv_net(x, weights, biases, keep_prob)
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
# Evaluate model
correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initializing the variables
init = tf.global_variables_initializer()
# -
# Launch the graph
with tf.Session() as sess:
sess.run(init)
step = 1
# Keep training until reach max iterations
while step * batch_size < training_iters:
batch_x, batch_y = mnist.train.next_batch(batch_size)
# Run optimization op (backprop)
sess.run(optimizer, feed_dict={x: batch_x, y: batch_y,
keep_prob: dropout})
if step % display_step == 0:
# Calculate batch loss and accuracy
loss, acc = sess.run([cost, accuracy], feed_dict={x: batch_x,
y: batch_y,
keep_prob: 1.})
print "Iter " + str(step*batch_size) + ", Minibatch Loss= " + \
"{:.6f}".format(loss) + ", Training Accuracy= " + \
"{:.5f}".format(acc)
step += 1
print "Optimization Finished!"
# Calculate accuracy for 256 mnist test images
print "Testing Accuracy:", \
sess.run(accuracy, feed_dict={x: mnist.test.images[:256],
y: mnist.test.labels[:256],
keep_prob: 1.})
| notebooks/3_NeuralNetworks_convolutional_network.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
df = pd.read_csv(r'G:\Meu Drive\My Laboratory\Jupyter Lab\databases\datasets\db.csv', encoding='utf-8', sep=';')  # raw string avoids accidental backslash escapes in the Windows path
df.loc[0:3, ['Ano', 'Zero_km']]
df[(df.Motor == 'Motor 4.0 Turbo') & (df.Zero_km == False ) & (df.Ano >= 2003)].head()
df[(df.Motor.isin(['Motor 4.0 Turbo', 'Motor V8'])) & (df.Valor >= 100000) | (df.Valor <= 80000) ].head()
df[['Motor', 'Ano']].value_counts().head(20)
df['Zero_km'].value_counts()
# +
#motor_grp = df.groupby('Motor')
#motor_grp.get_group('Motor Diesel')
# -
filt = df['Motor'] == 'Motor V8'
df.loc[filt]['Zero_km'].value_counts()
| Jupyter Lab/library's/pandas_library/datasets_/dataset_02.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] graffitiCellId="id_slc0gil" id="1C259CBE4A394E02B3454D9680B0536A" jupyter={} mdEditEnable=false slideshow={"slide_type": "slide"} tags=[]
# # Word Embedding Basics
#
# In the ["Implementation of Recurrent Neural Networks from Scratch"](https://zh.d2l.ai/chapter_recurrent-neural-networks/rnn-scratch.html) section we represented words with one-hot vectors. Although easy to construct, they are usually not a good choice: a major drawback is that one-hot word vectors cannot express the similarity between different words, such as the cosine similarity we often use.
#
# The Word2Vec embedding tool was proposed precisely to address this problem. It represents every word by a fixed-length vector, and by pre-training on a corpus these vectors come to capture similarity and analogy relations between words, thereby encoding some semantic information. Based on two different probabilistic assumptions, we can define two Word2Vec models:
# 1. [Skip-Gram model](https://zh.d2l.ai/chapter_natural-language-processing/word2vec.html#%E8%B7%B3%E5%AD%97%E6%A8%A1%E5%9E%8B): assumes the context words are generated from the center word, i.e. it models $P(w_o\mid w_c)$, where $w_c$ is the center word and $w_o$ is any context word;
#
# 
#
# 2. [CBOW (continuous bag-of-words) model](https://zh.d2l.ai/chapter_natural-language-processing/word2vec.html#%E8%BF%9E%E7%BB%AD%E8%AF%8D%E8%A2%8B%E6%A8%A1%E5%9E%8B): assumes the center word is generated from its context words, i.e. it models $P(w_c\mid \mathcal{W}_o)$, where $\mathcal{W}_o$ is the set of context words.
#
# 
#
# Here we focus on implementing the Skip-Gram model; the CBOW implementation is similar and is left as an exercise for the reader. The rest of the notebook proceeds in four parts:
#
# 1. The PTB dataset
# 2. The Skip-Gram model
# 3. Negative sampling approximation
# 4. Training the model
# + graffitiCellId="id_y7ocw2l" id="8627003642CB441780806CBC552BFAC1" jupyter={} slideshow={"slide_type": "slide"} tags=[]
import collections
import math
import random
import sys
import time
import os
import numpy as np
import torch
from torch import nn
import torch.utils.data as Data
# + [markdown] graffitiCellId="id_ube5b27" id="DD9999F086964C808616928EC7B736C0" jupyter={} mdEditEnable=false slideshow={"slide_type": "slide"} tags=[]
# ## The PTB Dataset
#
# Simply put, Word2Vec learns from a corpus how to map discrete words into vectors in a continuous space while preserving their semantic similarity. To train a Word2Vec model we therefore need a natural-language corpus from which the model can learn the relations between words; here we use the classic PTB corpus. [PTB (Penn Tree Bank)](https://catalog.ldc.upenn.edu/LDC99T42) is a commonly used small corpus sampled from Wall Street Journal articles, split into training, validation and test sets. We will train the word embedding model on the PTB training set.
#
# ### Loading the dataset
#
# A sample of the training file `ptb.train.txt`:
# ```
# aer banknote berlitz calloway centrust cluett fromstein <NAME> ...
# pierre N years old will join the board as a nonexecutive director nov. N
# mr. is chairman of n.v. the dutch publishing group
# ...
# ```
# + graffitiCellId="id_9374ybr" id="FF5B1C79764A4EA8AA61C3DE984CBA0D" jupyter={} slideshow={"slide_type": "slide"} tags=[]
with open('/Users/janti/Boyu/0-DeepLearning/d2lzh1981/data/ptb/ptb.train.txt', 'r') as f:
    lines = f.readlines() # sentences in this dataset are separated by newlines
raw_dataset = [st.split() for st in lines] # st is short for sentence; words within a sentence are separated by spaces
print('# sentences: %d' % len(raw_dataset))
# for the first 3 sentences of the dataset, print the number of tokens and the first 5 tokens
# the end-of-sentence marker is '<eos>', rare words are all replaced by '<unk>', and numbers by 'N'
for st in raw_dataset[:3]:
print('# tokens:', len(st), st[:5])
# + [markdown] graffitiCellId="id_whcovuv" id="4694FB2B910840BB8C65162A14881EB0" jupyter={} mdEditEnable=false slideshow={"slide_type": "slide"} tags=[]
# ### Building the word index
# + graffitiCellId="id_u6zhq97" id="70DD6E74F6854C289BA21B367D2397B4" jupyter={} slideshow={"slide_type": "slide"} tags=[]
counter = collections.Counter([tk for st in raw_dataset for tk in st]) # tk is short for token
counter = dict(filter(lambda x: x[1] >= 5, counter.items())) # keep only words occurring at least 5 times in the dataset
idx_to_token = [tk for tk, _ in counter.items()]
token_to_idx = {tk: idx for idx, tk in enumerate(idx_to_token)}
dataset = [[token_to_idx[tk] for tk in st if tk in token_to_idx]
           for st in raw_dataset] # words in raw_dataset are converted here to their corresponding indices
num_tokens = sum([len(st) for st in dataset])
'# tokens: %d' % num_tokens
# + [markdown] graffitiCellId="id_4zjy016" id="845942F28174462A87ADDBAE82957D15" jupyter={} mdEditEnable=false slideshow={"slide_type": "slide"} tags=[]
# ### Subsampling
#
# Text data usually contain high-frequency words such as "the", "a" and "in". In a given context window, co-occurrence of a word (e.g. "chip") with a rarer word (e.g. "microprocessor") is generally more informative for training the embedding model than co-occurrence with a very frequent word (e.g. "the"). We therefore subsample the words when training the embedding model: each indexed word $w_i$ in the dataset is dropped with probability
#
#
# $$
# P(w_i)=\max(1-\sqrt{\frac{t}{f(w_i)}},0)
# $$
#
#
# where $f(w_i)$ is the ratio of the number of occurrences of word $w_i$ to the total number of tokens in the dataset, and the constant $t$ is a hyperparameter (set to $10^{-4}$ in the experiment). Clearly, $w_i$ can only be dropped during subsampling when $f(w_i)>t$, and the more frequent a word is, the higher its drop probability. The code is as follows:
# + graffitiCellId="id_yg9kj6g" id="4B82B59FCC<KEY>" jupyter={} slideshow={"slide_type": "slide"} tags=[]
def discard(idx):
'''
@params:
        idx: index of the word
    @return: True/False, whether to discard the word
'''
return random.uniform(0, 1) < 1 - math.sqrt(
1e-4 / counter[idx_to_token[idx]] * num_tokens)
subsampled_dataset = [[tk for tk in st if not discard(tk)] for st in dataset]
print('# tokens: %d' % sum([len(st) for st in subsampled_dataset]))
def compare_counts(token):
return '# %s: before=%d, after=%d' % (token, sum(
[st.count(token_to_idx[token]) for st in dataset]), sum(
[st.count(token_to_idx[token]) for st in subsampled_dataset]))
print(compare_counts('the'))
print(compare_counts('join'))
# + [markdown] graffitiCellId="id_u88x2eb" id="18BE417013E049A3B5A3EC7B607EE554" jupyter={} mdEditEnable=false slideshow={"slide_type": "slide"} tags=[]
# ### Extracting center words and context words
# + graffitiCellId="id_a0ayzaz" id="56C4719FE9A64B468F9B0081DD25ABBC" jupyter={} slideshow={"slide_type": "slide"} tags=[]
def get_centers_and_contexts(dataset, max_window_size):
'''
@params:
        dataset: collection of sentences, each sentence a collection of words already converted to integer indices
        max_window_size: maximum size of the context window
    @return:
        centers: collection of center words
        contexts: collection of context windows, aligned with the centers; each window is a collection of context words
'''
centers, contexts = [], []
for st in dataset:
        if len(st) < 2: # a sentence needs at least 2 words to form a "center word - context word" pair
            continue
        centers += st
        for center_i in range(len(st)):
            window_size = random.randint(1, max_window_size) # randomly pick the context window size
            indices = list(range(max(0, center_i - window_size),
                                 min(len(st), center_i + 1 + window_size)))
            indices.remove(center_i) # exclude the center word from its context words
contexts.append([st[idx] for idx in indices])
return centers, contexts
all_centers, all_contexts = get_centers_and_contexts(subsampled_dataset, 5)
tiny_dataset = [list(range(7)), list(range(7, 10))]
print('dataset', tiny_dataset)
for center, context in zip(*get_centers_and_contexts(tiny_dataset, 2)):
print('center', center, 'has contexts', context)
# + [markdown] graffitiCellId="id_161ief2" id="3853063E40E14F7DA7DCA410A0A3C617" jupyter={} mdEditEnable=false slideshow={"slide_type": "slide"} tags=[]
# *Note: the batched data-loading implementation depends on the negative-sampling implementation, so it is presented in the negative-sampling section.*
# + [markdown] graffitiCellId="id_s7nai85" id="E0C8301D7DB340CDABD81A23520A340D" jupyter={} mdEditEnable=false slideshow={"slide_type": "slide"} tags=[]
# ## The Skip-Gram Model
#
# In the skip-gram model, every word is represented by two $d$-dimensional vectors used to compute conditional probabilities. Suppose a word has index $i$ in the dictionary; its vector is $\boldsymbol{v}_i\in\mathbb{R}^d$ when it acts as the center word and $\boldsymbol{u}_i\in\mathbb{R}^d$ when it acts as a context word. Let the center word $w_c$ have dictionary index $c$ and the context word $w_o$ have index $o$; we assume the conditional probability of generating a context word given the center word satisfies:
#
#
# $$
# P(w_o\mid w_c)=\frac{\exp(\boldsymbol{u}_o^\top \boldsymbol{v}_c)}{\sum_{i\in\mathcal{V}}\exp(\boldsymbol{u}_i^\top \boldsymbol{v}_c)}
# $$
#
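# *A minimal toy illustration (added for clarity; not part of the original notebook): the conditional probability above is simply a softmax over the inner products $\boldsymbol{u}_o^\top \boldsymbol{v}_c$. The sizes and variable names below are arbitrary.*
# +
V_toy, d_toy = 5, 3                                 # toy vocabulary size and embedding dimension
v_toy = torch.randn(V_toy, d_toy)                   # center-word vectors v_i
u_toy = torch.randn(V_toy, d_toy)                   # context-word vectors u_i
c_idx = 2                                           # index of the center word w_c
probs = torch.softmax(u_toy @ v_toy[c_idx], dim=0)  # P(w_o | w_c) for every candidate w_o
print(probs, probs.sum())                           # probabilities sum to 1
# -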
# + [markdown] graffitiCellId="id_j38gtz0" id="C7B5BB9D5FD54489BF6E3F42E31B835F" jupyter={} mdEditEnable=false slideshow={"slide_type": "slide"} tags=[]
# ### PyTorch's built-in Embedding layer
# + graffitiCellId="id_7he6kmh" id="68DF20EE03824200887D18AA3363E7F9" jupyter={} slideshow={"slide_type": "slide"} tags=[]
embed = nn.Embedding(num_embeddings=10, embedding_dim=4)
print(embed.weight)
x = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.long)
print(embed(x))
# + [markdown] graffitiCellId="id_hevvqgv" id="8421DAEC55314EBE80D2EC59EAB73099" jupyter={} mdEditEnable=false slideshow={"slide_type": "slide"} tags=[]
# ### PyTorch's built-in batch matrix multiplication
# + graffitiCellId="id_8i4omp4" id="12B88D54095F48F58397626D8F1935E3" jupyter={} slideshow={"slide_type": "slide"} tags=[]
X = torch.ones((2, 1, 4))
Y = torch.ones((2, 4, 6))
print(torch.bmm(X, Y).shape)
# + [markdown] graffitiCellId="id_ol1tdyu" id="80F8440D701D4F3CBD4C4FFD3608E697" jupyter={} mdEditEnable=false slideshow={"slide_type": "slide"} tags=[]
# ### Forward computation of the Skip-Gram model
# + graffitiCellId="id_x6q9jp9" id="AAA4F7E268764809836AAADD7FD2A8AE" jupyter={} slideshow={"slide_type": "slide"} tags=[]
def skip_gram(center, contexts_and_negatives, embed_v, embed_u):
'''
@params:
        center: indices of the center words, an integer tensor of shape (n, 1)
        contexts_and_negatives: indices of context and noise words, an integer tensor of shape (n, m)
        embed_v: embedding layer for center words
        embed_u: embedding layer for context words
    @return:
        pred: inner products of center words with context (or noise) words, later used to compute p(w_o|w_c)
'''
v = embed_v(center) # shape of (n, 1, d)
u = embed_u(contexts_and_negatives) # shape of (n, m, d)
pred = torch.bmm(v, u.permute(0, 2, 1)) # bmm((n, 1, d), (n, d, m)) => shape of (n, 1, m)
return pred
# + [markdown] graffitiCellId="id_rb8yuyq" id="8FE236CF785F474F9BBB4BB3D88AA4CC" jupyter={} mdEditEnable=false slideshow={"slide_type": "slide"} tags=[]
# ## Negative Sampling Approximation
#
# Since the softmax treats every word of the dictionary $\mathcal{V}$ as a possible context word, the computation becomes very expensive for dictionaries containing hundreds of thousands or millions of words. Taking the skip-gram model as an example, we introduce negative sampling to address this problem.
#
# Negative sampling approximates the conditional probability $P(w_o\mid w_c)=\frac{\exp(\boldsymbol{u}_o^\top \boldsymbol{v}_c)}{\sum_{i\in\mathcal{V}}\exp(\boldsymbol{u}_i^\top \boldsymbol{v}_c)}$ by
#
#
# $$
# P(w_o\mid w_c)=P(D=1\mid w_c,w_o)\prod_{k=1,w_k\sim P(w)}^K P(D=0\mid w_c,w_k)
# $$
#
#
# where $P(D=1\mid w_c,w_o)=\sigma(\boldsymbol{u}_o^\top\boldsymbol{v}_c)$ and $\sigma(\cdot)$ is the sigmoid function. For each center-context pair we randomly sample $K$ noise words from the dictionary ($K=5$ in the experiment). Following the Word2Vec paper, the noise-word sampling probability $P(w)$ is set proportional to the relative frequency of $w$ raised to the power of $0.75$.
#
# + graffitiCellId="id_st81puo" id="C6A1BDB699EB49AA8EEB49C295EACBF7" jupyter={} slideshow={"slide_type": "slide"} tags=[]
def get_negatives(all_contexts, sampling_weights, K):
'''
@params:
all_contexts: [[w_o1, w_o2, ...], [...], ... ]
        sampling_weights: noise-word sampling probability of each word
        K: number of noise words to sample per context word
@return:
all_negatives: [[w_n1, w_n2, ...], [...], ...]
'''
all_negatives, neg_candidates, i = [], [], 0
population = list(range(len(sampling_weights)))
for contexts in all_contexts:
negatives = []
while len(negatives) < len(contexts) * K:
if i == len(neg_candidates):
                # randomly draw k word indices as noise-word candidates according to each word's weight (sampling_weights);
                # for efficiency, k can be set somewhat large
i, neg_candidates = 0, random.choices(
population, sampling_weights, k=int(1e5))
neg, i = neg_candidates[i], i + 1
            # a noise word must not be one of the context words
if neg not in set(contexts):
negatives.append(neg)
all_negatives.append(negatives)
return all_negatives
sampling_weights = [counter[w]**0.75 for w in idx_to_token]
all_negatives = get_negatives(all_contexts, sampling_weights, 5)
# + [markdown] graffitiCellId="id_g7431va" id="43F68EDDF24849BE8791D07F0618F9DD" jupyter={} mdEditEnable=false slideshow={"slide_type": "slide"} tags=[]
# *Note: besides negative sampling, hierarchical softmax can also be used to reduce the computational cost; see [Section 10.2.2 of the original book](https://zh.d2l.ai/chapter_natural-language-processing/approx-training.html#%E5%B1%82%E5%BA%8Fsoftmax).*
# + [markdown] graffitiCellId="id_jhrav4e" id="F991A9C6E28C42848DD393E4F891D49D" jupyter={} mdEditEnable=false slideshow={"slide_type": "slide"} tags=[]
# ### Batched data loading
# + graffitiCellId="id_shkut5w" id="FF3BDA1024C94EB3A760EBA5FD583B4A" jupyter={} slideshow={"slide_type": "slide"} tags=[]
class MyDataset(torch.utils.data.Dataset):
def __init__(self, centers, contexts, negatives):
        assert len(centers) == len(contexts) == len(negatives) # negatives: noise words
self.centers = centers
self.contexts = contexts
self.negatives = negatives
def __getitem__(self, index):
return (self.centers[index], self.contexts[index], self.negatives[index])
def __len__(self):
return len(self.centers)
def batchify(data):
'''
    Used as the collate_fn argument of the DataLoader
    @params:
        data: a list of length batch_size; each element is the output of __getitem__
    @outputs:
        batch: the batched tuple (centers, contexts_negatives, masks, labels)
            centers: indices of center words, an integer tensor of shape (n, 1)
            contexts_negatives: indices of context and noise words, an integer tensor of shape (n, m)
            masks: 0/1 integer tensor of shape (n, m) masking out the padded entries
            labels: 0/1 integer tensor of shape (n, m) marking which entries are context words
'''
max_len = max(len(c) + len(n) for _, c, n in data)
centers, contexts_negatives, masks, labels = [], [], [], []
for center, context, negative in data:
cur_len = len(context) + len(negative)
centers += [center]
contexts_negatives += [context + negative + [0] * (max_len - cur_len)]
        masks += [[1] * cur_len + [0] * (max_len - cur_len)] # the mask keeps padded entries from affecting the loss
labels += [[1] * len(context) + [0] * (max_len - len(context))]
batch = (torch.tensor(centers).view(-1, 1), torch.tensor(contexts_negatives),
torch.tensor(masks), torch.tensor(labels))
return batch
batch_size = 512
num_workers = 0 if sys.platform.startswith('win32') else 4
dataset = MyDataset(all_centers, all_contexts, all_negatives)
data_iter = Data.DataLoader(dataset, batch_size, shuffle=True,
collate_fn=batchify,
num_workers=num_workers)
for batch in data_iter:
for name, data in zip(['centers', 'contexts_negatives', 'masks',
'labels'], batch):
print(name, 'shape:', data.shape)
break
# + [markdown] graffitiCellId="id_hf5q360" id="3AC214600B9F4A73B87CD4B57384BB3B" jupyter={} mdEditEnable=false slideshow={"slide_type": "slide"} tags=[]
# ## Training the Model
#
# ### Loss function
#
# With negative sampling applied, we can use the logarithmic (equivalent) form of maximum likelihood estimation to define the loss function as
#
#
# $$
# \sum_{t=1}^T\sum_{-m\le j\le m,j\ne 0} [-\log P(D=1\mid w^{(t)},w^{(t+j)})-\sum_{k=1,w_k\sim P(w)}^K\log P(D=0\mid w^{(t)},w_k)]
# $$
#
#
# Given this definition, we can compute the loss directly with a binary cross-entropy loss:
# + graffitiCellId="id_ap2woj6" id="6BDDED9801FF43B98CD51ED86B12D450" jupyter={} slideshow={"slide_type": "slide"} tags=[]
class SigmoidBinaryCrossEntropyLoss(nn.Module):
def __init__(self):
super(SigmoidBinaryCrossEntropyLoss, self).__init__()
def forward(self, inputs, targets, mask=None):
'''
@params:
            inputs: logits; after a sigmoid they give the predicted probability of D=1
            targets: 0/1 vector, where 1 denotes a context word and 0 a noise word
        @return:
            res: loss averaged over the labels of each sample
'''
inputs, targets, mask = inputs.float(), targets.float(), mask.float()
res = nn.functional.binary_cross_entropy_with_logits(inputs, targets, reduction="none", weight=mask)
res = res.sum(dim=1) / mask.float().sum(dim=1)
return res
loss = SigmoidBinaryCrossEntropyLoss()
pred = torch.tensor([[1.5, 0.3, -1, 2], [1.1, -0.6, 2.2, 0.4]])
label = torch.tensor([[1, 0, 0, 0], [1, 1, 0, 0]]) # in label, 1 and 0 denote context words and noise words respectively
mask = torch.tensor([[1, 1, 1, 1], [1, 1, 1, 0]]) # mask variable
print(loss(pred, label, mask))
def sigmd(x):
return - math.log(1 / (1 + math.exp(-x)))
print('%.4f' % ((sigmd(1.5) + sigmd(-0.3) + sigmd(1) + sigmd(-2)) / 4)) # note that 1 - sigmoid(x) = sigmoid(-x)
print('%.4f' % ((sigmd(1.1) + sigmd(-0.6) + sigmd(-2.2)) / 3))
# + [markdown] graffitiCellId="id_zz57d6p" id="3F1EB2F8E34D45088D8A8872FAE1C726" jupyter={} slideshow={"slide_type": "slide"} tags=[]
# ### Model initialization
# + graffitiCellId="id_k9ax2h6" id="EE565E07272B40C691196EED5695BC5D" jupyter={} slideshow={"slide_type": "slide"} tags=[]
embed_size = 100
net = nn.Sequential(nn.Embedding(num_embeddings=len(idx_to_token), embedding_dim=embed_size),
nn.Embedding(num_embeddings=len(idx_to_token), embedding_dim=embed_size))
# + [markdown] graffitiCellId="id_9ahavii" id="F178519AEE504029B3603E4D006BA839" jupyter={} mdEditEnable=false slideshow={"slide_type": "slide"} tags=[]
# ### Training the model
# + graffitiCellId="id_9fx6rj4" id="2DA9D996EE3E44B49DE0D2FAC370E1DD" jupyter={} slideshow={"slide_type": "slide"} tags=[]
def train(net, lr, num_epochs):
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print("train on", device)
net = net.to(device)
optimizer = torch.optim.Adam(net.parameters(), lr=lr)
for epoch in range(num_epochs):
start, l_sum, n = time.time(), 0.0, 0
for batch in data_iter:
center, context_negative, mask, label = [d.to(device) for d in batch]
pred = skip_gram(center, context_negative, net[0], net[1])
            l = loss(pred.view(label.shape), label, mask).mean() # average loss over one batch
optimizer.zero_grad()
l.backward()
optimizer.step()
l_sum += l.cpu().item()
n += 1
print('epoch %d, loss %.2f, time %.2fs'
% (epoch + 1, l_sum / n, time.time() - start))
train(net, 0.01, 5)
# + [markdown] graffitiCellId="id_yb3aapa" id="855E5C6515884CD596D3B1C1E0CC1978" jupyter={} mdEditEnable=false slideshow={"slide_type": "slide"} tags=[]
# ```
# train on cpu
# epoch 1, loss 0.61, time 221.30s
# epoch 2, loss 0.42, time 227.70s
# epoch 3, loss 0.38, time 240.50s
# epoch 4, loss 0.36, time 253.79s
# epoch 5, loss 0.34, time 238.51s
# ```
#
# *Note: since training on a local CPU takes too long, only the captured output is shown here (likewise below). You can run the training yourself on the website.*
# + [markdown] graffitiCellId="id_bw1dtd1" id="A5DAB2B7CC6A41668D2F5A0061C54728" jupyter={} mdEditEnable=false slideshow={"slide_type": "slide"} tags=[]
# ### Testing the model
# + graffitiCellId="id_wm2rrhl" id="838B4878856C457889DAA35A29948029" jupyter={} slideshow={"slide_type": "slide"} tags=[]
def get_similar_tokens(query_token, k, embed):
'''
@params:
        query_token: the query word
        k: number of similar words to return
        embed: the pretrained embedding layer
'''
W = embed.weight.data
x = W[token_to_idx[query_token]]
    # the 1e-9 is added for numerical stability
cos = torch.matmul(W, x) / (torch.sum(W * W, dim=1) * torch.sum(x * x) + 1e-9).sqrt()
_, topk = torch.topk(cos, k=k+1)
topk = topk.cpu().numpy()
    for i in topk[1:]: # skip the input word itself
print('cosine sim=%.3f: %s' % (cos[i], (idx_to_token[i])))
get_similar_tokens('chip', 3, net[0])
# + [markdown] graffitiCellId="id_htzr4u1" id="3E45FF89FD794158AAD153F605942091" jupyter={} slideshow={"slide_type": "slide"} tags=[]
# ```
# cosine sim=0.446: intel
# cosine sim=0.427: computer
# cosine sim=0.427: computers
# ```
# + [markdown] graffitiCellId="id_g9tjvnn" id="17C8660147A846FEB9389CEF51AEC536" jupyter={} slideshow={"slide_type": "slide"} tags=[]
# ## References
# * [Dive into Deep Learning](https://d2l.ai/chapter_natural-language-processing/word2vec.html). Ch14.1-14.4.
# * [动手学深度学习](http://zh.gluon.ai/chapter_natural-language-processing/word2vec.html). Ch10.1-10.3.
# * [Dive-into-DL-PyTorch on GitHub](https://github.com/ShusenTang/Dive-into-DL-PyTorch/blob/master/code/chapter10_natural-language-processing/10.3_word2vec-pytorch.ipynb)
| notes/task7/task7-word2vec.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Creating Graphs from Folder Structures with Python
# In this notebook you will see how to use the [folderstats](https://github.com/njanakiev/folderstats) Python module to explore and analyze folder structures visualy as a graph.
#
# # Installation
#
# For this notebook you'll want to install the [folderstats](https://github.com/njanakiev/folderstats) which you can do with:
#
# ```bash
# pip install folderstats
# ```
#
# You can use this module either as a command-line tool or as a module in your script. Here you will see how to do that as a module.
#
# # Analyze Folder Structure of NetworkX Repository
#
# You can download the [NetworkX](https://github.com/networkx/networkx) repository with:
#
# ```bash
# git clone https://github.com/networkx/networkx
# ```
#
# Now you can now create a Pandas dataframe from the folder structure with this line of code:
# +
import folderstats
df = folderstats.folderstats('networkx')
df.head()
# -
# Great! You can already do some analysis on the data, but we are focusing on the file tree as a graph. Let's start by creating a graph from the folder structure by using the `id` and `parent` columns. To do this you can use the [NetworkX](https://github.com/networkx/networkx) module:
# +
import networkx as nx
# Sort files and directories by id
df_sorted = df.sort_values(by='id')
G = nx.Graph()
for i, row in df_sorted.iterrows():
if row.parent:
G.add_edge(row.id, row.parent)
# Print some additional information
print(nx.info(G))
# -
# # Exploring the Graph for the Folder Structure
#
# We have the graph, but how can we visualize it? This can be done by computing node positions with Graphviz (via the `graphviz_layout` function) and drawing the nodes and edges with Matplotlib:
# %matplotlib inline
# +
import matplotlib.pyplot as plt
from networkx.drawing.nx_pydot import graphviz_layout
# Calculate positions of graph
pos_dot = graphviz_layout(G, prog='dot')
# -
# This returns a dictionary of positions that you can later use with the [networkx.draw_networkx_nodes()](https://networkx.github.io/documentation/latest/reference/generated/networkx.drawing.nx_pylab.draw_networkx_nodes.html) and [networkx.draw_networkx_edges()](https://networkx.github.io/documentation/latest/reference/generated/networkx.drawing.nx_pylab.draw_networkx_edges.html) functions to plot the graphs:
fig = plt.figure(figsize=(12, 8))
nodes = nx.draw_networkx_nodes(G, pos_dot, node_size=2, node_color='C0')
edges = nx.draw_networkx_edges(G, pos_dot, edge_color='C0', width=0.5)
plt.axis('off');
# Calculate positions of graph
pos_twopi = graphviz_layout(G, prog='twopi', root=1)
fig = plt.figure(figsize=(10, 10))
nodes = nx.draw_networkx_nodes(G, pos_twopi, node_size=2, node_color='C0')
edges = nx.draw_networkx_edges(G, pos_twopi, edge_color='C0', width=0.5)
plt.axis('off')
plt.axis('equal');
# # Conclusion
#
# You have learned in this notebook how to use the [folderstats](https://github.com/njanakiev/folderstats) module in concert with the [NetworkX](https://github.com/networkx/networkx) module to create beautiful graphs of folder structures/trees. You can read more on this topic in the article [Analyzing Your File System and Folder Structures with Python](https://janakiev.com/blog/python-filesystem-analysis/).
| notebooks/folderstats-graphs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
# +
#importing the input.csv as a table, with passwords as the rows, and the range and letter for each password's policy requirements
passwords = pd.read_csv("input.csv",sep=' ',names=('Range', 'Letter'),index_col=2)
passwords['Letter'] = passwords['Letter'].str.split(pat=':',expand=True) #removes the colon in the letter column
passwords['Range'] = passwords['Range'].str.split(pat='-') #removes the dash between numbers in the range column and makes each entry in the range column a list of min,max values
#finding how many passwords are valid
counter = 0
for p,row in passwords.iterrows(): #for each row in passwords, iterating each row...
letter = row['Letter'] #defines where the letter is in the table by pulling it from the Letter column
Min = int(row['Range'][0]) #defines the min number for the letter as being the first integer in the Range column lists
Max = int(row['Range'][1]) #defines the max number for the letter as being the second integer in the Range column lists
if p.count(letter) >= Min and p.count(letter) <= Max: #if the number of our desired letter in each password is more than or equal to the min range number and less than or equal to the max range number then...
counter+=1 #add 1 to the counter
print("Part 1: ",counter) #prints the amount of passwords that are valid
#finding how many passwords are valid using new criteria
counter2 = 0
for p,row in passwords.iterrows():
letter = row['Letter']
position1 = int(row['Range'][0])
position2 = int(row['Range'][1])
    if p[position1-1] == letter or p[position2-1] == letter: #if the letter at position 1 or at position 2 matches the required letter...
        if p[position1-1] != p[position2-1]: #and the letters at the two positions differ (so exactly one of them matches)...
counter2+=1 #add 1 to the counter
print("Part 2: ",counter2)
# -
| Day 2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Spatially-Resolved Mass-Metallicity Relation with MaNGA
#
# We're going to construct the spatially-resolved mass-metallicity relation (MZR) for a MaNGA galaxy, where mass refers to stellar mass and metallicity refers to gas-phase oxygen abundance.
#
# ### Roadmap
# 1. Compute metallicity.
# 2. Select spaxels that are
# 1. star-forming,
# 2. not flagged as "bad data," and
# 3. above a signal-to-noise ratio threshold.
# 3. Compute stellar mass surface density.
# 4. Plot metallicity as a function of stellar mass surface density.
# ## Key Terms
#
# - **DAP**: MaNGA Data Analysis Pipeline, which fits the MaNGA data cubes with stellar continuum and emission line models to produce model data cubes and maps of measured quantities.
# - **data cube**: 3D data structure with 1D spectra arranged in a 2D spatial grid.
# - **IFU**: integral field unit
# - **Marvin**: MaNGA data access, exploration, visualization, and analysis ecosystem (web site, API, and Python package).
# - **spaxels**: spatial pixels
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
import os
from os.path import join
path_notebooks = os.path.abspath('.')
path_data = join(os.path.split(path_notebooks)[0], 'data')
# ## Load Maps for Galaxy
#
# Download `manga-8077-6104-MAPS-SPX-GAU-MILESHC.fits.gz` from CourseWeb and move it into the `data/` directory of this repo.
#
# Then import the Marvin `Maps` class from `marvin.tools.maps` and initialize a `Maps` object using the full path to this file.
# +
from marvin.tools.maps import Maps
filename = join(path_data, 'manga-8077-6104-MAPS-SPX-GAU-MILESHC.fits.gz')
maps = Maps(filename=filename)
# -
# ## Measure Metallicity
#
# Metallicities have large systematic uncertainties depending on whether they are calibrated using the "direct method" or photoionization models. The direct method relies on observations of faint auroral lines that get exponentially weaker with increasing metallicity and so are difficult to use at high metallicities. Photoionization models suffer from weaknesses, including the need to rely on simplified geometries for HII regions. Calibrations based on the direct method are empirical, whereas those based on photoionization models are known as theoretical calibrations.
#
# <img src="images/kewley2008.png" style="width: 400px;"/>
#
# This figure shows the median MZRs for various metallicity calibrations from Kewley et al. (2008).
# - Lines (1)-(4) are theoretical calibrations.
# - Lines (5)-(8) are mostly empirical calibrations.
# - Lines (9)-(10) are purely empirical calibrations.
#
# ### Pettini & Pagel (2004) N2 metallicity calibration
#
# We are going to use the N2 metallicity calibration (their Equation 1) from Pettini & Pagel (2004), so go ahead and look it up now. One of the benefits of this calibration is that the required lines are very close in wavelength, so the reddening correction is negligible.
#
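# For reference, the calibration implemented below takes the form $12 + \log(\mathrm{O/H}) = 8.90 + 0.57\,N2$, where $N2 = \log_{10}([\mathrm{NII}]\,\lambda 6583/\mathrm{H}\alpha)$.
#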
# Get [NII] 6585 and Halpha flux maps from the Marvin `Maps` object. Note: MaNGA (and Marvin) use the wavelengths of lines in vacuum, whereas they are usually reported in air, hence the slight offsets.
nii = maps.emline_gflux_nii_6585
ha = maps.emline_gflux_ha_6564
# Calculate the necessary line ratio.
#
# Marvin can do map arithmetic, which propagates the inverse variances and masks, so you can just do `+`, `-`, `*`, `/`, and `**` operations as normal. (Note: taking the log of a Marvin `Map` will work for the values but the inverse variance propagation does not correctly propagate the inverse variance yet.)
n2 = nii / ha
logn2 = np.log10(n2)
# Finally, calculate the metallicity.
oh = 8.90 + 0.57 * logn2
# ## Select Spaxels
# ### Using the BPT Diagram to select star-forming spaxels
#
# Metallicity indicators only work for star-forming spaxels, so we need a way to select only these spaxels.
#
# The classic diagnostic diagram for classifying the emission from galaxies (or galactic sub-regions) as star-forming or non-star-forming (i.e., from active galactic nuclei (AGN) or evolved stars) was originally proposed in <NAME>, & Terlevich (1981) and is known as the **BPT diagram**.
#
# The BPT diagram uses ratios of emission lines to separate thermal and non-thermal emission.
#
# The classic BPT diagram uses [OIII]5007 / Hbeta vs. [NII]6583 / Halpha, but there are several versions of the BPT diagram that use different lines ratios.
#
# <img src="bpt_kauffmann2003.png" style="width: 400px;"/>
#
# Dotted line: Kewley et al. (2001) maximal starburst.
# Dashed line: Kauffmann et al. (2003) more useful selection criteria.
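#
# As a rough illustration (not part of Marvin, and assuming the commonly quoted parametrizations), the two demarcation curves can be sketched directly:
# +
# Kewley et al. (2001) maximal starburst and Kauffmann et al. (2003) star-forming cut
# in the [NII]/Halpha vs. [OIII]/Hbeta plane (parametrizations assumed; for illustration only)
log_n2 = np.linspace(-2.0, 0.0, 200)
kewley01 = 0.61 / (log_n2 - 0.47) + 1.19
kauffmann03 = 0.61 / (log_n2 - 0.05) + 1.30
plt.plot(log_n2, kewley01, 'r-', label='Kewley+01 (maximal starburst)')
plt.plot(log_n2, kauffmann03, 'k--', label='Kauffmann+03 (star-forming cut)')
plt.xlabel(r'log([NII] $\lambda$6583 / H$\alpha$)')
plt.ylabel(r'log([OIII] $\lambda$5007 / H$\beta$)')
plt.ylim(-1.5, 1.5)
plt.legend();
# -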
# ### BPT Diagrams with Marvin
#
# Let's use Marvin's `maps.get_bpt()` method to make BPT diagrams for this galaxy.
#
# **red line**: maximal starburst (Kewley et al. 2001) -- everything to the right is non-star-forming.
# **dashed black line**: conservative star-forming cut (Kauffmann et al. 2003) -- everything to the left is star-forming.
#
# Line ratios that fall in between these two lines are designated "Composite" with contributions from both star-forming and non-star-forming emission.
#
# **blue line**: separates non-star-forming spaxels into Seyferts and LINERs.
#
# Seyferts are a type of AGNs.
#
# LINERs (Low Ionization Nuclear Emission Regions) are not always nuclear (LIER is a better acronym) and not always AGN (often hot evolved stars).
#
# Sometimes these diagnostic diagrams disagree with each other, hence the "Ambiguous" designation.
#
# Try using `maps.get_bpt?` to read the documentation on how to use this function.
masks_bpt, __, __ = maps.get_bpt()
# The BPT masks are dictionaries of dictionaries of a boolean (True/False) arrays. We are interested in the spaxels that are classified as star-forming in all three BPT diagrams are designated as ``True``, which is designated with the `global` key. Print this mask.
masks_bpt['sf']['global']
# ### Masks
# MaNGA (and SDSS generally) use bitmasks to communicate data quality.
#
# Marvin has built-in methods to convert from the bitmasks integer values to individual bits or labels and to create new masks by specifying a set of labels.
#
# Show the mask schema with `n2.pixmask.schema`.
n2.pixmask.schema
# Select non-star-forming spaxels (from the BPT mask) and set their mask value to the DAP's DONOTUSE value with the `n2.pixmask.labels_to_value()` method. Note that we are selecting spaxels that we want from the BPT mask (i.e., `True` is a spaxel to keep), whereas we are using the pixmask to select spaxels that we want to exclude (i.e., `True` is a spaxel to ignore).
mask_non_sf = ~masks_bpt['sf']['global'] * n2.pixmask.labels_to_value('DONOTUSE')
# Select spaxels classified by the DAP as bad data according to the masks for spaxels with no IFU coverage, with unreliable measurements, or otherwise unfit for science. Use the `n2.pixmask.get_mask` method.
mask_bad_data = n2.pixmask.get_mask(['NOCOV', 'UNRELIABLE', 'DONOTUSE'])
# Select spaxels with signal-to-noise ratios (SNRs) > 3 on both [NII] 6585 and Halpha.
#
# `ha.ivar` = inverse variance = $\frac{1}{\sigma^2}$, where $\sigma$ is the error.
min_snr = 3.
mask_nii_low_snr = (np.abs(nii.value * np.sqrt(nii.ivar)) < min_snr)
mask_ha_low_snr = (np.abs(ha.value * np.sqrt(ha.ivar)) < min_snr)
# Do a [bitwise (binary) OR](https://www.tutorialspoint.com/python/bitwise_operators_example.htm) to create a master mask of spaxels to ignore.
mask = mask_non_sf | mask_bad_data | mask_nii_low_snr | mask_ha_low_snr
# ## Plot the Metallicity Map
#
# Plot the map of metallicity using the `plot()` method from your Marvin `Map` metallicity object. Also, mask undesirable spaxels and label the colorbar.
#
# Note: solar metallicity is about 8.7.
fig, ax = oh.plot(mask=mask, cblabel='12+log(O/H)')
# ## Compute Stellar Mass Surface Density
#
# 1. Read in spaxel stellar mass measurements from the Firefly spectral fitting catalog (Goddard et al. 2017).
# 2. Convert spaxel angular size to a physical scale in pc.
# 3. Divide stellar mass by area to get stellar surface mass density.
#
# ### Read in stellar masses
#
# Use [pandas](http://pandas.pydata.org/pandas-docs/stable/) to read in the csv file with stellar masses.
import pandas as pd
mstar = pd.read_csv(join(path_data, 'manga-{}_mstar.csv'.format(maps.plateifu)))
# Plot stellar mass map using `ax.imshow()`. MaNGA maps are oriented such that you want to specify `origin='lower'`. Also include a labelled colorbar.
fig, ax = plt.subplots()
p = ax.imshow(mstar, origin='lower')
ax.set_xlabel('spaxel')
ax.set_ylabel('spaxel')
cb = fig.colorbar(p)
cb.set_label('log(Mstar) [M$_\odot$]')
# ### Calculate physical size of a spaxel
#
# MaNGA's maps (and data cubes) have a spaxel size of 0.5 arcsec. Let's convert that into a physical scale for our galaxy.
spaxel_size = 0.5 # [arcsec]
# Get the redshift of the galaxy from the `maps.nsa` attribute.
redshift = maps.nsa['z']
# We'll use the **small angle approximation** to estimate the physical scale:
#
# $\theta = \mathrm{tan}^{-1}(\frac{d}{D}) \approx \frac{206,265 \, \mathrm{arcsec}}{1 \, \mathrm{radian}} \frac{d}{D}$,
#
# where
# $\theta$ is the angular size of the object (in our case spaxel) in arcsec,
# $d$ is the diameter of the object (spaxel), and
# $D$ is the angular diameter distance.
#
#
# The distance (via the **Hubble Law** --- which is fairly accurate for low redshift objects) is
#
# $D \approx \frac{cz}{H_0}$,
#
# where
# $c$ is the speed of light in km/s,
# $z$ is the redshift, and
# $H_0$ is the Hubble constant in km/s/Mpc.
#
# Calculate $D$.
c = 299792 # speed of light [km/s]
H0 = 70 # [km s^-1 Mpc^-1]
D = c * redshift / H0 # approx. distance to galaxy [Mpc]
# Rearrange the small angle formula to solve for the scale ($\frac{d}{\theta}$) in pc / arcsec.
scale = 1 / 206265 * D * 1e6 # 1 radian = 206265 arcsec [pc / arcsec]
# Now convert the spaxel size from arcsec to parsecs and calculate the area of a spaxel.
spaxel_area = (scale * spaxel_size)**2 # [pc^2]
# Finally, we simply divide the stellar mass by the area to get the stellar mass surface density $\Sigma_\star$ in units of $\frac{M_\odot}{pc^2}$.
sigma_star = np.log10(10**mstar / spaxel_area) # [Msun / pc^2]
# Let's plot metallicity as a function of $\Sigma_\star$! Remember to apply the mask. Also set the axis range to be `[0, 4, 8, 8.8]`.
fig, ax = plt.subplots(figsize=(6, 6))
ax.scatter(sigma_star.values[mask == 0], oh.value[mask == 0], alpha=0.15)
ax.set_xlabel(r'log($\Sigma_\star$) [M$_\odot$ pc$^{-2}$]')
ax.set_ylabel('12+log(O/H)')
ax.axis([0, 4, 8.0, 8.8])
# ### MaNGA Spatially-Resolved Mass-Metallicity Relation
#
# We have constructed the spatially-resolved MZR for one galaxy, but we are interested in understanding the evolution of galaxies in general, so we want to repeat this exercise for many galaxies. In [Barrera-Ballesteros et al. (2016)](https://arxiv.org/pdf/1609.01740.pdf), <NAME> (who gave a talk at Pitt in November 2017) did just this, and here is the analogous figure for 653 disk galaxies.
#
# <img src="images/barrera-ballesteros_local_mzr.png" style="width: 400px;"/>
# The best fit line from Barrera-Ballesteros et al. (2016) is given in the next cell.
# fitting formula
aa = 8.55
bb = 0.014
cc = 3.14
xx = np.linspace(1, 3, 1000)
yy = aa + bb * (xx - cc) * np.exp(-(xx - cc))
# Remake the spatially-resolved MZR plot for our galaxy showing the best fit line from Barrera-Ballesteros et al. (2016).
fig, ax = plt.subplots(figsize=(6, 6))
ax.scatter(sigma_star.values[mask == 0], oh.value[mask == 0], alpha=0.15)
ax.plot(xx, yy)
ax.set_xlabel(r'log($\Sigma_\star$) [M$_\odot$ pc$^{-2}$]')
ax.set_ylabel('12+log(O/H)')
ax.axis([0, 4, 8.0, 8.8])
# The spaxels in our galaxy are typically above the best fit relation. Part of the offset may be due to systematic differences in the metallicity calibrator used, but the overall trend of flat metallicity as stellar mass surface density decreases seems to be in tension with their best fit. It would be worth investigating this effect for more galaxies to understand if individual galaxies typically obey the best fit relation or whether they typically exhibit a flat trend in this space.
#
#
# Ultimately, Barrera-Ballesteros et al. (2016) concluded that the spatially-resolved MZR is a scaled version of the global MZR. For instance, here is the global MZR from Tremonti et al. (2004), which has a similar shape in spite of the offset in metallicity due to the adoption of a different metallicity calibration.
#
#
# <img src="images/tremonti2004.png" style="width: 400px;"/>
| notebooks/resolved_mass_metallicity_relation_SOLUTION.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7.9 32-bit ('Thonny')
# name: python3
# ---
# 80~100, 615~635
eva = []
name = []
# +
name_ele = "corner_mi3"
eva_ele = [
[ 1.0, -0.5, 0.5, 0.3, 0.3, 0.5, -0.5, 1.0],
[-0.5, -0.5, 0.0, 0.0, 0.0, 0.0, -0.5, -0.5],
[ 0.5, 0.0, 0.5, 0.5, 0.5, 0.5, 0.0, 0.5],
[ 0.3, 0.0, 0.5, 0.0, 0.0, 0.5, 0.0, 0.3],
[ 0.3, 0.0, 0.5, 0.0, 0.0, 0.5, 0.0, 0.3],
[ 0.5, 0.0, 0.5, 0.5, 0.5, 0.5, 0.0, 0.5],
[-0.5, -0.5, 0.0, 0.0, 0.0, 0.0, -0.5, -0.5],
[ 1.0, -0.5, 0.5, 0.3, 0.3, 0.5, -0.5, 1.0]
]
name.append(name_ele)
eva.append(eva_ele)
# +
name_ele = "corner_mi2+"
eva_ele = [
[ 1.0, -0.6, 0.6, 0.4, 0.4, 0.6, -0.6, 1.0],
[-0.6, -1.0, 0.0, 0.0, 0.0, 0.0, -1.0, -0.6],
[ 0.6, 0.0, 0.6, 0.6, 0.6, 0.6, 0.0, 0.6],
[ 0.4, 0.0, 0.6, 0.0, 0.0, 0.6, 0.0, 0.4],
[ 0.4, 0.0, 0.6, 0.0, 0.0, 0.6, 0.0, 0.4],
[ 0.6, 0.0, 0.6, 0.6, 0.6, 0.6, 0.0, 0.6],
[-0.6, -1.0, 0.0, 0.0, 0.0, 0.0, -1.0, -0.6],
[ 1.0, -0.6, 0.6, 0.4, 0.4, 0.6, -0.6, 1.0]
]
name.append(name_ele)
eva.append(eva_ele)
# +
name_ele = "corner_mi3+"
eva_ele = [
[ 1.0, -0.6, 0.6, 0.4, 0.4, 0.6, -0.6, 1.0],
[-0.6, -0.6, 0.0, 0.0, 0.0, 0.0, -0.6, -0.6],
[ 0.6, 0.0, 0.6, 0.6, 0.6, 0.6, 0.0, 0.6],
[ 0.4, 0.0, 0.6, 0.0, 0.0, 0.6, 0.0, 0.4],
[ 0.4, 0.0, 0.6, 0.0, 0.0, 0.6, 0.0, 0.4],
[ 0.6, 0.0, 0.6, 0.6, 0.6, 0.6, 0.0, 0.6],
[-0.6, -0.6, 0.0, 0.0, 0.0, 0.0, -0.6, -0.6],
[ 1.0, -0.6, 0.6, 0.4, 0.4, 0.6, -0.6, 1.0]
]
name.append(name_ele)
eva.append(eva_ele)
# +
name_ele = "corner_mi3++"
eva_ele = [
[ 1.0, -0.7, 0.7, 0.5, 0.5, 0.7, -0.7, 1.0],
[-0.7, -0.7, 0.0, 0.0, 0.0, 0.0, -0.7, -0.7],
[ 0.7, 0.0, 0.7, 0.7, 0.7, 0.7, 0.0, 0.7],
[ 0.5, 0.0, 0.7, 0.0, 0.0, 0.7, 0.0, 0.5],
[ 0.5, 0.0, 0.7, 0.0, 0.0, 0.7, 0.0, 0.5],
[ 0.7, 0.0, 0.7, 0.7, 0.7, 0.7, 0.0, 0.7],
[-0.7, -0.7, 0.0, 0.0, 0.0, 0.0, -0.7, -0.7],
[ 1.0, -0.7, 0.7, 0.5, 0.5, 0.7, -0.7, 1.0]
]
name.append(name_ele)
eva.append(eva_ele)
# +
name_ele = "corner_mi3+++"
eva_ele = [
[ 1.0, -0.8, 0.8, 0.6, 0.6, 0.8, -0.8, 1.0],
[-0.8, -0.8, 0.0, 0.0, 0.0, 0.0, -0.8, -0.8],
[ 0.8, 0.0, 0.8, 0.8, 0.8, 0.8, 0.0, 0.8],
[ 0.6, 0.0, 0.8, 0.0, 0.0, 0.8, 0.0, 0.6],
[ 0.6, 0.0, 0.8, 0.0, 0.0, 0.8, 0.0, 0.6],
[ 0.8, 0.0, 0.8, 0.8, 0.8, 0.8, 0.0, 0.8],
[-0.8, -0.8, 0.0, 0.0, 0.0, 0.0, -0.8, -0.8],
[ 1.0, -0.8, 0.8, 0.6, 0.6, 0.8, -0.8, 1.0]
]
name.append(name_ele)
eva.append(eva_ele)
# +
name_ele = "corner_mi3#"
eva_ele = [
[ 1.0, -0.9, 0.9, 0.7, 0.7, 0.9, -0.9, 1.0],
[-0.9, -0.9, 0.0, 0.0, 0.0, 0.0, -0.9, -0.9],
[ 0.9, 0.0, 0.9, 0.9, 0.9, 0.9, 0.0, 0.9],
[ 0.7, 0.0, 0.9, 0.0, 0.0, 0.9, 0.0, 0.7],
[ 0.7, 0.0, 0.9, 0.0, 0.0, 0.9, 0.0, 0.7],
[ 0.9, 0.0, 0.9, 0.9, 0.9, 0.9, 0.0, 0.9],
[-0.9, -0.9, 0.0, 0.0, 0.0, 0.0, -0.9, -0.9],
[ 1.0, -0.9, 0.9, 0.7, 0.7, 0.9, -0.9, 1.0]
]
name.append(name_ele)
eva.append(eva_ele)
# +
name_ele = "corner_mi3#+"
eva_ele = [
[ 1.0, -1.0, 1.0, 0.8, 0.8, 1.0, -1.0, 1.0],
[-1.0, -1.0, 0.0, 0.0, 0.0, 0.0, -1.0, -1.0],
[ 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0],
[ 0.8, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.8],
[ 0.8, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.8],
[ 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0],
[-1.0, -1.0, 0.0, 0.0, 0.0, 0.0, -1.0, -1.0],
[ 1.0, -1.0, 1.0, 0.8, 0.8, 1.0, -1.0, 1.0]
]
name.append(name_ele)
eva.append(eva_ele)
# +
name_ele = "corner_mi3-"
eva_ele = [
[ 1.0, -0.4, 0.4, 0.2, 0.2, 0.4, -0.4, 1.0],
[-0.4, -0.4, 0.0, 0.0, 0.0, 0.0, -0.4, -0.4],
[ 0.4, 0.0, 0.4, 0.4, 0.4, 0.4, 0.0, 0.4],
[ 0.2, 0.0, 0.4, 0.0, 0.0, 0.4, 0.0, 0.2],
[ 0.2, 0.0, 0.4, 0.0, 0.0, 0.4, 0.0, 0.2],
[ 0.4, 0.0, 0.4, 0.4, 0.4, 0.4, 0.0, 0.4],
[-0.4, -0.4, 0.0, 0.0, 0.0, 0.0, -0.4, -0.4],
[ 1.0, -0.4, 0.4, 0.2, 0.2, 0.4, -0.4, 1.0]
]
name.append(name_ele)
eva.append(eva_ele)
# +
name_ele = "corner_mi3+5"
eva_ele = [
[ 1.0, -0.7, 0.7, 0.3, 0.3, 0.7, -0.7, 1.0],
[-0.7, -0.7, 0.0, 0.0, 0.0, 0.0, -0.7, -0.7],
[ 0.7, 0.0, 0.7, 0.7, 0.7, 0.7, 0.0, 0.7],
[ 0.3, 0.0, 0.7, 0.0, 0.0, 0.7, 0.0, 0.3],
[ 0.3, 0.0, 0.7, 0.0, 0.0, 0.7, 0.0, 0.3],
[ 0.7, 0.0, 0.7, 0.7, 0.7, 0.7, 0.0, 0.7],
[-0.7, -0.7, 0.0, 0.0, 0.0, 0.0, -0.7, -0.7],
[ 1.0, -0.7, 0.7, 0.3, 0.3, 0.7, -0.7, 1.0]
]
name.append(name_ele)
eva.append(eva_ele)
# +
name_ele = "corner_mi3+3"
eva_ele = [
[ 1.0, -0.5, 0.5, 0.4, 0.4, 0.5, -0.5, 1.0],
[-0.5, -0.5, 0.0, 0.0, 0.0, 0.0, -0.5, -0.5],
[ 0.5, 0.0, 0.5, 0.5, 0.5, 0.5, 0.0, 0.5],
[ 0.4, 0.0, 0.5, 0.0, 0.0, 0.5, 0.0, 0.4],
[ 0.4, 0.0, 0.5, 0.0, 0.0, 0.5, 0.0, 0.4],
[ 0.5, 0.0, 0.5, 0.5, 0.5, 0.5, 0.0, 0.5],
[-0.5, -0.5, 0.0, 0.0, 0.0, 0.0, -0.5, -0.5],
[ 1.0, -0.5, 0.5, 0.4, 0.4, 0.5, -0.5, 1.0]
]
name.append(name_ele)
eva.append(eva_ele)
# +
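# Write every 8x8 evaluation matrix to eva_file.txt (one matrix row per line; the eight values in a row are written back-to-back with no separator) and write the matrix names, one per line, to name_file.txt.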
eva_file = open("eva_file.txt", "w")
for i in range(len(eva)):
for j in range(8):
for k in range(8):
eva_file.write(str(eva[i][j][k]))
eva_file.write("\n")
eva_file.close()
name_file = open("name_file.txt", "w")
for i in range(len(name)):
name_file.write(name[i])
name_file.write("\n")
name_file.close()
| sita/genetic_sur/07/creat_eva.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img width="10%" alt="Naas" src="https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160"/>
#
# # Generate Readme for Awesome Notebooks
# + [markdown] tags=[]
# ## Input
# -
# ### Import libraries
# + tags=[]
import os
import requests
import pandas as pd
import naas_drivers
import urllib.parse
import json
import copy
import markdown
import nbformat
from nbconvert import MarkdownExporter
from papermill.iorw import (
load_notebook_node,
write_ipynb,
)
try:
from git import Repo
except:
# !pip install GitPython
from git import Repo
# -
# ### Variables
# + tags=[]
# README variables
readme_template = "README_template.md"
readme = "README.md"
replace_var = "[[DYNAMIC_LIST]]"
# Json output
json_file = "templates.json"
# Others
current_file = '.'
notebook_ext = '.ipynb'
github_url = 'https://github.com/jupyter-naas/awesome-notebooks/tree/master'
github_download_url = 'https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/'
naas_download_url ='https://app.naas.ai/user-redirect/naas/downloader?url='
naas_logo ='https://naasai-public.s3.eu-west-3.amazonaws.com/open_in_naas.svg'
# -
# ### Get files list
# + tags=[]
repo = Repo('.')
branch = repo.active_branch
list_of_dir = f"https://api.github.com/repos/jupyter-naas/awesome-notebooks/git/trees/{branch.name}?recursive=1"
r_gh = requests.get(list_of_dir).json().get("tree")
notebooks = []
for file in r_gh:
if ".github" not in file.get("path") and ".gitignore" not in file.get("path") and "/" in file.get("path"):
if file.get("path").endswith(".ipynb"):
temp = file.get("path").split("/")
            if len(temp) == 1:  # path has no subfolder (kept for safety; '/' is required above)
data = {
"root": None,
"subdir": file.get("path")
}
notebooks.append(data)
else:
last_folder = ""
file_name = temp[-1]
temp.pop()
for folder in temp:
last_folder += "/" + folder
root = last_folder[1:]
data = {
"root": root,
"subdir": file_name
}
notebooks.append(data)
df_github = pd.DataFrame(notebooks)
df_github
# -
# ## Model
# ### Reformat functions
# + tags=[]
def reformat_file_name(file):
file_nice = file.replace('_', ' ')
file_nice = file_nice.replace(notebook_ext, '')
file_nice = file_nice.replace(folder_nice, '')
file_nice = file_nice.strip()
if (file_nice != ""):
file_nice = file_nice[0].capitalize() + file_nice[1:]
return file_nice
# -
# ### Get functions
# + tags=[]
def get_open_button(download_link):
return f"""<a href="{download_link}" target="_parent"><img src="{naas_logo}"/></a>"""
def get_title(folder_nice, file_nice, download_link):
return f"""# {folder_nice} - {file_nice}\n{get_open_button(download_link)}"""
def get_tags(text):
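    # Return the list of '#'-prefixed tags found in a whitespace-separated string,
    # e.g. "#naas #automation" -> ['#naas', '#automation'].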
result = []
tags = text.split(' ')
for tag in tags:
if len(tag) >= 2 and tag[0] == '#' and tag[1] != ' ' and tag[1] != '#':
result.append(tag)
return result
# -
# ### Set 'Naas Download' link on notebook
# + tags=[]
def set_notebook_title_and_get_tags(notebook_path, title_source, final_title, good_format):
header_found = False
tag_found = False
tags = None
count = 0
nb = load_notebook_node(notebook_path)
nb = copy.deepcopy(nb)
# Parse the entire notebook
for cell in nb.cells:
source = cell.source
if cell.cell_type == "code":
nb.cells[count].outputs = []
# Get the tags, because tags are always after the header cell
if header_found and not tag_found:
if cell.cell_type == "markdown":
tags = get_tags(cell.source)
tag_found = True
# Get the header cell
if not header_found and cell.cell_type == "markdown" and len(source) > 2 and source[0] == '#' and source[1] == ' ':
nb.cells[count].source = title_source
header_found = True
count += 1
    # Set the good title format in the notebook
write_ipynb(nb, notebook_path)
# Rename the notebook if the tool name is not the same
if good_format == 1:
os.rename(notebook_path, final_title)
# Return tags
return tags
# -
# ### Convert filepath in Markdown text
# + tags=[]
def get_file_md(folder_nice, folder_url, files, json_templates, title_sep="##", subtitle_sep="*"):
good_format = 0
final_title = ""
md = ""
folder_name = ""
tool_name = ""
tool_title = ""
if (len(files) > 0):
md += f"\n{title_sep} {folder_nice}\n"
for file in files:
if file.endswith(notebook_ext):
good_format = 0
file_url = urllib.parse.quote(file)
folder_name = folder_nice
temp = folder_name.split("_")
tool_name = temp[0]
file_nice = reformat_file_name(file)
# Check if the tool name is the same as the tool name in the notebook name
if tool_name != folder_name:
temp = file.split("_")
del temp[0]
tool_title = folder_name + "_"
for i in temp:
tool_title += i + "_"
final_title = folder_name + "/" + tool_title[:-1]
good_format = 1
path = urllib.parse.unquote(f"{folder_url}/{file_url}")
# Get the download URL
dl_url = f"{naas_download_url}{github_download_url}{folder_url}/{file_url}"
                # Put the title into the format "TOOLS - NAME_OF_NOTEBOOK Open_In_Naas"
title = get_title(folder_nice, file_nice, dl_url)
# Set the good title format and get the tags from the notebooks of the folder
tags = set_notebook_title_and_get_tags(path, title, final_title, good_format)
# Get the name of the Notebook and the redirect to github link
nb_redirect = f"[{file_nice}]({github_url}/{folder_url}/{file_url})"
# Get the open in naas format
open_button = get_open_button(dl_url)
                # For the current file, add the notebook name and the GitHub link to the returned markdown
md += f"{subtitle_sep} {nb_redirect}\n"
json_templates.append({
'tool': folder_nice,
'notebook': file_nice,
'tags': tags,
'update': '',
'action': open_button
})
return md
# -
# ### Generate markdown for each notebooks
# + tags=[]
generated_list = ""
json_templates = []
list_of_tools = []
index_max = len(notebooks)
index = 0
while index <= (index_max) - 1:
folder_nice = notebooks[index].get("root")
if folder_nice not in list_of_tools and folder_nice != "":
md_round = ""
files = []
list_of_tools.append(folder_nice)
folder_url = urllib.parse.quote(folder_nice)
print(folder_nice)
while True:
if notebooks[index].get("root") != folder_nice:
break
print(notebooks[index].get("subdir"))
files.append(notebooks[index].get("subdir"))
index += 1
if index == index_max:
break
if ("/" not in folder_nice):
md_round += get_file_md(folder_nice, folder_url, files, json_templates)
else:
folder_url = urllib.parse.quote(folder_nice)
subfolder_nice = folder_nice.split('/')[1].replace('_', ' ').replace(folder_nice, '').strip()
md_round += get_file_md(subfolder_nice, folder_url, files, json_templates, "\t###", "\t-")
generated_list += md_round
# -
# ## Output
# ### Preview the generated list
# + tags=[]
naas_drivers.markdown.display(generated_list)
# -
# ### Generate readme for github repository
# + tags=[]
# Open README template
template = open(readme_template).read()
# Replace var to get list of templates in markdown format
template = template.replace(replace_var, generated_list)
# Save README
f = open(readme, "w+")
f.write(template)
f.close()
# -
# ### Generate json for naas manager
# + tags=[]
with open(json_file, 'w') as f:
json.dump(json_templates, f)
# -
| generate_readme.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # This is 1st notebook
print("Hello, World!")
# %pip install matplotlib
# %matplotlib inline
# +
from matplotlib import pyplot as plt
# x-axis values
x = [5, 2, 9, 4, 7]
# Y-axis values
y = [10, 5, 8, 4, 2]
# Function to plot
plt.plot(x, y)
# function to show the plot
plt.show()
# -
| integration-tests/examples/1st template/example-1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
print(lemmatizer.lemmatize('cats'))
print(lemmatizer.lemmatize('cacti'))
print(lemmatizer.lemmatize('geese'))
print(lemmatizer.lemmatize('rocks'))
print(lemmatizer.lemmatize('papers'))
print(lemmatizer.lemmatize('python'))
print(lemmatizer.lemmatize('scissor'))
print(lemmatizer.lemmatize('better', pos='a')) ## a is for denoting adjective
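# A small additional illustration (not in the original notebook): the lemmatizer treats
# words as nouns by default, so supplying the part of speech changes the result.
print(lemmatizer.lemmatize('running')) # 'running' (treated as a noun by default)
print(lemmatizer.lemmatize('running', pos='v')) # 'run' (treated as a verb)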
| nltk/lemmatizing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Upload Test Notebook
import ipywidgets as widgets
# There are many widgets distributed with ipywidgets that are designed to display numeric values. Widgets exist for displaying integers and floats, both bounded and unbounded. The integer widgets share a similar naming scheme to their floating point counterparts. By replacing `Float` with `Int` in the widget name, you can find the Integer equivalent.
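# A brief illustration (added here; not part of the original test notebook) of the Float/Int naming parallel described above - both sliders accept the same arguments.
widgets.FloatSlider(value=7.5, min=0.0, max=10.0, step=0.1, description='Float:')
widgets.IntSlider(value=7, min=0, max=10, step=1, description='Int:')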
# ## Image
file = open("upload_image.png", "rb")
image = file.read()
widgets.Image(
value=image,
format='png',
width=70,
height=100,
)
| packages/galata/tests/upload/upload_notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Merging U2M and V2M data and calculating Wind Speed
import pandas as pd
import numpy as np
# Load Wind data
df1 = pd.read_csv("../merradownload/2M Eastward Wind/North Carolina_monthly.csv")
df2 = pd.read_csv("../merradownload/2M Northward Wind/North Carolina_monthly.csv")
df1['month'] = [str(x).split('-')[-1] for x in df1['date']]
df1['year'] = [str(x).split('-')[0] for x in df1['date']]
df2['month'] = [str(x).split('-')[-1] for x in df2['date']]
df2['year'] = [str(x).split('-')[0] for x in df2['date']]
df1 = df1.drop(columns=['date'])
df2 = df2.drop(columns=['date'])
# Merging both wind data
df = pd.merge(df1, df2, on= ['lat','lon','year','month'])
df = df[['year','month','lat','lon','U2M','V2M']]
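# Compute the wind speed (vector magnitude) and direction from the eastward (U2M) and
# northward (V2M) components; np.arctan2 returns the direction in radians, measured
# counterclockwise from east.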
df['wind_speed'] = np.sqrt(df['U2M']**2 + df['V2M']**2)
df['wind_direction'] = np.arctan2(df['V2M'],df['U2M'])
df['state'] = 'North Carolina'
df
df.to_csv('North_carolina.csv', index=False)
| Analysis/.ipynb_checkpoints/Merging_U2M_&_V2M -checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Code repurposed from Kaggle [https://www.kaggle.com/sudalairajkumar/simple-exploration-notebook-cryptocurrencies]
# ## import dependencies
# +
import pandas as pd
import pandas_datareader as web
import numpy as np
from pathlib import Path
import datetime as dt
import matplotlib.pyplot as plt
import seaborn as sns
color = sns.color_palette()
from yahoo_fin.stock_info import get_data
# %matplotlib inline
# -
# ## load and inspect data
# +
crypto_currency = 'EOS'
against_currency = 'USD'
start = dt.datetime(2014,1,1)
end = dt.datetime.now()
base_df = get_data(f'{crypto_currency}-{against_currency}', start, end, index_as_date = True, interval = '1d')
base_df
# -
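# Add the daily high-low price spread as a new column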
base_df['spread'] = base_df.high - base_df.low
base_df.info()
# ## Plot the closing value of EOS over time
# +
import matplotlib.dates as mdates
fig, ax = plt.subplots(figsize=(14,8))
sns.lineplot(y = base_df.close.values, x=base_df.index.values, alpha=0.8, color=color[3])
ax.xaxis.set_major_locator(mdates.AutoDateLocator())
fig.autofmt_xdate()
plt.xlabel('Date', fontsize=12)
plt.ylabel('Price in USD', fontsize=12)
plt.title("Closing price distribution of EOS", fontsize=15)
plt.show()
# +
fig, ax = plt.subplots(figsize=(14,8))
sns.lineplot(y = base_df.spread.values, x=base_df.index.values, alpha=0.8, color=color[3])
ax.xaxis.set_major_locator(mdates.AutoDateLocator())
fig.autofmt_xdate()
plt.xlabel('Date', fontsize=12)
plt.ylabel('Price in USD', fontsize=12)
plt.title("Daily price spread of EOS", fontsize=15)
# -
# ## Candlestick chart
# +
import matplotlib.ticker as mticker
import mplfinance as mpf
temp_base_df = base_df.copy(deep=False)
temp_base_df = temp_base_df.drop(['spread'], axis=1)
mpf.plot(temp_base_df.loc['2020-6-1':], type='candle', mav=(5,10), volume=True)
# -
# ## Future Price Prediction
from fbprophet import Prophet
price_predict_df = base_df['close'].copy(deep=False).reset_index()
price_predict_df.columns = ["ds", "y"]
#price_predict_df = price_predict_df[price_predict_df['ds']>'2020-6-1']
price_predict_df
# +
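# changepoint_prior_scale above Prophet's default of 0.05 makes the fitted trend more
# flexible (allowing larger changepoints); US holidays are added before fitting.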
m = Prophet(changepoint_prior_scale=.7)
m.add_country_holidays(country_name='US')
#m.add_country_holidays(country_name='CN')
m.fit(price_predict_df);
m.train_holiday_names
future = m.make_future_dataframe(periods=30)
forecast = m.predict(future)
fig = m.plot_components(forecast)
forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
# -
m.plot(forecast)
| notebooks/by_coin/eos_notebook_from_Yahoo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="2GRQnxMzISE_"
# # Project 03 - Natural Language Processing
#
# ## Dataset: The Multilingual Amazon Reviews Corpus
#
# **Remember to download the dataset from [here](https://github.com/kang205/SASRec). It is a .zip file containing three documents. More information about the dataset is available [here](https://registry.opendata.aws/amazon-reviews-ml/). It is important that you take this dataset's [license](https://docs.opendata.aws/amazon-reviews-ml/license.txt) into account.**
#
# ### Data Exploration and Natural Language Processing
#
# Spend a good amount of time on Exploratory Data Analysis. Keep in mind that until you have applied the Natural Language Processing tools covered, it will be difficult to complete this analysis. Choose questions that you think you can answer with this dataset. For example, which words are associated with positive ratings and which words with negative ratings?
#
# ### Machine Learning
#
# Implement a model that, given a product review, assigns the corresponding number of stars. **Something to think about**: is this a Classification or a Regression problem?
#
# 1. Apply all the data transformations you consider necessary. Justify them.
# 1. Evaluate your results appropriately. Justify the metric you chose.
# 1. Choose a benchmark model and compare your results against it.
# 1. Optimize the hyperparameters of your model.
# 1. Try to answer the question: what information is the model using to make its predictions?
#
# **Recommendation:** if working in Spanish with NLTK is not convenient for you, we recommend exploring the [spaCy](https://spacy.io/) library.
#
# ### Things to think about, research and, optionally, implement
# 1. Would it be worth turning the Machine Learning problem into a binary one? That is, assign only Positive and Negative labels to each review and build a model that, instead of predicting the stars, predicts that label. Think about the situations in which this could be useful. Would you expect the performance to be better or worse?
# 1. Is there anything else you would like to investigate or try?
#
# ### **Take time to research and read a lot!**
# + [markdown] id="x1GFwraSISFB"
# #####################################################################################################################
# -
# ### 1. Exploratory Data Analysis
# +
import itertools
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
import json
# -
dataset = pd.read_json('dataset_es_train.json', lines = True)
dataset.tail(20)
| Victor Mendez_DS_Proyecto_03_NLP.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
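# Note (added for clarity): this notebook uses the TensorFlow 1.x API (InteractiveSession,
# placeholder, global_variables_initializer), which is only available as tf.compat.v1 in
# TensorFlow 2.x.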
import tensorflow as tf
sess = tf.InteractiveSession()
my_tensor = tf.random_uniform((4, 4), 0, 1)
my_tensor
my_var = tf.Variable(initial_value=my_tensor)
print(my_var)
init = tf.global_variables_initializer()
sess.run(init)
sess.run(my_var)
ph = tf.placeholder(tf.float32, shape=(None, 5))
print(ph)
| TF Variables and Placeholders.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # Prob 1.9
# 1.9
#
# A time series with a periodic component can be constructed from
# $$
# x_t = U_1 sin(2 \pi \omega_0 t) + U_2 cos(2 \pi \omega_0 t)
# $$
# ,
# where $U_1$ and $U_2$ are independent random variables with zero means and $E(U_1^2) = E(U_2^2) = \sigma^2$.
# the constant $\omega_0$ determines the period or time it takes the process to make one complete cycle. Show that this series is weakly stationary with autocovariance function
#
# $\gamma(h) = \sigma^2 \cos(2 \pi \omega_0 h)$.
# # Prob 1.21
# 1.21
#
# (a) Simulate a series of n = 500 moving average observations as in Example 1.9 and compute the sample ACF, $\hat{\rho}(h)$, to lag 20. Compare the sample ACF you obtain to the actual ACF, $\rho(h)$. [Recall Example 1.20.]
#
# (b) Repeat part (a) using only n = 50.
# How does changing n affect the results?
# ## Example 1.9: Moving Averages and Filtering
#
# We might replace the white noise series $w_t$ by a moving average that smooths
# the series. For example, consider replacing $w_t$ in Example 1.8 by an average of its
# current value and its immediate neighbors in the past and future. That is, let
# $$
# v_t = \frac{1}{3} \left( w_{t−1} + w_t + w_{t+1} \right)
# $$ (1.1)
#
# which leads to the series shown in the lower panel of Fig. 1.8.
#
# 
#
# Inspecting the series shows a smoother version of the first series, reflecting the fact that the slower oscillations are more apparent and some of the faster oscillations are taken out. We begin to notice a similarity to the SOI in Fig. 1.5
#
# 
#
# A linear combination of values in a time series such as in eq (1.1) is referred to,
# generically, as a filtered series; hence the command filter in the following code
# for Fig. 1.8.
#
# ```R
# w = rnorm(500,0,1) # 500 N(0,1) variates
# v = filter(w, sides=2, filter=rep(1/3,3)) # moving average
# par(mfrow=c(2,1))
# plot.ts(w, main="white noise")
# plot.ts(v, ylim=c(-3,3), main="moving average")
# ```
#
# The speech series in Fig. 1.3 and the Recruitment series in Fig. 1.5, as well as
# some of the MRI series in Fig. 1.6, differ from the moving average series because one
# particular kind of oscillatory behavior seems to predominate, producing a sinusoidal
# type of behavior. A number of methods exist for generating series with this
# quasi-periodic behavior; we illustrate a popular one based on the autoregressive model
# considered in Chap. 3.
# ## Example 1.20 Stationarity of a Moving Average
#
# The three-point moving average process of Example 1.9 is stationary because the mean and autocovariance functions,
# $\mu_{vt} = 0$, and
# $$
# \gamma_v(h) =
# \begin{cases}
# \frac{3}{9} \sigma_w^2 & h=0, \\
# \frac{2}{9} \sigma_w^2 & h= \pm 1, \\
# \frac{1}{9} \sigma_w^2 & h= \pm 2, \\
# 0 & |h| > 2
# \end{cases}
# $$
#
# are independent of time t, satisfying the conditions of Definition 1.7.
# The autocorrelation function is given by
# $$
# \rho_v(h) =
# \begin{cases}
# 1 & h= 0, \\
# \frac{2}{3} & h= \pm 1, \\
# \frac{1}{3} & h= \pm 2, \\
# 0 & |h| > 2
# \end{cases}
# $$
#
# 
#
# Figure 1.12 shows a plot of the autocorrelations as a function of lag h. Note that
# the ACF is symmetric about lag zero.
# # Prob 2.3
# 2.3 In this problem, we explore the difference between a random walk and a trend
# stationary process.
#
# (a) Generate four series that are random walk with drift of length $n = 100$
# with $\delta = .01$ and $\sigma_w = 1$. Call the data $x_t$ for $t = 1, \dotso, 100$. Fit the regression $x_t = \beta t + w_t$ using least squares. Plot the data, the true mean function (i.e., $\mu_t = .01 t$) and the fitted line, $\hat{x_t} = \hat{\beta} t$, on the same graph. Hint: The following R
# code may be useful.
#
# ```R
# par(mfrow=c(2,2), mar=c(2.5,2.5,0,0)+.5, mgp=c(1.6,.6,0)) # set up
# for (i in 1:4){
# x = ts(cumsum(rnorm(100,.01,1))) # data
# regx = lm(x~0+time(x), na.action=NULL) #regression
# plot(x, ylab='Random Walk w Drift') # plots
# abline(a=0, b=.01, col=2, lty=2) # true mean (red - dashed)
# abline(regx, col=4) # fitted line (blue - solid)
# }
# ```
#
# (b) Generate four series of length n = 100 that are linear trend plus noise, say
# $y_t = .01 t + w_t$ , where t and $w_t$ are as in part (a). Fit the regression $y_t = \beta t + w_t$ using least squares. Plot the data, the true mean function (i.e., $\mu_t = .01 t$) and the fitted line, $\hat{y_t} = \hat{\beta} t$, on the same graph.
#
# (c) Comment (what did you learn from this assignment).
# # Prob 2.11
# 2.11 Use two different smoothing techniques described in Sect. 2.3 to estimate the
# trend in the global temperature series globtemp . Comment.
#
# ### Methods from Section 2.3
# * Moving Average Smoother
# * Kernel Smoothing
# * Lowess
# * Smoothing Splines
# * Smoothing One Series as a Function of Another
# # Prob 3.6
# 3.6 For the AR(2) model given by $x_t = −.9 x_{t−2} + w_t$ , find the roots of the
# autoregressive polynomial, and then sketch the ACF, $\rho(h)$.
# # Prob 3.9
# 3.9 Generate n = 100 observations from each of the three models discussed in Problem 3.8.
#
# Compute the sample ACF for each model and compare it to the theoretical values.
# Compute the sample PACF for each of the generated series and compare the sample ACFs and PACFs with the general results given in
# Table 3.1.
#
# ## For Reference, Problem 3.8
# 3.8 Verify the calculations for the autocorrelation function of an ARMA(1, 1) process given in Example 3.14.
#
# Compare the form with that of the ACF for the ARMA(1, 0) and the ARMA(0, 1) series.
#
# Plot the ACFs of the three series on the same graph for $\phi = .6, \theta = .9$, and comment on the diagnostic capabilities of the ACF in this case.
#
# ### Table 3.1 Behavior of the ACF and PACF for ARMA models
#
# | | AR(p) | MA(q) | ARMA(p,q) |
# |------|----------------------|---------------------|-----------|
# | ACF | Tails off | Cutsoff after lag q | Tails off |
# | PACF | Cuts off after lag p | Tails off | Tails off |
#
# # Prob 3.21
# Generate 10 realizations of length $n = 200$ each of an ARMA(1,1) process with $\phi = .9, \theta = .5$, and $ \sigma^2 = 1$.
#
# Find the MLEs of the three parameters in
# each case and compare the estimators to the true values.
# # Prob 3.10
# Let $x_t$ represent the cardiovascular mortality series (cmort) discussed in
# Chapter 2, Example 2.2.
#
# (a) Fit an AR(2) to $x_t$ using linear regression as in Example 3.17.
#
# (b) Assuming the fitted model in (a) is the true model, find the forecasts over
# a four-week horizon, $x_{n+m}^n$ , for $m = 1, 2, 3, 4$, and the corresponding 95%
# prediction intervals.
# ## For Reference, Example 2.2
# #### Pollution, Temperature and Mortality
# The data shown in Fig. 2.2 are extracted series from a study by Shumway et al.
# of the possible effects of temperature and pollution on weekly mortality in Los
# Angeles County.
#
# Note the strong seasonal components in all of the series, corresponding to winter-summer variations and the downward trend in the cardiovascular mortality over the 10-year period.
#
# A scatterplot matrix, shown in Fig. 2.3, indicates a possible linear relation
# between mortality and the pollutant particulates and a possible relation to temperature.
#
# Note the curvilinear shape of the temperature mortality curve, indicating that
# higher temperatures as well as lower temperatures are associated with increases in cardiovascular mortality. Based on the scatterplot matrix, we entertain, tentatively, four models where $M_t$ denotes cardiovascular mortality, $T_t$ denotes temperature and $P_t$ denotes the
# particulate levels.
#
# They are
#
# $M_t = \beta_0 + \beta_1 t + w_t$
#
# $M_t = \beta_0 + \beta_1 t + \beta_2 (T_t - T_\cdot) + w_t$
#
# $M_t = \beta_0 + \beta_1 t + \beta_2 (T_t - T_\cdot) + \beta_3 (T_t - T_\cdot)^2 + w_t$
#
# $M_t = \beta_0 + \beta_1 t + \beta_2 (T_t - T_\cdot) + \beta_3 (T_t - T_\cdot)^2 + \beta_4 P_t + w_t$
#
# where we adjust temperature for its mean, $T_\cdot = 74.26$, to avoid collinearity problems.
#
# It is clear that (2.18) is a trend only model, (2.19) is linear temperature, (2.20)
# is curvilinear temperature and (2.21) is curvilinear temperature and pollution.
#
# We summarize some of the statistics given for this particular case in Table 2.2.
#
# We note that each model does substantially better than the one before it and that
# the model including temperature, temperature squared, and particulates does the
# best, accounting for some 60% of the variability and with the best value for AIC
# and BIC (because of the large sample size, AIC and AICc are nearly the same).
#
# Note that one can compare any two models using the residual sums of squares
# and (2.11).
#
# Hence, a model with only trend could be compared to the full model,
# $H_0$ : $\beta_2 = \beta_3 = \beta_4 = 0$, using $q = 4, r = 1, n = 508$, and $F_{3,503} = \frac{(40020 − 20508)/3}{20508/503} = 160$ which exceeds $F_{3,503}(.001) = 5.51$. We obtain the best prediction model.
#
# $\hat{M_t} = 2831.5 − 1.396_{(.10)} t − .472 _{(.032)} (T_t − 74.26) + .023_{(.003)} (T_t − 74.26)^2 + .255_{(.019)} P_t$ , for mortality, where the standard errors, computed from (2.6)–(2.8), are given in
# parentheses. As expected, a negative trend is present in time as well as a negative
# coefficient for adjusted temperature.
#
# The quadratic effect of temperature can clearly be seen in the scatterplots of Fig. 2.3.
#
# Pollution weights positively and can be
# interpreted as the incremental contribution to daily deaths per unit of particulate
# pollution.
#
# It would still be essential to check the residuals $\hat{w_t} = M_t − \hat{M_t}$ for
# autocorrelation (of which there is a substantial amount), but we defer this question to Sect. 3.8 when we discuss regression with correlated errors.
#
# Below is the R code to plot the series, display the scatterplot matrix, fit the final regression model (2.21), and compute the corresponding values of AIC, AICc and
# BIC.
#
# Finally, the use of na.action in lm() is to retain the time series attributes for
# the residuals and fitted values.
#
# ```R
# par(mfrow=c(3,1)) # plot the data
# plot(cmort, main="Cardiovascular Mortality", xlab="", ylab="")
# plot(tempr, main="Temperature", xlab="", ylab="")
# plot(part, main="Particulates", xlab="", ylab="")
# dev.new() # open a new graphic device
# ts.plot(cmort,tempr,part, col=1:3) # all on same plot (not shown)
# dev.new()
# pairs(cbind(Mortality=cmort, Temperature=tempr, Particulates=part))
# temp = tempr-mean(tempr) # center temperature
# temp2 = temp^2
# trend = time(cmort) # time
# fit = lm(cmort~ trend + temp + temp2 + part, na.action=NULL)
# summary(fit) # regression results
# summary(aov(fit)) # ANOVA table (compare to next line)
# summary(aov(lm(cmort~cbind(trend, temp, temp2, part)))) # Table 2.1
# num = length(cmort) # sample size
# AIC(fit)/num - log(2*pi) # AIC
# BIC(fit)/num - log(2*pi) # BIC
# (AICc = log(sum(resid(fit)^2)/num) + (num+5)/(num-5-2)) # AICc
# ```
#
# As previously mentioned, it is possible to include lagged variables in time series
# regression models and we will continue to discuss this type of problem throughout
# the text. This concept is explored further in Problem 2.2 and Problem 2.10. The
# following is a simple example of lagged regression.
# ## Example 3.17 The PACF of an Invertible MA(q)
#
# For an invertible MA(q), we can write $x_t = -\sum_{j=1}^{\infty} \pi_j x_{t-j} + w_t$.
#
# Moreover, no finite representation exists.
#
# From this result, it should be apparent that the PACF will never cut off, as in the case of an AR(p).
#
# For an MA(1), $x_t = w_t + \theta w_{t-1}$, with $|\theta| < 1$, calculations similar to Example 3.15 will yield $\phi_{22} = -\theta^2 / (1 + \theta^2 + \theta^4)$.
#
# For the MA(1) in general, we can show
# that
#
# $\phi_{hh} = -\frac{(-\theta)^h (1-\theta^2)}{1 - \theta^{2(h+1)}}, \quad h \geq 1$
#
# In the next section, we will discuss methods of calculating the PACF. The PACF
# for MA models behaves much like the ACF for AR models. Also, the PACF for AR
# models behaves much like the ACF for MA models. Because an invertible ARMA
# model has an infinite AR representation, the PACF will not cut off. We may summarize
#
# these results in Table 3.1.
# # Prob 3.33
# 3.33 Fit an ARIMA(p, d, q) model to the global temperature data gtemp, performing
# all of the necessary diagnostics. After deciding on an appropriate
# model, forecast (with limits) the next 10 years. Comment.
# # Prob 3.42
# 3.42 Consider the series $x_t = w_t - w_{t-1}$, where $w_t$ is a white noise process with
# mean zero and variance $\sigma_w^2$. Suppose we consider the problem of predicting
# $x_{n+1}$, based on only $x_1, \dots, x_n$. Use the Projection Theorem to answer the
# questions below.
#
# (a) Show the best linear predictor is
# $$
# x_{n+1}^n = -\frac{1}{n+1} \sum_{k=1}^{n} k \, x_k .
# $$
#
# (b) Prove the mean square error is
# $$
# E\left(x_{n+1} - x_{n+1}^n\right)^2 = \frac{n+2}{n+1} \, \sigma_w^2 .
# $$
#
# # Prob 4.1
# 4.1 Verify that for any positive integer $n$ and $j, k = 0, 1, \dots, [[n/2]]$, where $[[\cdot]]$ denotes
# the greatest integer function:
#
# (a) Except for $j = 0$ or $j = n/2$,
# $$
# \sum_{t=1}^{n} \cos^2(2 \pi t j / n) = \sum_{t=1}^{n} \sin^2(2 \pi t j / n) = n/2 .
# $$
#
# (b) When $j = 0$ or $j = n/2$,
# $$
# \sum_{t=1}^{n} \cos^2(2 \pi t j / n) = n \quad \text{but} \quad \sum_{t=1}^{n} \sin^2(2 \pi t j / n) = 0 .
# $$
#
# (c) For $j \neq k$,
# $$
# \sum_{t=1}^{n} \cos(2 \pi t j / n) \cos(2 \pi t k / n) = \sum_{t=1}^{n} \sin(2 \pi t j / n) \sin(2 \pi t k / n) = 0 .
# $$
# Also, for any $j$ and $k$,
# $$
# \sum_{t=1}^{n} \cos(2 \pi t j / n) \sin(2 \pi t k / n) = 0 .
# $$
# # Prob 2.3
# 2.3 Repeat the following exercise six times and then discuss the results. Generate
# a random walk with drift, (1.4), of length n = 100 with $\delta = .01$ and $\sigma_w = 1$.
# Call the data $x_t$ for $t = 1, \dots, 100$. Fit the regression $x_t = \beta t + w_t$
# using least squares. Plot the data, the mean function (i.e., $\mu_t = .01t$) and the
# fitted line, $\hat{x}_t = \hat{\beta} t$, on the same graph. Discuss your results.
# The following R code may be useful:
#
# ```R
# par(mfcol = c(3,2)) # set up graphics
# for (i in 1:6){
# x = ts(cumsum(rnorm(100,.01,1))) # the data
# reg = lm(x~0+time(x), na.action=NULL) # the regression
# plot(x) # plot data
# lines(.01*time(x), col="red", lty="dashed") # plot mean
# abline(reg, col="blue") } # plot regression line
# ```
# # Prob 2.11
# 2.11 Consider the two weekly time series oil and gas. The oil series is in
# dollars per barrel, while the gas series is in cents per gallon; see Appendix R
# for details.
#
# (a) Plot the data on the same graph. Which of the simulated series displayed in
# §1.3 do these series most resemble? Do you believe the series are stationary
# (explain your answer)?
#
# (b) In economics, it is often the percentage change in price (termed growth rate
# or return), rather than the absolute price change, that is important. Argue
# that a transformation of the form $y_t = \nabla \log x_t$ might be applied to the
# data, where $x_t$ is the oil or gas price series [see the hint in Problem 2.8(d)].
#
# (c) Transform the data as described in part (b), plot the data on the same
# graph, look at the sample ACFs of the transformed data, and comment.
# [Hint: poil = diff(log(oil)) and pgas = diff(log(gas)).]
#
# (d) Plot the CCF of the transformed data and comment. The small, but significant
# values when gas leads oil might be considered as feedback. [Hint:
# ccf(poil, pgas) will have poil leading for negative lag values.]
#
# (e) Exhibit scatterplots of the oil and gas growth rate series for up to three
# weeks of lead time of oil prices; include a nonparametric smoother in each
# plot and comment on the results (e.g., Are there outliers? Are the
# relationships linear?). [Hint: lag.plot2(poil, pgas, 3).]
#
# (f) There have been a number of studies questioning whether gasoline prices
# respond more quickly when oil prices are rising than when oil prices are
# falling ("asymmetry"). We will attempt to explore this question here with
# simple lagged regression; we will ignore some obvious problems such as
# outliers and autocorrelated errors, so this will not be a definitive analysis.
# Let $G_t$ and $O_t$ denote the gas and oil growth rates.
#
# (i) Fit the regression (and comment on the results)
# $$
# G_t = \alpha_1 + \alpha_2 I_t + \beta_1 O_t + \beta_2 O_{t-1} + w_t ,
# $$
# where $I_t = 1$ if $O_t \geq 0$ and 0 otherwise ($I_t$ is the indicator of no
# growth or positive growth in oil price). Hint:
#
# ```R
# indi = ifelse(poil < 0, 0, 1)
# mess = ts.intersect(pgas, poil, poilL = lag(poil,-1), indi)
# summary(fit <- lm(pgas~ poil + poilL + indi, data=mess))
# ```
#
# (ii) What is the fitted model when there is negative growth in oil price at
# time t? What is the fitted model when there is no or positive growth
# in oil price? Do these results support the asymmetry hypothesis?
#
# (iii) Analyze the residuals from the fit and comment.
| problems/TimeSeriesHWProblems.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + dc={"key": "3"} deletable=false editable=false run_control={"frozen": true} tags=["context"]
# ## 1. The sisters and Google Trends
# <p>While I'm not a fan nor a hater of the Kardashians and Jenners, the polarizing family intrigues me. Why? Their marketing prowess. Say what you will about them and what they stand for, they are great at the hype game. Everything they touch turns to content.</p>
# <p>The sisters in particular over the past decade have been especially productive in this regard. Let's get some facts straight. I consider the "sisters" to be the following daughters of <NAME>. Three from her first marriage to lawyer <a href="https://en.wikipedia.org/wiki/Robert_Kardashian"><NAME></a>:</p>
# <ul>
# <li><a href="https://en.wikipedia.org/wiki/Kourtney_Kardashian">Kourtney Kardashian</a> (daughter of <NAME>, born in 1979)</li>
# <li><a href="https://en.wikipedia.org/wiki/Kim_Kardashian">Kim Kardashian</a> (daughter of <NAME>ian, born in 1980)</li>
# <li><a href="https://en.wikipedia.org/wiki/Khlo%C3%A9_Kardashian">Khloé Kardashian</a> (daughter of <NAME>ian, born in 1984)</li>
# </ul>
# <p>And two from her second marriage to Olympic gold medal-winning decathlete, <a href="https://en.wikipedia.org/wiki/Caitlyn_Jenner"><NAME></a> (formerly Bruce):</p>
# <ul>
# <li><a href="https://en.wikipedia.org/wiki/Kendall_Jenner"><NAME></a> (daughter of <NAME>, born in 1995)</li>
# <li><a href="https://en.wikipedia.org/wiki/Kylie_Jenner"><NAME></a> (daughter of <NAME>, born in 1997)</li>
# </ul>
# <p><img src="https://assets.datacamp.com/production/project_538/img/kardashian_jenner_family_tree.png" alt="<NAME>ner sisters family tree"></p>
# <p>This family tree can be confusing, but we aren't here to explain it. We're here to explore the data underneath the hype, and we'll do it using search interest data from Google Trends. We'll recreate the Google Trends plot to visualize their ups and downs over time, then make a few custom plots of our own. And we'll answer the big question: <strong>is Kim even the most famous sister anymore?</strong></p>
# <p>First, let's load and inspect our Google Trends data, which was downloaded in CSV form. The <a href="https://trends.google.com/trends/explore?date=2007-01-01%202019-03-21&q=%2Fm%2F0261x8t,%2Fm%2F043p2f2,%2Fm%2F043ttm7,%2Fm%2F05_5_yx,%2Fm%2F05_5_yh">query</a> parameters: each of the sisters, worldwide search data, 2007 to present day. (2007 was the year Kim became "active" according to Wikipedia.)</p>
# + dc={"key": "3"} tags=["sample_code"]
# Load pandas
import pandas as pd
# Read in dataset
trends = pd.read_csv('datasets/trends_kj_sisters.csv')
# Inspect data
trends.head()
# + dc={"key": "10"} deletable=false editable=false run_control={"frozen": true} tags=["context"]
# ## 2. Better "kolumn" names
# <p>So we have a column for each month since January 2007 and a column for the worldwide search interest for each of the sisters each month. By the way, Google defines the values of search interest as:</p>
# <blockquote>
# <p>Numbers represent search interest relative to the highest point on the chart for the given region and time. A value of 100 is the peak popularity for the term. A value of 50 means that the term is half as popular. A score of 0 means there was not enough data for this term.</p>
# </blockquote>
# <p>Okay, that's great Google, but you are not making this data easily analyzable for us. I see a few things. Let's do the column names first. A column named "<NAME>: (Worldwide)" is not the most usable for coding purposes. Let's shorten those so we can access their values better. Might as well standardize all column formats, too. I like lowercase, short column names.</p>
# + dc={"key": "10"} tags=["sample_code"]
# Make column names easier to work with
trends.columns = ['month', 'kim', 'khloe', 'kourtney', 'kendall', 'kylie']
# Inspect data
trends.head()
# + dc={"key": "17"} deletable=false editable=false run_control={"frozen": true} tags=["context"]
# ## 3. Pesky data types
# <p>That's better. We don't need to scroll our eyes across the table to read the values anymore since it is much less wide. And seeing five columns that all start with the letter "k" ... the aesthetics ... we should call them "kolumns" now! (Bad joke.)</p>
# <p>The next thing I see that is going to be an issue is that "<" sign. If <em>"a score of 0 means there was not enough data for this term,"</em> "<1" must mean it is between 0 and 1 and Google does not want to give us the fraction from google.trends.com for whatever reason. That's fine, but this "<" sign means we won't be able to analyze or visualize our data right away because those column values aren't going to be represented as numbers in our data structure. Let's confirm that by inspecting our data types.</p>
# + dc={"key": "17"} tags=["sample_code"]
# Inspect data types
trends.info()
# + dc={"key": "24"} deletable=false editable=false run_control={"frozen": true} tags=["context"]
# ## 4. From object to integer
# <p>Yes, okay, the <code>khloe</code>, <code>kourtney</code>, and <code>kendall</code> columns aren't integers like the <code>kim</code> and <code>kylie</code> columns are. Again, because of the "<" sign that indicates a search interest value between zero and one. Is this an early hint at the hierarchy of sister popularity? We'll see shortly. Before that, we'll need to remove that pesky "<" sign. Then we can change the type of those columns to integer.</p>
# + dc={"key": "24"} tags=["sample_code"]
# Loop through columns
for column in trends.columns:
if '<' in trends[column].to_string():
trends[column] = trends[column].str.replace('<', '')
trends[column] = pd.to_numeric(trends[column])
# Inspect data types and data
trends.info()
trends.head()
# + dc={"key": "31"} deletable=false editable=false run_control={"frozen": true} tags=["context"]
# ## 5. From object to datetime
# <p>Okay, great, no more "<" signs. All the sister columns are of integer type.</p>
# <p>Now let's convert our <code>month</code> column from type object to datetime to make our date data more accessible.</p>
# + dc={"key": "31"} tags=["sample_code"]
# Convert month to type datetime
trends['month'] = pd.to_datetime(trends['month'])
# Inspect data types and data
trends.info()
trends.head()
# + dc={"key": "38"} deletable=false editable=false run_control={"frozen": true} tags=["context"]
# ## 6. Set month as index
# <p>And finally, let's set the <code>month</code> column as our index to wrap our data cleaning. Having <code>month</code> as index rather than the zero-based row numbers will allow us to write shorter lines of code to create plots, where <code>month</code> will represent our x-axis.</p>
# + dc={"key": "38"} tags=["sample_code"]
# Set month as DataFrame index
trends = trends.set_index('month')
# Inspect the data
trends.head()
# + dc={"key": "45"} deletable=false editable=false run_control={"frozen": true} tags=["context"]
# ## 7. The early Kim hype
# <p>Okay! So our data is ready to plot. Because we cleaned our data, we only need one line of code (and just <em>thirteen</em> characters!) to remake the Google Trends chart, plus another line to make the plot show up in our notebook.</p>
# + dc={"key": "45"} tags=["sample_code"]
# Plot search interest vs. month
# %matplotlib inline
trends.plot()
# + dc={"key": "52"} deletable=false editable=false run_control={"frozen": true} tags=["context"]
# ## 8. Kylie's rise
# <p>Oh my! There is so much to make sense of here. Kim's <a href="https://en.wikipedia.org/wiki/Kim_Kardashian#2007%E2%80%932009:_Breakthrough_with_reality_television">sharp rise in 2007</a>, with the beginning of <a href="https://en.wikipedia.org/wiki/Keeping_Up_with_the_Kardashians"><em>Keeping Up with the Kardashians</em></a>, among other things. There was no significant search interest for the other four sisters until mid-2009 when Kourtney and Khloé launched the reality television series, <a href="https://en.wikipedia.org/wiki/Kourtney_and_Kim_Take_Miami"><em>Kourtney and Khloé Take Miami</em></a>. Then there was Kim's rise from famous to <a href="https://trends.google.com/trends/explore?date=all&geo=US&q=%2Fm%2F0261x8t,%2Fm%2F0d05l6">literally more famous than God</a> in 2011. This Cosmopolitan <a href="https://www.cosmopolitan.com/uk/entertainment/a12464842/who-is-kim-kardashian/">article</a> covers the timeline that includes the launch of music videos, fragrances, iPhone and Android games, another television series, joining Instagram, and more. Then there was Kim's ridiculous spike in December 2014: posing naked on the cover of Paper Magazine in a bid to break the internet will do that for you.</p>
# <p>A curious thing starts to happen after that bid as well. Let's zoom in...</p>
# + dc={"key": "52"} tags=["sample_code"]
# Zoom in from January 2014
trends.loc['2014.01':'2019.04'].plot()
# + dc={"key": "59"} deletable=false editable=false run_control={"frozen": true} tags=["context"]
# ## 9. Smooth out the fluctuations with rolling means
# <p>It looks like my suspicion may be true: Kim is not always the most searched Kardashian or Jenner sister. Since late-2016, at various months, Kylie overtakes Kim. Two big spikes where she smashed Kim's search interest: in September 2017 when it was reported that Kylie was expecting her first child with rapper <a href="https://en.wikipedia.org/wiki/Travis_Scott">Travis Scott</a> and in February 2018 when she gave birth to her daughter, <NAME>. The continued success of Kylie Cosmetics has kept her in the news, not to mention making her the "The Youngest Self-Made Billionaire Ever" <a href="https://www.forbes.com/sites/natalierobehmed/2019/03/05/at-21-kylie-jenner-becomes-the-youngest-self-made-billionaire-ever/#57e612c02794">according to Forbes</a>.</p>
# <p>These fluctuations are descriptive but do not really help us answer our question: is Kim even the most famous sister anymore? We can use rolling means to smooth out short-term fluctuations in time series data and highlight long-term trends. Let's make the window twelve months a.k.a. one year.</p>
# + dc={"key": "59"} tags=["sample_code"]
# Smooth the data with rolling means
trends.rolling(window=12).mean().plot()
# + dc={"key": "66"} deletable=false editable=false run_control={"frozen": true} tags=["context"]
# ## 10. Who's more famous? The Kardashians or the Jenners?
# <p>Whoa, okay! So by this metric, Kim is still the most famous sister despite Kylie being close and nearly taking her crown. Honestly, the biggest takeaway from this whole exercise might be Kendall not showing up that much. It makes sense, though, despite her <a href="http://time.com/money/5033357/kendall-jenner-makes-more-than-gisele-bundchen/">wildly successful modeling career</a>. Some have called her "<a href="https://www.nickiswift.com/5681/kendall-jenner-normal-one-family/">the only normal one in her family</a>" as she tends to shy away from the more dramatic and controversial parts of the media limelight that generate oh so many clicks.</p>
# <p>Let's end this analysis with one last plot. In it, we will plot (pun!) the Kardashian sisters against the Jenner sisters to see which family line is more popular now. We will use average search interest to make things fair, i.e., total search interest divided by the number of sisters in the family line.</p>
# <p><strong>The answer?</strong> Since 2015, it has been a toss-up. And in the future? With this family and their penchant for big events, who knows?</p>
# + dc={"key": "66"} tags=["sample_code"]
# Average search interest for each family line
trends['kardashian'] = trends[['kim', 'khloe', 'kourtney']].sum(axis=1) / 3
trends['jenner'] = trends[['kendall', 'kylie']].sum(axis=1) / 2
# Plot average family line search interest vs. month
trends[['kardashian', 'jenner']].plot()
trends.head()
| DataCamp/Up and Down With the Kardashians/notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ajenningsfrankston/Dynamic-Memory-Network-Plus-master/blob/master/tree_regression.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + pycharm={"name": "#%%\n"} colab={"base_uri": "https://localhost:8080/"} id="wnfdFeUNBjX2" outputId="a40dfa41-15e7-4f29-d506-e34d0518f52b"
"""
som_regression model
"""
# !pip install numerapi
# !pip install susi
import os
import gc
import pandas as pd
from numerapi import NumerAPI
import zipfile
import os
import susi
import matplotlib.pyplot as plt
from sklearn.linear_model import Ridge
from sklearn.ensemble import BaggingClassifier
TOURNAMENT_NAME = "kazutsugi"
TARGET_NAME = f"target"
PREDICTION_NAME = f"prediction_{TOURNAMENT_NAME}"
data_directory = "../kazutsugi/datasets/"
BENCHMARK = 0.002
BAND = 0.04
# Submissions are scored by Spearman correlation
def score(df):
return df[[TARGET_NAME, PREDICTION_NAME]].corr(method="spearman")[TARGET_NAME][PREDICTION_NAME]
# The payout function
def payout(scores):
return ((scores - BENCHMARK)/BAND).clip(lower=-1, upper=1)
def download_data():
data_archive = NumerAPI().download_current_dataset(dest_path='../tmp', unzip=False)
with zipfile.ZipFile(data_archive, "r") as zip_ref:
zip_ref.extractall("../kazutsugi/datasets")
# + pycharm={"name": "#%%\n"} id="eZMu2qHNBjX4"
def get_data():
download_data()
print("# Loading data...")
# The training data is used to train your model how to predict the targets.
training_data = pd.read_csv(data_directory + "numerai_training_data.csv").set_index("id")
# The tournament data is the data that Numerai uses to evaluate your model.
tournament_data = pd.read_csv(data_directory + "numerai_tournament_data.csv").set_index("id")
feature_names = [ f for f in training_data.columns if f.startswith("feature")]
print(f"Loaded {len(feature_names)} features"
return training_data,feature_names,tournament_data
# + pycharm={"name": "#%%\n"} id="ShCkZfcBBjX4"
def make_model(training_data,feature_names,tournament_data):
print("Training model")
X = training_data[feature_names]
Y = training_data[TARGET_NAME]
print(X.head())
som = susi.SOMClustering()
som.fit(X)
bmu_list = som.get_bmus(X)
plt.hist2d([x[0] for x in bmu_list], [x[1] for x in bmu_list])
model = Ridge(alpha=0.9)
model.fit(X, Y)
print("Generating predictions")
training_data[PREDICTION_NAME] = model.predict(training_data[feature_names])
tournament_data[PREDICTION_NAME] = model.predict(tournament_data[feature_names])
# Check the per-era correlations on the training set
train_correlations = training_data.groupby("era").apply(score)
print(
f"On training the correlation has mean {train_correlations.mean()} and std {train_correlations.std()}")
print(
f"On training the average per-era payout is {payout(train_correlations).mean()}")
# Check the per-era correlations on the validation set
validation_data = tournament_data[tournament_data.data_type == "validation"]
validation_correlations = validation_data.groupby("era").apply(score)
print(
f"On validation the correlation has mean {validation_correlations.mean()} and std {validation_correlations.std()}")
print(
f"On validation the average per-era payout is {payout(validation_correlations).mean()}")
# create destination directory if it does not exist
#
destination_dir = "../kazutsugi/submissions/"
if not os.path.exists(destination_dir):
os.makedirs(destination_dir)
submission_file = destination_dir + TOURNAMENT_NAME + "_submission.csv"
tournament_data[PREDICTION_NAME].to_csv(submission_file,header=True)
# + pycharm={"name": "#%%\n"} id="W0a7luOyBjX6" outputId="8fb8aebe-daf2-4902-f9af-9cd6b11ac45a"
training_data,feature_names,tournament_data = get_data()
# + pycharm={"name": "#%%\n"} id="uvO6tPK3BjX6"
make_model(training_data,feature_names,tournament_data)
# + [markdown] id="8rpvmvpjKaFE"
#
# + [markdown] id="fzzJlF5QKdnW"
#
# + id="UMqwIaqUZU39"
from numerapi import NumerAPI
n_id = "OML65REYFDPC5O7N22XCRP44BG2M74XH"
key = "<KEY>"
api = NumerAPI(public_id=n_id,secret_key=key)
base_path = "../kazutsugi/submissions/"
path = base_path + 'kazutsugi' + "_submission.csv"
#print('uploading')
#api.upload_predictions(path)
| tree_regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="YqsOmFjqmuvo"
import pandas as pd
pd.options.display.float_format = '{:.2f}'.format
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
plt.style.use('fivethirtyeight')
import plotly.express as px
import seaborn as sns
# + colab={"base_uri": "https://localhost:8080/"} id="dM1Z1dXqx0Ij" outputId="ae3b1480-db35-4e69-c909-2976181b53dc"
from google.colab import drive
drive.mount('/content/drive')
# + id="g3i1olzaoGgZ"
files = {'customers' : '/content/drive/MyDrive/Datasets/olist_customers_dataset.csv',
'geolocation' : '/content/drive/MyDrive/Datasets/olist_geolocation_dataset.csv',
'items' : '/content/drive/MyDrive/Datasets/olist_order_items_dataset.csv',
'payment' : '/content/drive/MyDrive/Datasets/olist_order_payments_dataset.csv',
'orders' : '/content/drive/MyDrive/Datasets/olist_orders_dataset.csv',
'products' : '/content/drive/MyDrive/Datasets/olist_products_dataset.csv',
'sellers' : '/content/drive/MyDrive/Datasets/olist_sellers_dataset.csv',
'review' : '/content/drive/MyDrive/Datasets/olist_order_reviews_dataset.csv',
}
dfs = {}
for key, value in files.items():
dfs[key] = pd.read_csv(value, on_bad_lines='skip')
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="UqSSW5PM1lG9" outputId="4d4f9159-a466-4fe1-caf6-d20e40ae2581"
dfs['customers'].head()
# + colab={"base_uri": "https://localhost:8080/"} id="8L6FT8n310xz" outputId="7a6e8ef5-2209-4210-94fa-018480de86d5"
dfs['customers'].info()
# + colab={"base_uri": "https://localhost:8080/"} id="K9cOBEmQyIk2" outputId="1e0937f8-b194-45a6-d199-6fe1328829ca"
dfs['customers'].isnull().sum()
# + colab={"base_uri": "https://localhost:8080/"} id="QKWUtUu10rJx" outputId="c4c27663-007b-451f-99f9-1cd68c37ba2b"
dfs['customers'].isna().sum()
# + colab={"base_uri": "https://localhost:8080/", "height": 314} id="59VMKGJI2J5x" outputId="c583c7a9-61eb-4ecc-f1fd-1617574bfd03"
dfs['orders'].head()
# + colab={"base_uri": "https://localhost:8080/"} id="1o9Ynjf12S0P" outputId="400bdf86-ef75-423a-fafc-ef5b9818c943"
dfs['orders'].info()
# + colab={"base_uri": "https://localhost:8080/"} id="8VcLNxuq0yON" outputId="73672e65-812c-42d4-cba1-650bc76ac6b7"
dfs['orders'].isnull().sum()
# + colab={"base_uri": "https://localhost:8080/"} id="6ui9uApL08Yu" outputId="c2a1087d-a486-4bea-cadc-f6bf72fcfe08"
dfs['orders'].isna().sum()
# + colab={"base_uri": "https://localhost:8080/", "height": 215} id="iqwGrByH2UTf" outputId="b977544a-aed2-4a9d-ead5-6fd6e01cc1b4"
dfs['items'].head()
# + colab={"base_uri": "https://localhost:8080/"} id="LjtFaml42VSu" outputId="332d2aea-e61d-4283-9773-5745cea90be0"
dfs['items'].info()
# + colab={"base_uri": "https://localhost:8080/"} id="gANNgSgm1GKW" outputId="fe3b65b4-54e0-4b37-fa34-ba5172d6f737"
dfs['items'].isnull().sum()
# + colab={"base_uri": "https://localhost:8080/"} id="gCOEmYbx1I01" outputId="2b45148f-f03a-4e12-db7e-3a07b6133a32"
dfs['items'].isna().sum()
# + colab={"base_uri": "https://localhost:8080/", "height": 215} id="bPN0p1kE2dlO" outputId="71cb16b3-df99-4847-85d2-2bdbd429fe21"
dfs['products'].head()
# + colab={"base_uri": "https://localhost:8080/"} id="zMdlmUJM2gHe" outputId="7b8d05df-cc23-40cd-d88e-2b5ff965eb1e"
dfs['products'].info()
# + colab={"base_uri": "https://localhost:8080/"} id="8PjP-6up1PBF" outputId="5b8ee31a-f6f7-4d9a-b4ed-ca46d424dffb"
dfs['products'].isnull().sum()
# + colab={"base_uri": "https://localhost:8080/"} id="Ct-M0CiV1PoU" outputId="e94118a8-7760-4287-a062-21e97c2afbf1"
dfs['products'].isna().sum()
# + colab={"base_uri": "https://localhost:8080/", "height": 215} id="7adDbaPw2hL2" outputId="c7d4e61d-0123-4913-cddf-cdcd9e811c34"
dfs['sellers'].head()
# + colab={"base_uri": "https://localhost:8080/"} id="AMbTFwdX2gse" outputId="df4ca20e-b57c-46b5-8dbb-cac8447ac578"
dfs['sellers'].info()
# + colab={"base_uri": "https://localhost:8080/"} id="vh2OrQ_K1Ypr" outputId="6f2f024f-a89a-454e-b6ff-f91a4e6bbd3d"
dfs['sellers'].isnull().sum()
# + colab={"base_uri": "https://localhost:8080/"} id="L_dhjQtk1Z47" outputId="ad945059-c262-4eb3-d73e-08d6a25ef7d8"
dfs['sellers'].isna().sum()
# + colab={"base_uri": "https://localhost:8080/", "height": 215} id="MUAepEO42lgO" outputId="c9e12157-8f49-4a43-82b4-8d5fe96b6a77"
dfs['payment'].head()
# + colab={"base_uri": "https://localhost:8080/"} id="QZs-7jV82o41" outputId="2e52e56e-33ce-446a-c518-e8e4981651d5"
dfs['payment'].info()
# + colab={"base_uri": "https://localhost:8080/"} id="_8u9ffNs1ijC" outputId="280ee754-b8f9-48a2-94fc-285238c95c67"
dfs['payment'].isnull().sum()
# + colab={"base_uri": "https://localhost:8080/"} id="ZJoVZyKc1jQi" outputId="3389eeb2-3ea1-42de-cc88-1eebb7d2c37f"
dfs['payment'].isna().sum()
# + colab={"base_uri": "https://localhost:8080/", "height": 215} id="9WW0BIGb2nrl" outputId="b98d97aa-f913-46e1-d020-87e823d0b61e"
dfs['geolocation'].head()
# + colab={"base_uri": "https://localhost:8080/"} id="u6zvNYvK2nLV" outputId="682901b0-1f86-4d22-9d5a-9e9f867a729c"
dfs['geolocation'].info()
# + colab={"base_uri": "https://localhost:8080/"} id="XUHmV86r16au" outputId="42569ddc-a7b3-40eb-abc8-256c69f23944"
dfs['geolocation'].isnull().sum()
# + colab={"base_uri": "https://localhost:8080/"} id="5r9tJOnc17w2" outputId="d8a178a8-05eb-4ec4-9e61-5df01e82a246"
dfs['geolocation'].isna().sum()
| Docs/DataReport/datasets_absolutos.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import sys
sys.path.insert(1, 'C:/Users/peter/Desktop/volatility-forecasting/midas')
from volatility import MIDAS
from weights import Beta
from base import BaseModel
from helper_functions import create_matrix
import pandas as pd
import numpy as np
import time
import statsmodels.api as sm
import matplotlib.pyplot as plt
# -
# ## Mixed Data Sampling Simulation
#
# The simulation was inspired by the Conrad and Kleen (2019) research paper. <br>
# Suppose $X_{i,t}$ is an AR(1) process:
# $$X_{i,t} = \phi X_{i-1, t} + \epsilon_t$$
# where t = 1, ..., T, i = 1, ..., $I_t$, and $I_t$ equals 22. With $\phi = 0.9$ and $\epsilon_t \sim \mathcal{N}(0, 1)$ a standard normal variable, the MIDAS equation is:
# $$y_t = m + \theta \sum_{k=0}^K \xi_k (1, w) X_{i-k, t} + z_t$$
# with the parameters m = 0.1, $\theta = 0.3$ and w = 4.0. To see how accurately our code estimates the theoretical parameters, we run a Monte Carlo simulation with several sample sizes (T): 100, 200, 500, 1000 and 2000. It turns out that even with a very small sample size the parameters are estimated accurately.
class MIDAS_sim(BaseModel):
def __init__(self, lag = 22, plot = True, *args):
self.lag = lag
self.plot = plot
self.args = args
def initialize_params(self, X):
self.init_params = np.linspace(1, 1, 3)
return self.init_params
def model_filter(self, params, X, y):
if isinstance(y, int) or isinstance(y, float):
T = y
else:
T = len(y)
model = np.zeros(T)
for i in range(T):
model[i] = params[0] + params[1] * Beta().x_weighted(X[i * self.lag : (i + 1) * self.lag].reshape((1, self.lag)), [1.0, params[2]])
return model
def loglikelihood(self, params, X, y):
return np.sum((y - self.model_filter(params, X, y)) ** 2)
def simulate(self, params = [0.1, 0.3, 4.0], num = 500, K = 22):
X = np.zeros(num * K)
y = np.zeros(num)
for i in range(num * K):
if i == 0:
X[i] = np.random.normal()
else:
X[i] = 0.9 * X[i - 1] + np.random.normal()
for i in range(num):
y[i] = params[0] + params[1] * Beta().x_weighted(X[i * K : (i + 1) * K].reshape((1, K)), [1.0, params[2]]) + np.random.normal(scale = 0.7**2)
return X, y
def create_sims(self, number_of_sims = 500, length = 500, K = 22, params = [0.1, 0.3, 4.0]):
lls, b0, b1, th, runtime = np.zeros(number_of_sims), np.zeros(number_of_sims), np.zeros(number_of_sims), np.zeros(number_of_sims), np.zeros(number_of_sims)
for i in range(number_of_sims):
np.random.seed(i)
X, y = self.simulate(params = params, num = length, K = K)
start = time.time()
self.fit(['pos', 'pos', 'pos'], X, y)
runtime[i] = time.time() - start
lls[i] = self.opt.fun
b0[i], b1[i], th[i] = self.optimized_params[0], self.optimized_params[1], self.optimized_params[2]
return pd.DataFrame(data = {'LogLike': lls,
'Beta0': b0,
'Beta1': b1,
'Theta':th})
def forecasting(self, X, k = 10):
X_n = np.zeros(k * 22)
for i in range(k * 22):
if i == 0:
X_n[i] = 0.9 * X[-1] + np.random.normal()
else:
X_n[i] = 0.9 * X_n[i - 1] + np.random.normal()
try:
y_hat = self.model_filter(self.optimized_params, X_n, k)
        except AttributeError:
            # model has not been fitted yet: ask for parameters and use them
            params = [float(p) for p in input('Please give the parameters:').split()]
            y_hat = self.model_filter(params, X_n, k)
return X_n, y_hat
sim100 = pd.read_csv('C:/Users/peter/Desktop/volatility-forecasting/results/midas_sim100.csv')
sim200 = pd.read_csv('C:/Users/peter/Desktop/volatility-forecasting/results/midas_sim200.csv')
sim500 = pd.read_csv('C:/Users/peter/Desktop/volatility-forecasting/results/midas_sim500.csv')
sim1000 = pd.read_csv('C:/Users/peter/Desktop/volatility-forecasting/results/midas_sim1000.csv')
sim2000 = pd.read_csv('C:/Users/peter/Desktop/volatility-forecasting/results/midas_sim2000.csv')
# +
beta0_100 = sm.nonparametric.KDEUnivariate(sim100.iloc[:, 2].values)
beta0_100.fit()
beta0_200 = sm.nonparametric.KDEUnivariate(sim200.iloc[:, 2].values)
beta0_200.fit()
beta0_500 = sm.nonparametric.KDEUnivariate(sim500.iloc[:, 2].values)
beta0_500.fit()
beta0_1000 = sm.nonparametric.KDEUnivariate(sim1000.iloc[:, 2].values)
beta0_1000.fit()
beta0_2000 = sm.nonparametric.KDEUnivariate(sim2000.iloc[:, 2].values)
beta0_2000.fit()
beta1_100 = sm.nonparametric.KDEUnivariate(sim100.iloc[:, 3].values)
beta1_100.fit()
beta1_200 = sm.nonparametric.KDEUnivariate(sim200.iloc[:, 3].values)
beta1_200.fit()
beta1_500 = sm.nonparametric.KDEUnivariate(sim500.iloc[:, 3].values)
beta1_500.fit()
beta1_1000 = sm.nonparametric.KDEUnivariate(sim1000.iloc[:, 3].values)
beta1_1000.fit()
beta1_2000 = sm.nonparametric.KDEUnivariate(sim2000.iloc[:, 3].values)
beta1_2000.fit()
theta_100 = sm.nonparametric.KDEUnivariate(sim100.iloc[:, 4].values)
theta_100.fit()
theta_200 = sm.nonparametric.KDEUnivariate(sim200.iloc[:, 4].values)
theta_200.fit()
theta_500 = sm.nonparametric.KDEUnivariate(sim500.iloc[:, 4].values)
theta_500.fit()
theta_1000 = sm.nonparametric.KDEUnivariate(sim1000.iloc[:, 4].values)
theta_1000.fit()
theta_2000 = sm.nonparametric.KDEUnivariate(sim2000.iloc[:, 4].values)
theta_2000.fit()
fig , ax = plt.subplots(3, 1, figsize=(15, 9), tight_layout=True)
ax[0].plot(beta0_100.support, beta0_100.density, lw = 3, label = 'N = 100', zorder = 10)
ax[0].plot(beta0_200.support, beta0_200.density, lw = 3, label = 'N = 200', zorder = 10)
ax[0].plot(beta0_500.support, beta0_500.density, lw = 3, label = 'N = 500', zorder = 10)
#ax[0].plot(beta0_1000.support, beta0_1000.density, lw = 3, label = 'N = 1000', zorder = 10)
#ax[0].plot(beta0_2000.support, beta0_2000.density, lw = 3, label = 'N = 2000', zorder = 10)
ax[0].set_title(r'$\beta_0$'+" (Act = 0.1) parameter's density from different sample sizes")
ax[0].grid(True, zorder = -5)
ax[0].set_xlim((0.0, 0.3))
ax[0].legend(loc = 'best')
ax[1].plot(beta1_100.support, beta1_100.density, lw = 3, label = 'N = 100', zorder = 10)
ax[1].plot(beta1_200.support, beta1_200.density, lw = 3, label = 'N = 200', zorder = 10)
ax[1].plot(beta1_500.support, beta1_500.density, lw = 3, label = 'N = 500', zorder = 10)
ax[1].plot(beta1_1000.support, beta1_1000.density, lw = 3, label = 'N = 1000', zorder = 10)
#ax[1].plot(beta1_2000.support, beta1_2000.density, lw = 3, label = 'N = 2000', zorder = 10)
#ax[1].set_title(r'$\beta_1$'+" (Act = 0.3) parameter's density from different samples size")
ax[1].grid(True, zorder = -5)
ax[1].set_xlim((0.2, 0.4))
ax[1].legend(loc = 'best')
ax[2].plot(theta_100.support, theta_100.density, lw = 3, label = 'N = 100', zorder = 10)
ax[2].plot(theta_200.support, theta_200.density, lw = 3, label = 'N = 200', zorder = 10)
ax[2].plot(theta_500.support, theta_500.density, lw = 3, label = 'N = 500', zorder = 10)
#ax[2].plot(theta_1000.support, theta_1000.density, lw = 3, label = 'N = 1000', zorder = 10)
#ax[2].plot(theta_2000.support, theta_2000.density, lw = 3, label = 'N = 2000', zorder = 10)
ax[2].set_title(r'$\theta$'+" (Act = 4.0) parameter's density from different sample sizes")
ax[2].grid(True, zorder = -5)
#ax[2].set_xlim((0.0, 1.0))
ax[2].legend(loc = 'best')
plt.show()
# -
def bias_2(actual_value, list_of_est, lags):
"""
Function for calculating the Squared Bias
Bias^2 = \sum_{j=0}^N (\hat{w_j} - w_j)^2
where \hat{w_j} is the estimated weight and w_j is the actual weight.
Parameters
----------
    actual_value : int or float
Theoretical value for the Beta lag function
list_of_est : array or value
Contain the estimated parameter for the Beta lag function
    lags : int or float
The number of lags
Returns
-------
bias : array or value
Contain the squared bias
"""
bias = np.zeros(len(list_of_est))
act = Beta().weights([1.0, actual_value], lags)
for i in range(len(list_of_est)):
w = Beta().weights([1.0, list_of_est[i]], lags)
bias[i] = np.sum( (w - act) ** 2 )
return bias
bs100 = bias_2(4.0, sim100.iloc[:, 4], 22)
bs200 = bias_2(4.0, sim200.iloc[:, 4], 22)
bs500 = bias_2(4.0, sim500.iloc[:, 4], 22)
bs1000 = bias_2(4.0, sim1000.iloc[:, 4], 22)
bs2000 = bias_2(4.0, sim2000.iloc[:, 4], 22)
pd.DataFrame(data = {'SquaredBias': [np.mean(bs100), np.mean(bs200), np.mean(bs500), np.mean(bs1000), np.mean(bs2000)],
'Std': [np.std(bs100), np.std(bs200), np.std(bs500), np.std(bs1000), np.std(bs2000)]},
index = [100, 200, 500, 1000, 2000])
| Examples/MIDAS_sim_v2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img alt="QuantRocket logo" src="https://www.quantrocket.com/assets/img/notebook-header-logo.png">
#
# © Copyright Quantopian Inc.<br>
# © Modifications Copyright QuantRocket LLC<br>
# Licensed under the [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/legalcode).
#
# <a href="https://www.quantrocket.com/disclaimer/">Disclaimer</a>
# # Universe Selection
#
# by <NAME>, <NAME>
#
# Selecting the product space in which an algorithm trades can be as important as, if not more than, the strategy itself. In this lecture, we will walk through the basics of constructing a universe.
# ## What is a Universe?
#
# On a high level, universe selection is the process of choosing the pool of securities upon which your algorithm will trade. For example, an algorithm designed to play with the characteristics of a universe consisting of technology equities may perform exceptionally well in that universe with the tradeoff of falling flat in other sectors. Experimenting with different universes by tweaking their components is an essential part of developing a trading strategy.
#
# Using Pipeline and the full US Stock dataset, we have access to over 8000 securities to choose from each day. However, the securities within this basket are markedly different. Some are different asset classes, some belong to different sectors and super-sectors, some employ different business models, some practice different management styles, and so on. By defining a universe, a trader can narrow in on securities with one or more of these attributes in order to craft a strategy that is most effective for that subset of the population.
#
# Without a properly-constructed universe, your algorithm may be exposed to risks that you just aren't aware of. For example, it could be possible that your universe selection methodology only selects a stock basket whose constituents do not trade very often. Let's say that your algorithm wants to place an order of 100,000 shares for a company that only trades 1,000 shares on a given day. The inability to fill this order or others might prevent you from achieving the optimal weights for your portfolio, thereby undermining your strategy. These risks can be controlled for by careful and thoughtful universe selection.
#
# In Zipline, universes are often implemented as a Pipeline screen. If you are not familiar with Pipeline, feel free to check out the [Pipeline Tutorial](https://www.quantrocket.com/code/?filter=zipline). Below is an example implementation of a universe that limits Pipeline output to the 500 securities with the largest revenue each day. This can be seen as a naive implementation of the Fortune500.
# + jupyter={"outputs_hidden": false}
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from zipline.pipeline.data import master
from zipline.pipeline import Pipeline
from zipline.pipeline.data import USEquityPricing
from zipline.research import run_pipeline
from zipline.pipeline.data import sharadar
from zipline.pipeline.factors import CustomFactor
# + jupyter={"outputs_hidden": false}
revenue = sharadar.Fundamentals.slice(dimension='ARQ', period_offset=0).REVENUE.latest
pipe = Pipeline(
columns={
'Revenue': revenue
},
screen=revenue.top(500)
)
res = run_pipeline(pipe, start_date='2016-01-04', end_date='2016-01-04', bundle='usstock-1d-bundle')
print("There are %d assets in this universe." % len(res))
res.head(10) # print 10 constituents
# -
# This is a good start, but again, it is a very naive universe. Normally, high revenue is a characteristic of a healthy, thriving company, but there are many other things that play into the construction of a good universe. While this idea has a reasonable economic basis, more analysis has to be conducted to determine the efficacy of this universe. There may be more subtle things occurring independently of the revenue of its constituent companies.
#
# For the rest of this notebook, we will design our own universe, profile it and check its performance. Let's create the Lectures500!
# ## Lectures500
#
# ### Sector Exposure
#
# If I create a universe that only looks at equities in the technology sector, my algorithm will have an extreme sector bias. Companies in the same industry sector are affected by similar macroeconomic trends and therefore their performance tends to be correlated. In the case of particular strategies, we may find the benefits of working exclusively within a particular sector greater than the downside risks, but this is not suitable for creating a general-purpose, quality universe.
#
# Let's have a look at the sector breakdown of the Lectures500.
# + jupyter={"outputs_hidden": false}
# Rename our universe to Lectures500
Lectures500 = revenue.top(500)
def get_sectors(day, universe, bundle):
pipe = Pipeline(columns={'Sector': master.SecuritiesMaster.usstock_Sector.latest}, screen=universe)
# Drop the datetime level of the index, since we only have one day of data
return run_pipeline(pipe, start_date=day, end_date=day, bundle=bundle).reset_index(level=0, drop=True)
def calculate_sector_counts(sectors):
counts = (sectors.groupby('Sector').size())
return counts
lectures500_sectors = get_sectors('2016-01-04', Lectures500, 'usstock-1d-bundle')
lectures500_counts = calculate_sector_counts(lectures500_sectors)
# + jupyter={"outputs_hidden": false}
def plot_sector_counts(sector_counts):
bar = plt.subplot2grid((10,12), (0,0), rowspan=10, colspan=6)
pie = plt.subplot2grid((10,12), (0,6), rowspan=10, colspan=6)
# Bar chart
sector_counts.plot(
kind='bar',
color='b',
rot=30,
ax=bar,
)
bar.set_title('Sector Exposure - Counts')
# Pie chart
sector_counts.plot(
kind='pie',
colormap='Set3',
autopct='%.2f %%',
fontsize=12,
ax=pie,
)
pie.set_ylabel('') # This overwrites default ylabel, which is None :(
pie.set_title('Sector Exposure - Proportions')
plt.tight_layout();
# + jupyter={"outputs_hidden": false}
plot_sector_counts(lectures500_counts)
# -
# From the above plots it is clear that there is a mild sector bias towards the consumer discretionary industry. Any big events that affect companies in this sector will have a large effect on this universe and any algorithm that uses it.
#
# One option is to equal-weight the sectors, so that equities from each industry sector make up an identical proportion of the final universe. This, however, comes with its own disadvantages. In a sector-equal Lectures500, the universe would include some lower-revenue real estate equities at the expense of higher-revenue consumer discretionary equities.
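# As an illustrative sketch (not part of the original lecture), a sector-balanced variant could be built with Pipeline's grouped ranking. The per-sector count below (45) is an assumption chosen so the total lands near 500 names:
# +
# Hypothetical sketch: take the top-revenue names within each sector so that
# every sector contributes the same number of constituents.
sector = master.SecuritiesMaster.usstock_Sector.latest
Lectures500_sector_equal = revenue.top(45, groupby=sector)
# -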
# ### Turnover
#
# Another thing to consider when designing a universe is the rate at which the universe changes. Turnover is a way of measuring this rate of change. Turnover is defined as the number of equities to enter or exit the universe in a particular time window.
#
# Let us imagine a universe with a turnover of 0. This universe would be completely unchanged by market movements. Moreover, stocks inappropriate for the universe would never be removed and stocks that should be included will never enter.
#
# Conversely, imagine a universe that changes every one of its constituents every day. An algorithm built on this universe will be forced to sell its entire portfolio every day. This incurs transaction costs which erode returns.
#
# When creating a universe, there is an inherent tradeoff between stagnation and sensitivity to the market.
#
# Let's have a look at the turnover for the Lectures500!
# +
res = run_pipeline(Pipeline(columns={'Lectures500' : Lectures500}), start_date='2015-01-01', end_date='2016-01-01', bundle='usstock-1d-bundle')
res = res.unstack().fillna(False).astype(int)
def calculate_daily_turnover(unstacked):
return (unstacked
.diff() # Get 1/0 (True/False) showing where values changed from previous day.
.abs() # take absolute value so that any turnover is a 1
.iloc[1:] # Drop first row, which is meaningless after diff().
.groupby(axis=1, level=0)
.sum()) # Group by universe and count number of 1 values in each row.
def plot_daily_turnover(unstacked):
# Calculate locations where the inclusion state of an asset changed.
turnover = calculate_daily_turnover(unstacked)
# Write the data to an axis.
ax = turnover.plot(figsize=(14, 8))
# Add style to the axis.
ax.grid(False)
ax.set_title('Changes per Day')
ax.set_ylabel('Number of Added or Removed Assets')
def print_daily_turnover_stats(unstacked):
turnover = calculate_daily_turnover(unstacked)
print(turnover.describe().loc[['mean', 'std', '25%', '50%', '75%', 'min', 'max']])
# + jupyter={"outputs_hidden": false}
plot_daily_turnover(res)
print_daily_turnover_stats(res)
# -
# #### Smoothing
#
# A good way to reduce turnover is through smoothing functions. Smoothing is the process of taking noisy data and aggregating it in order to analyze its underlying trends. When applied to universe selection, a good smoothing function prevents equities at the universe boundary from entering and exiting frequently.
#
# One example of a potential smoothing function is a filter that finds equities that have passed the Lectures500 criteria for 16 or more days out of the past 21 days. We will call this filter `AtLeast16`. This aggregation of many days of data lends a certain degree of flexibility to the edges of our universe. If, for example, Equity XYZ is very close to the boundary for inclusion, in a given month, it may flit in and out of the Lectures500 day after day. However, with the `AtLeast16` filter, Equity XYZ is allowed to enter and exit the daily universe a maximum of 5 times before it is excluded from the smoothed universe.
#
# Let's apply a smoothing function to our universe and see its effect on turnover.
# + jupyter={"outputs_hidden": false}
from zipline.pipeline.filters import AtLeastN
Lectures500 = AtLeastN(inputs=[Lectures500],
window_length=21,
N=16,)
res_smoothed = run_pipeline(Pipeline(columns={'Lectures500 Smoothed' : Lectures500}),
start_date='2015-01-01',
end_date='2016-01-01',
bundle='usstock-1d-bundle')
res_smoothed = res_smoothed.unstack().fillna(False).astype(int)
plot_daily_turnover(res_smoothed)
print_daily_turnover_stats(res_smoothed)
# -
# Looking at the metrics, we can see that the smoothed universe has a lower turnover than the original Lectures500. Since this is a good characteristic, we will add this logic to the universe.
#
# NB: Smoothing can also be accomplished by downsampling.
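# As a sketch (not from the original lecture), and assuming the pipeline term supports `downsample`, the underlying ranking could be recomputed only at a fixed frequency — here monthly — so that constituents stay fixed in between:
# +
# Hypothetical sketch: re-rank revenue only at the start of each month
Lectures500_monthly = revenue.top(500).downsample('month_start')
# -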
# ---
#
# **Next Lecture:** [The Capital Asset Pricing Model and Arbitrage Pricing Theory](Lecture30-CAPM-and-Arbitrage-Pricing-Theory.ipynb)
#
# [Back to Introduction](Introduction.ipynb)
# ---
#
# *This presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. ("Quantopian") or QuantRocket LLC ("QuantRocket"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, neither Quantopian nor QuantRocket has taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information believed to be reliable at the time of publication. Neither Quantopian nor QuantRocket makes any guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances.*
| quant_finance_lectures/Lecture29-Universe-Selection.ipynb |
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .js
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: Javascript (Node.js)
// language: javascript
// name: javascript
// ---
// A tiny factory for a Map with a default value: `get` falls back to `defaultValue`
// when a key is missing. Note that because it uses `||`, stored falsy values
// (0, '', false) will also fall back to the default.
var DefaultMap = (defaultValue = undefined) => {
    const map = new Map()
    const set = (key, value) => {map.set(key, value)}
    const get = key => map.get(key) || defaultValue
    return {set, get}
}
var myMap = DefaultMap(DefaultMap())
myMap.get(1).get(1)
myMap.get(1).set(1, 1)
myMap.get(1).get(1)
var myDefaultMap2d = DefaultMap(DefaultMap(0))
myDefaultMap2d.get('a').get('b') // 0
myDefaultMap2d.get('a').set('b', 42)
myDefaultMap2d.get('a').get('b') // 42
| DefaultMap.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## requests
# [Official documentation](https://2.python-requests.org//zh_CN/latest/user/quickstart.html)
# ### A simple example
# +
import requests
r = requests.get('https://api.github.com/events')
print(r.encoding)
print(r.url)
print(r.json()) # requests has a built-in JSON decoder; it raises an exception if decoding fails
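# -
# A small usage sketch (added for illustration): since the built-in decoder raises an exception on invalid JSON, the call can be guarded with try/except.
# +
try:
    data = r.json()       # parse the response body as JSON
    print(type(data), len(data))
except ValueError:        # JSON decoding errors derive from ValueError
    print('Response is not JSON:', r.status_code)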
| python/modules/jupyter/Requests.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# +
# Standard module imports
import numpy as np
import pandas as pd
# %matplotlib inline
# -
# Setup figures and upload plotting modules
style='notebook'
execfile('/Users/ttshimiz/Dropbox/Research/figure_setup.py')
# +
# Upload the sample data
df = pd.read_csv('../data/cleaned_sample.csv', index_col=0)
df_all = pd.read_csv('../data/cleaned_sample_all_seyferts.csv', index_col=0)
# Remove Sy 2s from cleaned sample
df = df[df['Type'] != 2]
# -
# Set the bin edges
lx_bins = np.array([40, 42, 42.5, 43., 43.5, 44., 44.5, 45., 46.])
# +
# Set the broad H-alpha vs. X-ray relationship
slope = 1.06459050797
intercept = -4.32396218268
scatter = 0.409383670093
# Set how far below the broad H-alpha vs. X-ray relationship to choose Sy 1.9 AGN
offset = 1.0
# +
# Pull out the Sy 1.9 that are offset
ind_sy1_9 = df['Type'] == 1.9
halpha_predict = np.log10(df['Intrinsic X-ray Luminosity'])*slope + intercept
diff_halpha = halpha_predict - np.log10(df['Broad Halpha Luminosity'])
off_srcs = diff_halpha > offset
on_srcs = diff_halpha < offset
# Pull out X-ray absorbed BLAGN
xray_abs = df['NH'] >= 22.0
xray_unabs = df['NH'] < 22.0
# -
# Look at the total number of optically obscured and X-ray absorbed AGN
print 'Total # BLAGN =',len(df), ' | X-ray unabsorbed | X-ray absorbed'
print 'Optically unobscured | ', sum(xray_unabs & on_srcs), ' | ', sum(xray_abs & on_srcs)
print 'Optically obscured | ', sum(xray_unabs & off_srcs), ' | ', sum(xray_abs & off_srcs)
# Look at the total number of Sy 1.9 optically obscured and X-ray absorbed AGN
print 'Total # Sy 1.9 =',sum(ind_sy1_9), ' | X-ray unabsorbed | X-ray absorbed'
print 'Optically unobscured | ', sum(xray_unabs & on_srcs & ind_sy1_9), ' | ', sum(xray_abs & on_srcs & ind_sy1_9)
print 'Optically obscured | ', sum(xray_unabs & off_srcs & ind_sy1_9), ' | ', sum(xray_abs & off_srcs & ind_sy1_9)
# +
# Count the total number of AGN, offset Sy 1.9, and X-ray absorbed BLAGN in each Lx bin
n_total = np.zeros(len(lx_bins)-1)
n_broad = np.zeros(len(lx_bins)-1)
n_off = np.zeros(len(lx_bins)-1)
n_xray_abs = np.zeros(len(lx_bins)-1)
for i in range(len(n_total)):
ind_total = ((np.log10(df_all['Intrinsic X-ray Luminosity']) > lx_bins[i]) &
(np.log10(df_all['Intrinsic X-ray Luminosity']) < lx_bins[i+1]))
n_total[i] = sum(ind_total)
ind_offset = ((np.log10(df['Intrinsic X-ray Luminosity'][off_srcs]) > lx_bins[i]) &
(np.log10(df['Intrinsic X-ray Luminosity'][off_srcs]) < lx_bins[i+1]))
n_off[i] = sum(ind_offset)
ind_broad = ((np.log10(df['Intrinsic X-ray Luminosity']) > lx_bins[i]) &
(np.log10(df['Intrinsic X-ray Luminosity']) < lx_bins[i+1]))
n_broad[i] = sum(ind_broad)
ind_xabs = ((np.log10(df['Intrinsic X-ray Luminosity'][xray_abs]) > lx_bins[i]) &
(np.log10(df['Intrinsic X-ray Luminosity'][xray_abs]) < lx_bins[i+1]))
n_xray_abs[i] = sum(ind_xabs)
print 'log(LX) =',lx_bins[i],'-',lx_bins[i+1],': Ntotal =',n_total[i],', Noff =', n_off[i],', Nbroad =', n_broad[i], ', Nxrayabs =', n_xray_abs[i]
# +
# Plot the fraction of offset sources
fig = plt.figure()
ax = fig.add_subplot(111)
bin_centers = (lx_bins[1:]+lx_bins[0:-1])/2.
ax.plot(bin_centers, n_off/n_broad, 'ko')
ax.plot(bin_centers, n_off/n_total, 'ro')
ax.set_xlabel(r'$\log(L_{\rm X})$ [erg s$^{-1}$]')
ax.set_ylabel('Fraction of Offset')
ax.set_ylim(-0.01, ax.get_ylim()[1])
ax.legend(['Fraction of BLAGN', 'Fraction of All AGN'], loc='upper left', fontsize=10)
sn.despine()
#fig.savefig('../figures/frac_offsetSy1_9_vs_lx.pdf', bbox_inches='tight')
# +
# Plot the fraction of X-ray absorbed BLAGN
fig = plt.figure()
ax = fig.add_subplot(111)
bin_centers = (lx_bins[1:]+lx_bins[0:-1])/2.
ax.plot(bin_centers, n_xray_abs/n_broad, 'ko-', label='X-ray Absorbed')
ax.plot(bin_centers+0.1, n_off/n_broad, 'ro--', label='Optically Obscured')
ax.set_xlabel(r'$\log(L_{\rm X})$ [erg s$^{-1}$]')
ax.set_ylabel('Fraction')
ax.set_ylim(-0.01, ax.get_ylim()[1])
ax.set_xlim(40.5, 46.0)
ax.legend(loc='upper right', fontsize=14)
sn.despine()
fig.savefig('../figures/frac_blagn_xray_abs_and_opt_obs_vs_lx.pdf', bbox_inches='tight')
# +
# Plot the fraction of X-ray absorbed BLAGN
fig = plt.figure()
ax = fig.add_subplot(111)
bin_centers = (lx_bins[1:]+lx_bins[0:-1])/2.
ax.plot(bin_centers, n_off/n_broad, 'ko-')
#ax.plot(bin_centers+0.1, n_sy1_9/n_broad, 'ro')
ax.set_xlabel(r'$\log(L_{\rm X})$ [erg s$^{-1}$]')
ax.set_ylabel('Fraction of Optically obscured BLAGN')
ax.set_ylim(-0.01, ax.get_ylim()[1])
ax.set_xlim(41.0, 46.0)
#ax.legend(['Fraction of BLAGN', 'Fraction of All AGN'], loc='upper left', fontsize=10)
sn.despine()
#fig.savefig('../figures/frac_blagn_xray_abs_vs_lx.pdf', bbox_inches='tight')
| notebooks/fraction-obscured-vs-lx.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
rcParams['figure.figsize'] = (16, 4) #wide graphs by default
# # Segmentation
# ## Structural segmentation
# <NAME>., & <NAME>. (1999). Multifeature audio segmentation for browsing and annotation. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, 1–4. Retrieved from http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=810860
from essentia.streaming import *
sr = 44100
loader = MonoLoader(filename = 'sources/Dire Straits - Walk of life.mp3', sampleRate=sr)
frameCutter = FrameCutter(frameSize = 1024, hopSize = 512)
w = Windowing(type = 'hann')
spec = Spectrum()
mfcc = MFCC()
loader.audio >> frameCutter.signal
frameCutter.frame >> w.frame >> spec.frame
spec.spectrum >> mfcc.spectrum
# +
pool = essentia.Pool()
mfcc.bands >> (pool, 'lowlevel.mfcc_bands')
mfcc.mfcc >> (pool, 'lowlevel.mfcc')
# -
essentia.run(loader)
imshow(pool['lowlevel.mfcc'].T[1:,:], aspect = 'auto', interpolation='nearest')
essentia.reset(loader)
loader.inputNames(), loader.outputNames()
frameCutter.inputNames(), frameCutter.outputNames()
frameCutter.connections
loader
loader.audio
frameCutter.signal
w
w.frame
frameCutter.frame
frameCutter.connections[frameCutter.frame]
w.frame in frameCutter.connections[frameCutter.frame]
# We can change parameters for any *algorithm* in the processing chain:
loader.configure(filename='sources/<NAME> - Buffalo Soldier.mp3')
essentia.run(loader)
imshow(pool['lowlevel.mfcc'].T[1:,:], aspect = 'auto', interpolation='nearest')
# If we hadn't adjusted the loader, we would have had to call:
#
# essentia.reset(loader)
#
# Because the file reader would be at the end of the file.
# ## Using essentia to calculate texture windows
# +
sr = 22050
frameSize = 1024
hopSize = 512
loader = MonoLoader(filename = 'sources/Dire Straits - Walk of life.mp3', sampleRate=sr)
frameCutter = FrameCutter(frameSize = frameSize, hopSize = hopSize)
w = Windowing(type = 'hann')
spec = Spectrum()
mfcc = MFCC()
centroid = Centroid()
pool = essentia.Pool()
# +
loader.audio >> frameCutter.signal
frameCutter.frame >> w.frame >> spec.frame
spec.spectrum >> mfcc.spectrum
spec.spectrum >> centroid.array
mfcc.mfcc >> (pool, 'lowlevel.mfcc')
centroid.centroid >> (pool, 'lowlevel.centroid')
# -
# Common error: If I try this again:
# +
loader.audio >> frameCutter.signal
frameCutter.frame >> w.frame >> spec.frame
spec.spectrum >> mfcc.spectrum
spec.spectrum >> centroid.array
mfcc.mfcc >> (pool, 'lowlevel.mfcc')
centroid.centroid >> (pool, 'lowlevel.centroid')
# -
# Oops... Need to clear the connections first. The easiest way is just to recreate the object (Python will do the garbage collection for you)
# +
sr = 22050
frameSize = 1024
hopSize = 512
loader = MonoLoader(filename = 'sources/Dire Straits - Walk of life.mp3', sampleRate=sr)
frameCutter = FrameCutter(frameSize = frameSize, hopSize = hopSize)
w = Windowing(type = 'hann')
spec = Spectrum()
centroid = Centroid()
rolloff = RollOff()
flux = Flux()
zcr = ZeroCrossingRate()
rms = RMS()
# +
# Texture windows
textureTime = 1.0 # seconds
textureSize = int(textureTime * sr/float(hopSize))
textureCutter = FrameCutter(frameSize = textureSize, hopSize = textureSize)
pool = essentia.Pool()
# +
loader.audio >> frameCutter.signal
frameCutter.frame >> w.frame >> spec.frame
spec.spectrum >> centroid.array
spec.spectrum >> rolloff.spectrum
spec.spectrum >> flux.spectrum
frameCutter.frame >> zcr.signal
frameCutter.frame >> rms.array
centroid.centroid >> (pool, 'lowlevel.centroid')
rolloff.rollOff >> (pool, 'lowlevel.rolloff')
flux.flux >> (pool, 'lowlevel.flux')
zcr.zeroCrossingRate >> (pool, 'lowlevel.zcr')
rms.rms >> (pool, 'lowlevel.rms')
# -
essentia.run(loader)
plot(pool['lowlevel.centroid'])
plot(pool['lowlevel.rms'])
# ## Texture windows
# +
sr = 44100
frameSize = 1024
hopSize = 512
loader = MonoLoader(filename = 'sources/Stevie Wonder - Superstition.mp3', sampleRate=sr)
frameCutter = FrameCutter(frameSize = frameSize, hopSize = hopSize)
w = Windowing(type = 'hann')
spec = Spectrum()
centroid = Centroid()
rolloff = RollOff()
flux = Flux()
zcr = ZeroCrossingRate()
rms = RMS()
# +
# Texture windows
textureTime = 1.0 # seconds
textureSize = int(textureTime * sr/float(hopSize))
textureWindowCutters = []
textureWindowMeans = []
textureWindowVars = []
for i in range(5):
textureWindowCutters.append(FrameCutter(frameSize = textureSize, hopSize = textureSize))
textureWindowMeans.append(Mean())
textureWindowVars.append(Variance())
pool = essentia.Pool()
# +
loader.audio >> frameCutter.signal
frameCutter.frame >> w.frame >> spec.frame
spec.spectrum >> centroid.array
spec.spectrum >> rolloff.spectrum
spec.spectrum >> flux.spectrum
frameCutter.frame >> zcr.signal
frameCutter.frame >> rms.array
centroid.centroid >> (pool, 'lowlevel.centroid')
rolloff.rollOff >> (pool, 'lowlevel.rolloff')
flux.flux >> (pool, 'lowlevel.flux')
zcr.zeroCrossingRate >> (pool, 'lowlevel.zcr')
rms.rms >> (pool, 'lowlevel.rms')
# -
# Now the texture windows:
# +
centroid.centroid >> textureWindowCutters[0].signal
rolloff.rollOff >> textureWindowCutters[1].signal
flux.flux >> textureWindowCutters[2].signal
zcr.zeroCrossingRate >> textureWindowCutters[3].signal
rms.rms >> textureWindowCutters[4].signal
features = ['lowlevel.centroid', 'lowlevel.rolloff', 'lowlevel.flux', 'lowlevel.zcr', 'lowlevel.rms']
for i in range(5):
textureWindowCutters[i].frame >> textureWindowMeans[i].array
textureWindowCutters[i].frame >> textureWindowVars[i].array
textureWindowMeans[i].mean >> (pool, '%s_mean'%features[i])
textureWindowVars[i].variance >> (pool, '%s_var'%features[i])
# -
essentia.run(loader)
plot(pool['lowlevel.rms'])
plot(pool['lowlevel.rms_mean'])
dur = (hopSize*len(pool['lowlevel.rms']))/float(sr) # duration in seconds
rms = pool['lowlevel.rms']
rms_mean = pool['lowlevel.rms_mean']
plot(linspace(0, dur, len(rms)), rms)
plot(linspace(0, dur, len(rms_mean)), rms_mean, lw=3)
dur = (hopSize*len(pool['lowlevel.rms']))/float(sr)
rms = pool['lowlevel.rms']
rms_mean = pool['lowlevel.rms_mean']
rms_var = pool['lowlevel.rms_var']
plot(linspace(0, dur, len(rms)), rms)
plot(linspace(0, dur, len(rms_mean)), rms_mean, lw=3)
twinx()
plot(linspace(0, dur, len(rms_var)), rms_var, lw=3, color='r')
all_features = []
for ft in features:
all_features.append(ft+'_mean')
all_features.append(ft+'_var')
feat_vectors = array( [pool[feat_vector_name] for feat_vector_name in all_features] ,dtype=float)
feat_vectors.shape
# ## Euclidean distance
from scipy.spatial.distance import euclidean
feat_vect_frame = feat_vectors[:,0]
feat_vect_frame
euclidean(feat_vectors[:,0], feat_vectors[:,1])
euclidean(feat_vectors[:,0], feat_vectors[:,0])
euc_distances = []
for i in range(feat_vectors.shape[1] - 1):
cdist = euclidean(feat_vectors[:,i], feat_vectors[:,i+1])
euc_distances.append(cdist)
plot(euc_distances)
plot(diff(euc_distances))
# +
diff_euc = diff(euc_distances)
euc_peaks = argwhere(diff_euc>0.2e7)
plot(diff_euc)
plot(euc_peaks, diff_euc[euc_peaks], 'o')
# +
rms = pool['lowlevel.rms']
dur = (hopSize*len(rms))/float(sr)
plot(linspace(0, dur, len(rms)), rms)
vlines(euc_peaks[:,0], -0.05, 0.3)
for peak in euc_peaks[:,0]:
text(peak, 0.31, '%.1f'%peak)
# -
# ## Cosine distance
# http://en.wikipedia.org/wiki/Cosine_distance
#
# Measures similarity in orientation (multidimensional) but not in magnitude
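# For reference, the cosine distance between feature vectors $u$ and $v$ is
#
# $$d_{\cos}(u, v) = 1 - \frac{u \cdot v}{\|u\| \, \|v\|}$$
#
# so two vectors pointing in the same direction have distance 0, regardless of their magnitudes.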
from scipy.spatial.distance import cosine
cosine(feat_vectors[:,0], feat_vectors[:,1])
cosine(feat_vectors[:,0], feat_vectors[:,0])
cos_distances = []
for i in range(feat_vectors.shape[1] - 1):
cdist = cosine(feat_vectors[:,i], feat_vectors[:,i+1])
cos_distances.append(cdist)
plot(cos_distances)
plot(diff(cos_distances))
# +
diff_cos = diff(cos_distances)
cos_peaks = argwhere(diff_cos>0.000008)
plot(diff_cos)
plot(cos_peaks, diff_cos[cos_peaks], 'o')
# -
cos_peaks
# +
rms = pool['lowlevel.rms']
dur = (hopSize*len(rms))/float(sr)
plot(linspace(0, dur, len(rms)), rms)
vlines(cos_peaks[:,0], -0.05, 0.3)
for peak in cos_peaks[:,0]:
text(peak, 0.31, '%.1f'%peak)
# -
dur
# ## Mahalanobis distance
# http://en.wikipedia.org/wiki/Mahalanobis_distance
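# For reference, with $S$ the covariance matrix of the feature vectors, the Mahalanobis distance is
#
# $$d_M(u, v) = \sqrt{(u - v)^T S^{-1} (u - v)}$$
#
# which is why the inverse covariance matrix is computed below before calling `mahalanobis`.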
from scipy.spatial.distance import mahalanobis
# Each texture window has a 10-dimensional feature vector:
feat_vectors[:,1].reshape(10,1)
covmat = cov(feat_vectors)
covmat
invcov = inv(covmat)
invcov
mahalanobis(feat_vectors.T[0].T, feat_vectors.T[1], invcov)
mahalanobis(feat_vectors.T[0].T, feat_vectors.T[0], invcov)
mah_distances = []
for i in range(feat_vectors.shape[1] - 1):
cdist = mahalanobis(feat_vectors[:,i], feat_vectors[:,i+1], invcov)
mah_distances.append(cdist)
plot(mah_distances)
# +
diff_mah = diff(mah_distances)
mah_peaks = argwhere(diff_mah>2.5)
plot(diff_mah)
plot(mah_peaks, diff_mah[mah_peaks], 'o')
# +
rms = pool['lowlevel.rms']
dur = (hopSize*len(rms))/float(sr)
plot(linspace(0, dur, len(rms)), rms)
vlines(mah_peaks[:,0], -0.05, 0.3)
for peak in mah_peaks[:,0]:
text(peak, 0.31, '%.1f'%peak)
# -
# Now all results:
# +
rms = pool['lowlevel.rms']
dur = (hopSize*len(rms))/float(sr)
plot(linspace(0, dur, len(rms)), rms, alpha=0.2)
vlines(mah_peaks[:,0], -0.05, 0.25, 'r', lw=3)
for peak in mah_peaks[:,0]:
text(peak, 0.26, '%.1f'%peak, color='red')
vlines(cos_peaks[:,0], -0.05, 0.3, 'g', lw=3)
for peak in cos_peaks[:,0]:
text(peak, 0.31, '%.1f'%peak, color='g')
vlines(euc_peaks[:,0], -0.05, 0.3, 'b', lw=3)
for peak in euc_peaks[:,0]:
text(peak, 0.35, '%.1f'%peak, color='g')
# -
# There are may other ways of calculating vector distance:
#
# http://docs.scipy.org/doc/scipy/reference/spatial.distance.html
#
# http://scikit-learn.org/stable/modules/classes.html#module-sklearn.metrics.pairwise
#
# How can this segmentation metric be improved?
#
# *Hint: How does this relate to the self-similarity matrix?*
# ## Event segmentation
sr = 44100
loader = MonoLoader(filename = 'sources/superstition.wav', sampleRate=sr)
loader.audio
pool = essentia.Pool()
loader.audio >> (pool, "samples")
essentia.run(loader)
plot(pool['samples']);
rhythmext = RhythmExtractor2013()
loader.audio >> rhythmext.signal
rhythmext.ticks >> (pool, 'rhythm.ticks')
rhythmext.bpm >> (pool, 'rhythm.bpm')
rhythmext.confidence >> (pool, 'rhythm.confidence')
rhythmext.estimates >> (pool, 'rhythm.estimates')
rhythmext.bpmIntervals >> (pool, 'rhythm.bpmIntervals')
essentia.reset(loader)
pool.clear()
essentia.run(loader)
pool['rhythm.ticks']
pool['rhythm.bpm']
# +
dur = len(pool['samples'].flat)/float(sr)
plot(linspace(0, dur, len(pool['samples'].flat)), pool['samples'].flat);
plot(pool['rhythm.ticks'], zeros_like(pool['rhythm.ticks']), 'o')
# -
frameSize = 1024
hopSize = 256
spec = Spectrum()
onsetdetect = OnsetDetection(method='flux')
frameCutter = FrameCutter(frameSize = frameSize, hopSize = hopSize)
w = Windowing(type = 'hann')
loader.audio >> frameCutter.signal
frameCutter.frame >> w.frame >> spec.frame
spec.spectrum >> onsetdetect.spectrum
spec.spectrum >> onsetdetect.phase
onsetdetect.onsetDetection >> (pool, 'onsetDetection')
essentia.reset(loader)
pool.clear()
essentia.run(loader)
plot(pool['onsetDetection'])
diff_onsets = diff(pool['onsetDetection'])
plot(diff_onsets)
onsets = argwhere(diff_onsets > 0.1)
plot(diff_onsets)
plot(onsets, zeros_like(onsets), 'o')
# TODO:
#
# * Filter out onsets that are too close (a minimal sketch follows below)
# * Then segment and find similarity between each slice
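# A minimal sketch of the first TODO item (added for illustration; the minimum gap is an assumed value): convert the peak indices to seconds and drop any onset closer than 50 ms to the previously kept one.
# +
# Hypothetical sketch: keep only onsets at least `min_gap` seconds apart
min_gap = 0.05                                     # assumed minimum gap (50 ms)
onset_times = onsets[:, 0] * hopSize / float(sr)   # frame index -> seconds
filtered_onsets = list(onset_times[:1])
for t in onset_times[1:]:
    if t - filtered_onsets[-1] >= min_gap:
        filtered_onsets.append(t)
len(onset_times), len(filtered_onsets)
# -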
# More todo:
#
# * Use checkerboard kernel with self-similarity matrix (a sketch is given after the kernel construction below)
#
# <NAME>. (2000). Automatic audio segmentation using a measure of audio novelty. Multimedia and Expo, 2000. ICME 2000. 2000 IEEE …, 1, 452–455. Retrieved from http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=869637
from scipy.ndimage.filters import gaussian_filter
kernel = zeros((65, 65))
kernel[32,32] = 1
kernel = gaussian_filter(kernel, 16)
# +
from mpl_toolkits.mplot3d import Axes3D
fig = figure()
ax = Axes3D(fig)
X = arange(65)
Y = arange(65)
X, Y = meshgrid(X, Y)
ax.plot_surface(X, Y, kernel, rstride=1, cstride=1, cmap=cm.hot)
# +
checkerboard = array(r_[ones(33), -ones(32)])
for i in range(32):
checkerboard = column_stack((checkerboard, r_[ones(33), -ones(32)]))
for i in range(32):
checkerboard = column_stack((checkerboard, r_[-ones(32), ones(33)]))
# -
kernel*checkerboard
# +
fig = figure()
ax = Axes3D(fig)
X = arange(65)
Y = arange(65)
X, Y = meshgrid(X, Y)
ax.plot_surface(X, Y, kernel*checkerboard, rstride=1, cstride=1, cmap=cm.hot)
# -
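# As a sketch of how the checkerboard kernel above could be applied (added for illustration, not part of the original notebook): build a self-similarity matrix from the texture-window feature vectors and correlate the tapered checkerboard along its main diagonal to obtain a Foote-style novelty curve.
# +
from scipy.spatial.distance import cdist
# Hypothetical sketch of a novelty curve (Foote, 2000)
vectors = feat_vectors.T                        # one row per texture window
ssm = 1 - cdist(vectors, vectors, 'cosine')     # cosine self-similarity matrix
ck = kernel * checkerboard                      # Gaussian-tapered checkerboard
half = ck.shape[0] // 2
novelty = zeros(ssm.shape[0])
for i in range(half, ssm.shape[0] - half):
    patch = ssm[i - half:i + half + 1, i - half:i + half + 1]
    novelty[i] = (patch * ck).sum()
plot(novelty)
# -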
# By: <NAME> <EMAIL>
#
# For Course MAT 240E at UCSB
#
# This ipython notebook is licensed under the CC-BY-NC-SA license: http://creativecommons.org/licenses/by-nc-sa/4.0/
#
# 
| notebooks/segmentation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab={"base_uri": "https://localhost:8080/"} id="VonJnPkFb_f0" outputId="c0527e5a-48c1-4ab7-b07f-0b873fb87c85"
# # Imports
# + id="KsVq3uA0zp9w"
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchsummary import summary
import torchvision
from torchvision import models, transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader, Dataset
import pytorch_lightning as pl
from PIL import Image
import cv2
import sklearn
from sklearn.metrics import roc_curve, auc, log_loss, precision_score, f1_score, recall_score, confusion_matrix
from sklearn.model_selection import train_test_split
import matplotlib as mplb
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import os
import zipfile
import shutil
from tqdm.notebook import tqdm
import warnings
warnings.filterwarnings(action='ignore')
import random
print(f'[INFO] Using pytorch-lighning version : {pl.__version__}')
# + [markdown] id="fYKYqm36XszQ"
# # Getting datasets
# + id="O0Fyim2_jR6A"
class_names = ['NEG', 'POS']
data_dir = '../Datasets/Zip'
base_dir = '../'
# create dirs
PATHS = {
'cwd': './', #Current directory
'arch': '../Datasets/MODELS/', #Folder in which we're going to save our models
'raw': '../Datasets/Csv/', #Folder containing the training files
'images': '../Datasets/Images/train/', #this folder is going to store our images files
'test_images' : '../Datasets/Images/test/'
}
os.makedirs(PATHS['arch'], exist_ok=True)
seed_val = 2020 # for reproductibility
random.seed(seed_val)
np.random.seed(seed_val)
torch.manual_seed(seed_val)
if torch.cuda.is_available():
torch.cuda.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
# + id="M1JTztL8cf1B"
# extract files
files_base_path = os.path.join(data_dir, 'data', 'Zip')
def extract_files(task='train', dest = '../Datasets/Zip/Images'):
src = f"{files_base_path}/{task}"
dest = os.path.join(dest, f'{task}')
try:
with zipfile.ZipFile(f"{src}.zip", 'r') as zip_ref:
# Loop over each file
for file in tqdm(iterable=zip_ref.namelist(), total=len(zip_ref.namelist()), desc="Extrating files"):
# Extract each file to another directory
# If you want to extract to current working directory, don't specify path
zip_ref.extract(member=file, path=dest)
# print('[INFO] Done !')
except Exception as ex:
print(f'[ERROR] {ex}')
# split files into classes
def create_training_folders(images_path='../Datasets/Zip/Images/train/train'):
images = [img for img in os.listdir(images_path) if len(img.split('.'))>1]
base_path = images_path
for imag in tqdm(images, desc='Moving files'):
img_path = os.path.join(base_path, imag)
try:
label = train_df.loc[train_df['ID'] == imag.split('.')[0]].values[0][2]
if label == 1:
src = img_path
dest = os.path.join(base_path, 'positive')
try:
shutil.move(src, dest)
except Exception as ex:
print(f'[ERROR] {ex}')
else:
src = img_path
dest = os.path.join(base_path, 'negative')
try:
shutil.move(src, dest)
except Exception as ex:
print(f'[ERROR] {ex}')
except:
pass
# + colab={"base_uri": "https://localhost:8080/", "height": 116, "referenced_widgets": ["8e191ee92aab46c7a556eb580a351440", "6e4b9b7520174a09ab47109c82552a9f", "77e0fe3356ef418bbb77b190d448fcce", "3bedfd8381d2491b821e9699a8b08c54", "b7c74b42eb8e40b5a39d753a85a7d782", "0e9a86b933154fe89adb3e5b9edb875f", "849ad7d16ecf4809923761c149c1ed20", "677e3d6a13ba47599cec0e8a0ffd0695", "109a6d27f10a42659788a3fd460e86f3", "5fdd37964d4f4eec8f96f9be0affc644", "97816c3818484daa8fe4feb8b2fe97c9", "b807552c3f624ca59c325b256b159e15", "f77f337c18ec46cf8a33c1385cd58661", "a87f0edb103142ab8d932a13d6a5b0a6", "30978964aefb4c25a274c8677afd5657", "ab9519c2b94c4a1ea9d3ac47269ab1ff"]} id="wPGgzyNLczt4" outputId="2eb13e78-ad5b-4fe5-d7cc-59ca39cb66ff"
extract_files(task='train')
extract_files(task='test')
# + [markdown] id="bSfip2Akjowp"
# ## Classes definition
# + id="mxZ9BTl2GOcY"
class TBDataset(torch.utils.data.Dataset):
def __init__(self, df, task='train', size=(300, 230), use_tfms=True, **kwargs):
super(TBDataset, self).__init__()
self.df = df
self.task = task
self.size = size
self.use_tfms = use_tfms
self.c = 2
self.train_transforms = transforms.Compose([
transforms.ToPILImage(),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
])
self.transform = transforms.Compose([
transforms.ToTensor(),
])
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
if self.task == 'train':
fn = self.df.loc[idx, 'filename']
img = cv2.imread(os.path.join(PATHS['images'], fn))
img = cv2.resize(img, self.size)
else:
fn = self.df.loc[idx, 'filepath']
img = cv2.imread(fn)
img = cv2.resize(img, self.size)
if self.use_tfms:
img = self.train_transforms(img)
else:
img = self.transform(img)
output = {
'image': img,
}
if self.task == 'train':
label = self.df.loc[idx, 'LABEL']
output.update({
'label': torch.tensor( np.eye(self.c)[label], dtype=torch.float )
})
return output
class TBModel(torch.nn.Module):
def __init__(self, arch_name, pretrained=True, layer='fc', fc_size=512, out_size=2):
super(TBModel, self).__init__()
self.arch = getattr(models, arch_name)(pretrained)
self.num_ftr = getattr(self.arch, layer).in_features
self.classifier = nn.Sequential(
torch.nn.Dropout(p=.4),
torch.nn.Linear(self.num_ftr, out_size)
)
setattr(self.arch, layer, self.classifier)
torch.nn.init.xavier_normal_(getattr(getattr(self.arch, layer)[1], 'weight'))
def forward(self, x):
x = self.arch(x)
x = torch.sigmoid(x)
return x
# + [markdown] id="x3OHMLQglYuK"
# ## Functions definition
# + id="ksvEESw8lNs7"
def training_fn(dataloader, model, opt, criterion, epoch):
avg_loss = 0
avg_acc = 0
avg_auc = 0
# pbar = tqdm(dataloader, desc=f'Epoch {epoch+1}')
model.to(device)
model.train()
for i, data in enumerate(dataloader):
x,y = data['image'].to(device), data['label'].to(device)
opt.zero_grad()
pred = model(x)
loss = criterion(pred, y)
        avg_loss += loss.detach()  # detach so the running loss does not keep the autograd graph alive
avg_acc += (y.argmax(1) == pred.argmax(1)).float().mean()
_, y = torch.max(y, 1)
_, pred = torch.max(pred, 1)
fpr, tpr, _ = sklearn.metrics.roc_curve(y_true=y.cpu().detach().numpy(), y_score = pred.cpu().detach().numpy(), pos_label=1)
avg_auc += sklearn.metrics.auc(fpr, tpr)
loss.backward()
opt.step()
# pbar.set_postfix(Loss=str(loss.cpu().detach().numpy()), OvrAllLoss=str(avg_loss.cpu().detach().numpy()/(i+1)),
# OvrAllAcc=str(avg_acc.cpu().detach().numpy()/(i+1)),
# AvgLogLoss=str(avg_logloss/(i+1)))
# pbar.update()
avg_loss_nump = avg_loss.cpu().detach().numpy()
avg_acc_nump = avg_acc.cpu().detach().numpy()
print('[Training] Epoch {} : Loss: {:.5f} - Acc : {:.5f} - AUC : {:.5f}'.format(epoch, avg_loss_nump/len(dataloader), avg_acc_nump/len(dataloader), avg_auc/len(dataloader)))
################## evaluation Function ####################
def evaluate(dataloader, model, criterion):
avg_loss = 0
avg_acc = 0
avg_auc = 0
model.eval()
with torch.no_grad():
for data in dataloader:
x = data['image'].to(device)
y = data['label'].to(device)
pred = model(x)
loss = criterion(pred, y)
avg_loss += loss
avg_acc += (y.argmax(1) == pred.argmax(1)).float().mean()
_, y = torch.max(y, 1)
_, pred = torch.max(pred, 1)
fpr, tpr, _ = sklearn.metrics.roc_curve(y_true=y.cpu().detach().numpy(), y_score = pred.cpu().detach().numpy(), pos_label=1)
avg_auc += sklearn.metrics.auc(fpr, tpr)
avg_loss /= len(dataloader)
avg_acc /= len(dataloader)
print('[Evaluation] Loss: {:.5f} - Acc : {:.5f} - AUC : {:.5f}'.format(avg_loss.cpu().detach().numpy(),
avg_acc.cpu().detach().numpy(),
avg_auc / len(dataloader)))
return avg_loss, avg_auc/len(dataloader), avg_acc
################## prediction Function ####################
def predict(df, size, bs=8, model_path = PATHS['arch'], device='cuda'):
test_ds = TBDataset(df, task='test', size=size, use_tfms=False)
testloader = torch.utils.data.DataLoader(test_ds, bs, shuffle=False)
predictions_labels = []
predictions_proba = []
out = None
for data in tqdm(testloader):
x = data['image'].to(device)
for i in range(n_folds):
model = TBModel(arch_name=arch, layer=layer, fc_size=fc_size)
model.load_state_dict(torch.load(os.path.join(PATHS['arch'], f'model_state_dict_{i}.bin')))
model.eval()
model.to(device)
if i == 0: out = model(x)
else: out += model(x)
out /= n_folds
out_labels = out.argmax(1).cpu().detach().numpy()
out_probas = out.cpu().detach().numpy()
predictions_labels += out_labels.tolist()
predictions_proba += out_probas.tolist()
        del model, x
return predictions_labels , predictions_proba
################## Run training over folds Function ####################
def run_fold(train:pd.DataFrame,
fold, bs=16,
eval_bs=8,
device='cuda',
lr=1e-4,
size=(300, 230),
arch='resnet34',
layer='fc',
epochs=15,
fc_size=512,
path=PATHS['arch']):
best_logloss = np.inf
best_auc = 0
best_acc = 0
fold_train = train[train.fold != fold].reset_index(drop=True)
fold_val = train[train.fold == fold].reset_index(drop=True)
train_ds = TBDataset(fold_train, size=size)
val_ds = TBDataset(fold_val, size=size, use_tfms=False)
trainloader = torch.utils.data.DataLoader(train_ds, batch_size=bs, shuffle=True)
validloader = torch.utils.data.DataLoader(val_ds, batch_size=eval_bs, shuffle=False)
model = TBModel(arch, layer=layer, fc_size=fc_size)
criterion = torch.nn.BCELoss()
opt = torch.optim.AdamW(model.parameters(), lr=lr)
loader = tqdm(range(epochs), desc=f'Training on fold {fold+1}')
for epoch in loader:
training_fn(trainloader, model, opt, criterion, epoch)
avg_logloss, avg_auc, avg_acc = evaluate(validloader, model, criterion)
        if avg_auc > best_auc:
            best_auc = avg_auc
            torch.save(model.state_dict(), os.path.join(path, f'model_state_dict_{fold}.bin'))
        elif avg_acc > best_acc:
            best_acc = avg_acc
            torch.save(model.state_dict(), os.path.join(path, f'model_state_dict_{fold}.bin'))
del model
return best_auc
################## training Function ####################
def load_models(arch='resnet34',
layer='fc',
fc_size=512,
device='cuda',
path=PATHS['arch']):
MODELS = [str(md) for md in os.listdir(path)]
for index, model in enumerate(MODELS):
print(model)
m = TBModel(arch_name=arch, layer=layer)
m.load_state_dict(torch.load(os.path.join(PATHS['arch'], model)))
m.to(device)
m.eval()
MODELS.insert(index, m)
return MODELS
# -
# + [markdown] id="AkF3GoGKmQll"
# ## Dataframes loading
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="lgGw1SKhmQz-" outputId="054f90e5-0f05-4f2e-c753-0bc62384c963"
train = pd.read_csv(PATHS['raw']+'Train.csv')
train.head()
# + [markdown] id="v1gWzbA1moWw"
# ## Train/val split
# + id="_y_xw6bMmVBn"
train_images_list = train['filename'].tolist() #convert images column into list
images_list = os.listdir(PATHS['images']+'train/')
test_images_list = [fn for fn in os.listdir(PATHS['test_images']+'test')]
sub = pd.DataFrame(test_images_list, columns=['image'])
sub['LABEL'] = 0
# Add file names
train['filepath'] = PATHS['images']+train['filename']
sub['filepath'] = PATHS['test_images']+'test/'+sub['image']
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="KTzfmLcenMkc" outputId="4698cca6-3d28-483e-f4ce-57067505b117"
train.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="dWvEnvwhonsM" outputId="52e8b5ba-ebb4-41ef-f26c-a0efa6782d62"
sub.head()
# + colab={"base_uri": "https://localhost:8080/"} id="sthWnOYzpRwH" outputId="4915eda7-1de1-4446-d7eb-b7a54c2b7106"
len(train), len(sub)
# + [markdown] id="jS6qMT316B_3"
# ### Create data folds for cross-validation
# + id="OPBe_8AMyRDD"
n_folds = 10 # number of folds used
train['fold'] = 0
#creating our folds using a special class of Scikit-learn
fold = sklearn.model_selection.StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed_val)
for i, (tr, vr) in enumerate(fold.split(train, train['LABEL'])):
train.loc[vr, 'fold'] = i
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="c-Ph_KiNZAaT" outputId="ad6bc79f-ce50-4008-d214-449c9bae698c"
train.head()
# + [markdown] id="8mbQzFehvOu-"
# ## Data viz
# + colab={"base_uri": "https://localhost:8080/", "height": 607} id="hWpOQ3C-pYQX" outputId="422b913c-6fe8-4453-faa0-631582a1ba7e"
#Training images
nrows = 3
rands = np.random.randint(len(train_images_list), size=nrows**2)
fig = plt.figure(figsize=(12,10))
for i in range(nrows**2):
img = cv2.imread(os.path.join(PATHS['images'], train.loc[rands[i], 'filename']))
ax = plt.subplot(nrows, nrows, i+1)
plt.imshow(img)
plt.title(train.loc[rands[i], 'LABEL'])
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 595} id="lmOsC3IwvUJ_" outputId="27053265-7158-49b3-d7ae-e345bb1bff51"
#Test images
nrows = 3
rands = np.random.randint(len(test_images_list), size=nrows**2)
fig = plt.figure(figsize=(12,10))
for i in range(nrows**2):
img = cv2.imread(os.path.join(PATHS['test_images']+'test', sub.loc[rands[i], 'image']))
ax = plt.subplot(nrows, nrows, i+1)
plt.imshow(img)
plt.show()
# + [markdown] id="aBuH_cVwy3gR"
# ## Training part
# -
epochs = 5 # training epochs
device = 'cuda' # in order to run model on GPU
size = (300, 300) # image size
arch = 'densenet161' # pre-trained model architecture used
layer = 'classifier'
fc_size = 2208 # Fully connected layer size
bs = 8 # training Batch size
eval_bs = 4 # Evaluation Batch size
lr = 1e-4 # learning rate
# + colab={"base_uri": "https://localhost:8080/", "height": 1000, "referenced_widgets": ["a9172f4e772d420f8410ee0865ad0cff", "d4376a040432460cb50bf3c1d99eb6df", "2e742a85eddf4eefa1aabad305459840", "ae51d9b75d7541a893388662928da56e", "2fa6a42da90a4352b0310a578b20074a", "ed314d081d454e2d9129929bba8402ec", "754a388c6d0b42a1bdc437e5eb4dc537", "2931ed40e8364926944acb654f84dc2b", "<KEY>", "24a09ed9413f486a9bacade34a97ebdf", "<KEY>", "f57a5a06c9fb44ab8a6e593adcaeeff2", "ae0ac289326a483b866eefe75c1c641e", "69d5eed10d034cabbe5b4ce0e58d97d2", "<KEY>", "3b33d2448e5c47c591124a97dfe7fba0", "c3c87b7edf954535841b23ef03dd4af6", "<KEY>", "6f7c1ecff9824a01b08ef36091cc8d66", "7116322066f04b6192598644390a095f", "e937a31a282d42ba8822a3e58da4457f", "1915039c54b946c18e8dc7ad1d125383", "<KEY>", "<KEY>"]} id="1TUxZaxCy_E4" outputId="7214ee88-52e8-42c0-edb1-7d3a776246ef"
# %%time
epochs = 5 # training epochs
device = 'cuda' # in order to run model on GPU
size = (300, 300) # image size
arch = 'densenet161' # pre-trained model architecture used
layer = 'classifier'
fc_size = 2208 # Fully connected layer size
bs = 8 # training Batch size
eval_bs = 4 # Evaluation Batch size
lr = 1e-4 # learning rate
avg_auc = 0 # accumulates the AUC of each fold to compute the average after training
best_fold = 0 # variable for getting the best fold number after training
fold_auc = -np.inf # initialize the best fold AUC to negative infinity
###################################### TRAINING PART ###################################
# run training loop over our 10 folds
for fold in range(n_folds):
print('*'*10)
print(f'Fold {fold+1}/{n_folds}')
print('*'*10)
_score = run_fold(fold = fold,
train=train,
device = device,
bs=bs,
eval_bs=eval_bs,
arch=arch,
layer=layer,
epochs=epochs,
fc_size=fc_size,
size=size,
lr=lr)
avg_auc += _score
    if _score > fold_auc:
        fold_auc = _score
        best_fold = fold
###################################### TRAINING PART ###################################
print("\n [INFO] Avg AUC: ", avg_auc/n_folds)
# + [markdown] id="dJLYs5xLzYkr"
# ## Loading trained models for predictions
#
#
# ###### - Load model (1)
# ###### - Make prediction (2)
# ###### - Create submission file (3)
#
# + colab={"base_uri": "https://localhost:8080/", "height": 246} id="dg2YVAYYzJr_" outputId="43b8a5ee-f0f4-4223-efe6-543c5672f963"
# 1
#MODELS = load_models(arch=arch, layer=layer, fc_size=fc_size)
# 2
predictions_labels, predictions_proba = predict(sub, size=size, bs=2)
# + [markdown] id="YMYJKz40zuYw"
# #### Making submission
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="HEZUpRNsz0X8" outputId="50b94945-1dfd-459e-d5d6-710faaa38ec4"
sample_sub_df.head()
# + id="8A-Ctjgkzwqx"
submission = pd.DataFrame()
submission['ID'] = [fn.split('.')[0] for fn in sub['image'].tolist()]
for i, label in enumerate(["0", "1"]):
submission[label] = 0
for i, label in enumerate(["0", "1"]):
submission.loc[:,label] = np.array(predictions_proba)[:,i]
submission['LABEL'] = predictions_labels
#show predicted values
submission.head()
# + [markdown] id="DQ8u9sHd0wD6"
# # CV/LB score
# + id="JL3WFLG_0vLJ"
y_true = submission['LABEL'].to_list()
y_true = np.array(y_true)
y_pred = submission['1'].to_list()
y_pred = np.array(y_pred)
fpr, tpr, _ = sklearn.metrics.roc_curve(y_true=y_true, y_score = y_pred, pos_label=1)
score = sklearn.metrics.auc(fpr, tpr)
print("[INFO] Your LB AUC score should look like {score}".format(score))
# + [markdown] id="yFpF_gqw0y-y"
# ## Save submission file
# + id="Tzy87VCo0doy"
# Format dataframe to match Zindi sample submission file
# and use experiment variables to keep relevant infos on it
subs = submission[['ID', '1']]
subs.columns = ['ID', 'LABEL']
subs.to_csv(f'torch_tb_{arch}_folds_{n_folds}_epochs_{epochs}_size_{size}_LR_{lr}.csv', index=False)
# + id="kpkVIcPqbeHq"
# + id="ZhL6tiOno8Sg"
| Notebooks/TB_v2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from copy import deepcopy
import itertools
import numpy as np
import torch
from torch.optim import Adam
import gym
import time
import spinup.algos.pytorch.td3.core as core
from spinup.utils.logx import EpochLogger
class ReplayBuffer:
"""
A simple FIFO experience replay buffer for TD3 agents.
"""
def __init__(self, obs_dim, act_dim, size):
self.obs_buf = np.zeros(core.combined_shape(size, obs_dim), dtype=np.float32)
self.obs2_buf = np.zeros(core.combined_shape(size, obs_dim), dtype=np.float32)
self.act_buf = np.zeros(core.combined_shape(size, act_dim), dtype=np.float32)
self.rew_buf = np.zeros(size, dtype=np.float32)
self.done_buf = np.zeros(size, dtype=np.float32)
self.ptr, self.size, self.max_size = 0, 0, size
def store(self, obs, act, rew, next_obs, done):
self.obs_buf[self.ptr] = obs
self.obs2_buf[self.ptr] = next_obs
self.act_buf[self.ptr] = act
self.rew_buf[self.ptr] = rew
self.done_buf[self.ptr] = done
self.ptr = (self.ptr+1) % self.max_size
self.size = min(self.size+1, self.max_size)
def sample_batch(self, batch_size=32):
idxs = np.random.randint(0, self.size, size=batch_size)
batch = dict(obs=self.obs_buf[idxs],
obs2=self.obs2_buf[idxs],
act=self.act_buf[idxs],
rew=self.rew_buf[idxs],
done=self.done_buf[idxs])
return {k: torch.as_tensor(v, dtype=torch.float32) for k,v in batch.items()}
# -
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence, pad_packed_sequence
# +
class MLPCritic(nn.Module):
def __init__(self, obs_dim, act_dim, hidden_sizes=(128, 128)):
super(MLPCritic, self).__init__()
self.obs_dim = obs_dim
self.act_dim = act_dim
self.layers = nn.ModuleList()
layer_size = [obs_dim+act_dim]+list(hidden_sizes) + [1]
for h in range(len(layer_size)-2):
self.layers += [nn.Linear(layer_size[h], layer_size[h+1]), nn.ReLU()]
self.layers += [nn.Linear(layer_size[-2], layer_size[-1]), nn.Identity()]
def forward(self, obs, act):
cat_input = torch.cat([obs, act], dim=-1)
x = cat_input
for layer in self.layers:
x = layer(x)
return torch.squeeze(x, -1) # Critical to ensure q has right shape.
class MLPActor(nn.Module):
def __init__(self, obs_dim, act_dim, act_limit, hidden_sizes=(128, 128)):
super(MLPActor, self).__init__()
self.obs_dim = obs_dim
self.act_dim = act_dim
self.act_limit = act_limit
self.layers = nn.ModuleList()
layer_size = [obs_dim]+list(hidden_sizes) + [act_dim]
for h in range(len(layer_size)-2):
self.layers += [nn.Linear(layer_size[h], layer_size[h+1]), nn.ReLU()]
self.layers += [nn.Linear(layer_size[-2], layer_size[-1]), nn.Tanh()]
def forward(self, obs):
x = obs
for layer in self.layers:
x = layer(x)
return self.act_limit * x
class MLPActorCritic(nn.Module):
def __init__(self, obs_dim, act_dim, act_limit, hidden_sizes=(128, 128)):
super(MLPActorCritic, self).__init__()
self.q1 = MLPCritic(obs_dim, act_dim)
self.q2 = MLPCritic(obs_dim, act_dim)
self.pi = MLPActor(obs_dim, act_dim, act_limit=1)
def act(self, obs):
with torch.no_grad():
return self.pi(obs).numpy()
# -
obs_dim =10
act_dim = 5
hidden_sizes=(128, 128)
ac = MLPActorCritic(obs_dim, act_dim, act_limit=1)
print(ac)
mlp_c = MLPCritic(obs_dim, act_dim)
mlp_a = MLPActor(obs_dim, act_dim, act_limit=1)
print(mlp_c)
print(mlp_a)
print(len(list(mlp_c.parameters())))
print(len(list(mlp_a.parameters())))
def mlp(sizes, activation, output_activation=nn.Identity):
layers = []
for j in range(len(sizes)-1):
act = activation if j < len(sizes)-2 else output_activation
layers += [nn.Linear(sizes[j], sizes[j+1]), act()]
return nn.Sequential(*layers)
obs_dim =10
act_dim = 5
hidden_sizes=(128, 128)
mlp_c = mlp([obs_dim+act_dim]+list(hidden_sizes) + [1], nn.ReLU, nn.Identity)
mlp_a = mlp([obs_dim]+list(hidden_sizes) + [act_dim], nn.ReLU, nn.Tanh)
print(mlp_c)
print(mlp_a)
print(len(list(mlp_c.parameters())))
print(len(list(mlp_a.parameters())))
# +
actor_critic=core.MLPActorCritic
env = gym.make('Ant-v2')
args = {'hid': 256, 'l': 2}  # assumed values; `args` is otherwise only defined in the experiment cell further below
ac_kwargs = dict(hidden_sizes=[args['hid']]*args['l'])
obs_dim = env.observation_space.shape
act_dim = env.action_space.shape[0]
# Action limit for clamping: critically, assumes all dimensions share the same bound!
act_limit = env.action_space.high[0]
# Create actor-critic module and target networks
ac = actor_critic(env.observation_space, env.action_space, **ac_kwargs)
ac_targ = deepcopy(ac)
# -
print(len(list(ac.q1.parameters())))
print(len(list(ac.pi.parameters())))
ac_targ = deepcopy(ac)
len(list(ac_targ.parameters()))
cuda = torch.device('cuda')
# +
def td3(env_fn, actor_critic=core.MLPActorCritic, ac_kwargs=dict(), seed=0,
steps_per_epoch=4000, epochs=100, replay_size=int(1e6), gamma=0.99,
polyak=0.995, pi_lr=1e-3, q_lr=1e-3, batch_size=100, start_steps=10000,
update_after=1000, update_every=50, act_noise=0.1, target_noise=0.2,
noise_clip=0.5, policy_delay=2, num_test_episodes=10, max_ep_len=1000,
logger_kwargs=dict(), save_freq=1):
"""
Twin Delayed Deep Deterministic Policy Gradient (TD3)
Args:
env_fn : A function which creates a copy of the environment.
The environment must satisfy the OpenAI Gym API.
actor_critic: The constructor method for a PyTorch Module with an ``act``
method, a ``pi`` module, a ``q1`` module, and a ``q2`` module.
The ``act`` method and ``pi`` module should accept batches of
observations as inputs, and ``q1`` and ``q2`` should accept a batch
of observations and a batch of actions as inputs. When called,
these should return:
=========== ================ ======================================
Call Output Shape Description
=========== ================ ======================================
``act`` (batch, act_dim) | Numpy array of actions for each
| observation.
``pi`` (batch, act_dim) | Tensor containing actions from policy
| given observations.
``q1`` (batch,) | Tensor containing one current estimate
| of Q* for the provided observations
| and actions. (Critical: make sure to
| flatten this!)
``q2`` (batch,) | Tensor containing the other current
| estimate of Q* for the provided observations
| and actions. (Critical: make sure to
| flatten this!)
=========== ================ ======================================
ac_kwargs (dict): Any kwargs appropriate for the ActorCritic object
you provided to TD3.
seed (int): Seed for random number generators.
steps_per_epoch (int): Number of steps of interaction (state-action pairs)
for the agent and the environment in each epoch.
epochs (int): Number of epochs to run and train agent.
replay_size (int): Maximum length of replay buffer.
gamma (float): Discount factor. (Always between 0 and 1.)
polyak (float): Interpolation factor in polyak averaging for target
networks. Target networks are updated towards main networks
according to:
.. math:: \\theta_{\\text{targ}} \\leftarrow
\\rho \\theta_{\\text{targ}} + (1-\\rho) \\theta
where :math:`\\rho` is polyak. (Always between 0 and 1, usually
close to 1.)
pi_lr (float): Learning rate for policy.
q_lr (float): Learning rate for Q-networks.
batch_size (int): Minibatch size for SGD.
start_steps (int): Number of steps for uniform-random action selection,
before running real policy. Helps exploration.
update_after (int): Number of env interactions to collect before
starting to do gradient descent updates. Ensures replay buffer
is full enough for useful updates.
update_every (int): Number of env interactions that should elapse
between gradient descent updates. Note: Regardless of how long
you wait between updates, the ratio of env steps to gradient steps
is locked to 1.
act_noise (float): Stddev for Gaussian exploration noise added to
policy at training time. (At test time, no noise is added.)
target_noise (float): Stddev for smoothing noise added to target
policy.
noise_clip (float): Limit for absolute value of target policy
smoothing noise.
policy_delay (int): Policy will only be updated once every
policy_delay times for each update of the Q-networks.
num_test_episodes (int): Number of episodes to test the deterministic
policy at the end of each epoch.
max_ep_len (int): Maximum length of trajectory / episode / rollout.
logger_kwargs (dict): Keyword args for EpochLogger.
save_freq (int): How often (in terms of gap between epochs) to save
the current policy and value function.
"""
logger = EpochLogger(**logger_kwargs)
logger.save_config(locals())
torch.manual_seed(seed)
np.random.seed(seed)
env, test_env = env_fn(), env_fn()
obs_dim = env.observation_space.shape[0]
act_dim = env.action_space.shape[0]
# Action limit for clamping: critically, assumes all dimensions share the same bound!
act_limit = env.action_space.high[0]
# Create actor-critic module and target networks
mlp_c1 = MLPCritic(obs_dim, act_dim)
mlp_c2 = MLPCritic(obs_dim, act_dim)
mlp_a = MLPActor(obs_dim, act_dim, act_limit)
mlp_c1_targ = deepcopy(mlp_c1)
mlp_c2_targ = deepcopy(mlp_c2)
mlp_a_targ = deepcopy(mlp_a)
mlp_c1.cuda()
mlp_c2.cuda()
mlp_a.cuda()
mlp_c1_targ.cuda()
mlp_c2_targ.cuda()
mlp_a_targ.cuda()
# Freeze target networks with respect to optimizers (only update via polyak averaging)
for p in mlp_c1_targ.parameters():
p.requires_grad = False
for p in mlp_c2_targ.parameters():
p.requires_grad = False
for p in mlp_a_targ.parameters():
p.requires_grad = False
# List of parameters for both Q-networks (save this for convenience)
q_params = itertools.chain(mlp_c1.parameters(), mlp_c2.parameters())
# Experience buffer
replay_buffer = ReplayBuffer(obs_dim=obs_dim, act_dim=act_dim, size=replay_size)
# # Count variables (protip: try to get a feel for how different size networks behave!)
# var_counts = tuple(core.count_vars(module) for module in [ac.pi, ac.q1, ac.q2])
# logger.log('\nNumber of parameters: \t pi: %d, \t q1: %d, \t q2: %d\n'%var_counts)
# Set up function for computing TD3 Q-losses
def compute_loss_q(data):
o, a, r, o2, d = data['obs'].to(device=cuda), data['act'].to(device=cuda), data['rew'].to(device=cuda), data['obs2'].to(device=cuda), data['done'].to(device=cuda)
q1 = mlp_c1(o, a)
q2 = mlp_c2(o, a)
# Bellman backup for Q functions
with torch.no_grad():
pi_targ = mlp_a_targ(o2)
a2 = pi_targ
# Target Q-values
q1_pi_targ = mlp_c1_targ(o2, a2)
q2_pi_targ = mlp_c2_targ(o2, a2)
q_pi_targ = torch.min(q1_pi_targ, q2_pi_targ)
backup = r + gamma * (1 - d) * q_pi_targ
# MSE loss against Bellman backup
loss_q1 = ((q1 - backup)**2).mean()
loss_q2 = ((q2 - backup)**2).mean()
loss_q = loss_q1 + loss_q2
# Useful info for logging
loss_info = dict(Q1Vals=q1.detach().cpu().numpy(),
Q2Vals=q2.detach().cpu().numpy())
return loss_q, loss_info
# Set up function for computing TD3 pi loss
def compute_loss_pi(data):
o = data['obs'].to(device=cuda)
q1_pi = mlp_c1(o, mlp_a(o))
return -q1_pi.mean()
# Set up optimizers for policy and q-function
pi_optimizer = Adam(mlp_a.parameters(), lr=pi_lr)
q_optimizer = Adam(q_params, lr=q_lr)
# # Set up model saving
# logger.setup_pytorch_saver(ac)
def update(data, timer):
# First run one gradient descent step for Q1 and Q2
q_optimizer.zero_grad()
loss_q, loss_info = compute_loss_q(data)
loss_q.backward()
q_optimizer.step()
# Record things
logger.store(LossQ=loss_q.item(), **loss_info)
# Freeze Q-networks so you don't waste computational effort
# computing gradients for them during the policy learning step.
for p in q_params:
p.requires_grad = False
# Next run one gradient descent step for pi.
pi_optimizer.zero_grad()
loss_pi = compute_loss_pi(data)
loss_pi.backward()
pi_optimizer.step()
# Unfreeze Q-networks so you can optimize it at next DDPG step.
for p in q_params:
p.requires_grad = True
# Record things
logger.store(LossPi=loss_pi.item())
# Finally, update target networks by polyak averaging.
with torch.no_grad():
for p, p_targ in zip(mlp_a.parameters(), mlp_a_targ.parameters()):
p_targ.data.mul_(polyak)
p_targ.data.add_((1 - polyak) * p.data)
for p, p_targ in zip(mlp_c1.parameters(), mlp_c1_targ.parameters()):
p_targ.data.mul_(polyak)
p_targ.data.add_((1 - polyak) * p.data)
for p, p_targ in zip(mlp_c2.parameters(), mlp_c2_targ.parameters()):
p_targ.data.mul_(polyak)
p_targ.data.add_((1 - polyak) * p.data)
def get_action(o, noise_scale):
o = torch.tensor(o).view(1, -1).float().to(device=cuda)
with torch.no_grad():
a = mlp_a(o)
a = a.cpu().numpy()
a += noise_scale * np.random.randn(act_dim)
return np.clip(a, -act_limit, act_limit)
def test_agent():
for j in range(num_test_episodes):
o, d, ep_ret, ep_len = test_env.reset(), False, 0, 0
while not(d or (ep_len == max_ep_len)):
# Take deterministic actions at test time (noise_scale=0)
o, r, d, _ = test_env.step(get_action(o, 0))
ep_ret += r
ep_len += 1
logger.store(TestEpRet=ep_ret, TestEpLen=ep_len)
# Prepare for interaction with environment
total_steps = steps_per_epoch * epochs
start_time = time.time()
o, ep_ret, ep_len = env.reset(), 0, 0
# Main loop: collect experience in env and update/log each epoch
for t in range(total_steps):
# Until start_steps have elapsed, randomly sample actions
# from a uniform distribution for better exploration. Afterwards,
# use the learned policy (with some noise, via act_noise).
if t > start_steps:
a = get_action(o, act_noise)
else:
a = env.action_space.sample()
# Step the env
o2, r, d, _ = env.step(a)
ep_ret += r
ep_len += 1
# Ignore the "done" signal if it comes from hitting the time
# horizon (that is, when it's an artificial terminal signal
# that isn't based on the agent's state)
d = False if ep_len==max_ep_len else d
# Store experience to replay buffer
replay_buffer.store(o, a, r, o2, d)
# Super critical, easy to overlook step: make sure to update
# most recent observation!
o = o2
# End of trajectory handling
if d or (ep_len == max_ep_len):
logger.store(EpRet=ep_ret, EpLen=ep_len)
o, ep_ret, ep_len = env.reset(), 0, 0
# Update handling
if t >= update_after and t % update_every == 0:
for j in range(update_every):
batch = replay_buffer.sample_batch(batch_size)
update(data=batch, timer=j)
# End of epoch handling
if (t+1) % steps_per_epoch == 0:
epoch = (t+1) // steps_per_epoch
# # Save model
# if (epoch % save_freq == 0) or (epoch == epochs):
# logger.save_state({'env': env}, None)
# Test the performance of the deterministic version of the agent.
test_agent()
# Log info about epoch
logger.log_tabular('Epoch', epoch)
logger.log_tabular('EpRet', with_min_and_max=True)
logger.log_tabular('TestEpRet', with_min_and_max=True)
logger.log_tabular('EpLen', average_only=True)
logger.log_tabular('TestEpLen', average_only=True)
logger.log_tabular('TotalEnvInteracts', t)
logger.log_tabular('Q1Vals', with_min_and_max=True)
logger.log_tabular('Q2Vals', with_min_and_max=True)
logger.log_tabular('LossPi', average_only=True)
logger.log_tabular('LossQ', average_only=True)
logger.log_tabular('Time', time.time()-start_time)
logger.dump_tabular()
# +
args = {'env': 'HalfCheetah-v2', 'hid': 256, 'l': 2, 'gamma': 0.99, 'seed': 0, 'epochs': 50, 'exp_name': 'td3_HalfCheetah_NoTargSmooth_NoDelayUpdate'}
from spinup.utils.run_utils import setup_logger_kwargs
logger_kwargs = setup_logger_kwargs(args['exp_name'], args['seed'])
td3(lambda : gym.make(args['env']), actor_critic=core.MLPActorCritic,
ac_kwargs=dict(hidden_sizes=[args['hid']]*args['l']),
gamma=args['gamma'], seed=args['seed'], epochs=args['epochs'],
logger_kwargs=logger_kwargs)
# -
if __name__ == '__main__':
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--env', type=str, default='HalfCheetah-v2')
parser.add_argument('--hid', type=int, default=256)
parser.add_argument('--l', type=int, default=2)
parser.add_argument('--gamma', type=float, default=0.99)
parser.add_argument('--seed', '-s', type=int, default=0)
parser.add_argument('--epochs', type=int, default=50)
parser.add_argument('--exp_name', type=str, default='td3')
args = parser.parse_args()
from spinup.utils.run_utils import setup_logger_kwargs
logger_kwargs = setup_logger_kwargs(args.exp_name, args.seed)
td3(lambda : gym.make(args.env), actor_critic=core.MLPActorCritic,
ac_kwargs=dict(hidden_sizes=[args.hid]*args.l),
gamma=args.gamma, seed=args.seed, epochs=args.epochs,
logger_kwargs=logger_kwargs)
def td3(env_fn, actor_critic=core.MLPActorCritic, ac_kwargs=dict(), seed=0,
steps_per_epoch=4000, epochs=100, replay_size=int(1e6), gamma=0.99,
polyak=0.995, pi_lr=1e-3, q_lr=1e-3, batch_size=100, start_steps=10000,
update_after=1000, update_every=50, act_noise=0.1, target_noise=0.2,
noise_clip=0.5, policy_delay=2, num_test_episodes=10, max_ep_len=1000,
logger_kwargs=dict(), save_freq=1):
"""
Twin Delayed Deep Deterministic Policy Gradient (TD3)
Args:
env_fn : A function which creates a copy of the environment.
The environment must satisfy the OpenAI Gym API.
actor_critic: The constructor method for a PyTorch Module with an ``act``
method, a ``pi`` module, a ``q1`` module, and a ``q2`` module.
The ``act`` method and ``pi`` module should accept batches of
observations as inputs, and ``q1`` and ``q2`` should accept a batch
of observations and a batch of actions as inputs. When called,
these should return:
=========== ================ ======================================
Call Output Shape Description
=========== ================ ======================================
``act`` (batch, act_dim) | Numpy array of actions for each
| observation.
``pi`` (batch, act_dim) | Tensor containing actions from policy
| given observations.
``q1`` (batch,) | Tensor containing one current estimate
| of Q* for the provided observations
| and actions. (Critical: make sure to
| flatten this!)
``q2`` (batch,) | Tensor containing the other current
| estimate of Q* for the provided observations
| and actions. (Critical: make sure to
| flatten this!)
=========== ================ ======================================
ac_kwargs (dict): Any kwargs appropriate for the ActorCritic object
you provided to TD3.
seed (int): Seed for random number generators.
steps_per_epoch (int): Number of steps of interaction (state-action pairs)
for the agent and the environment in each epoch.
epochs (int): Number of epochs to run and train agent.
replay_size (int): Maximum length of replay buffer.
gamma (float): Discount factor. (Always between 0 and 1.)
polyak (float): Interpolation factor in polyak averaging for target
networks. Target networks are updated towards main networks
according to:
.. math:: \\theta_{\\text{targ}} \\leftarrow
\\rho \\theta_{\\text{targ}} + (1-\\rho) \\theta
where :math:`\\rho` is polyak. (Always between 0 and 1, usually
close to 1.)
pi_lr (float): Learning rate for policy.
q_lr (float): Learning rate for Q-networks.
batch_size (int): Minibatch size for SGD.
start_steps (int): Number of steps for uniform-random action selection,
before running real policy. Helps exploration.
update_after (int): Number of env interactions to collect before
starting to do gradient descent updates. Ensures replay buffer
is full enough for useful updates.
update_every (int): Number of env interactions that should elapse
between gradient descent updates. Note: Regardless of how long
you wait between updates, the ratio of env steps to gradient steps
is locked to 1.
act_noise (float): Stddev for Gaussian exploration noise added to
policy at training time. (At test time, no noise is added.)
target_noise (float): Stddev for smoothing noise added to target
policy.
noise_clip (float): Limit for absolute value of target policy
smoothing noise.
policy_delay (int): Policy will only be updated once every
policy_delay times for each update of the Q-networks.
num_test_episodes (int): Number of episodes to test the deterministic
policy at the end of each epoch.
max_ep_len (int): Maximum length of trajectory / episode / rollout.
logger_kwargs (dict): Keyword args for EpochLogger.
save_freq (int): How often (in terms of gap between epochs) to save
the current policy and value function.
"""
logger = EpochLogger(**logger_kwargs)
logger.save_config(locals())
torch.manual_seed(seed)
np.random.seed(seed)
env, test_env = env_fn(), env_fn()
obs_dim = env.observation_space.shape[0]
act_dim = env.action_space.shape[0]
# Action limit for clamping: critically, assumes all dimensions share the same bound!
act_limit = env.action_space.high[0]
# Create actor-critic module and target networks
# ac = actor_critic(env.observation_space, env.action_space, **ac_kwargs)
ac = MLPActorCritic(obs_dim, act_dim, act_limit)
# import pdb
# pdb.set_trace()
ac_targ = deepcopy(ac)
# Freeze target networks with respect to optimizers (only update via polyak averaging)
for p in ac_targ.parameters():
p.requires_grad = False
# List of parameters for both Q-networks (save this for convenience)
q_params = itertools.chain(ac.q1.parameters(), ac.q2.parameters())
# Experience buffer
replay_buffer = ReplayBuffer(obs_dim=obs_dim, act_dim=act_dim, size=replay_size)
# Count variables (protip: try to get a feel for how different size networks behave!)
var_counts = tuple(core.count_vars(module) for module in [ac.pi, ac.q1, ac.q2])
logger.log('\nNumber of parameters: \t pi: %d, \t q1: %d, \t q2: %d\n'%var_counts)
# Set up function for computing TD3 Q-losses
def compute_loss_q(data):
o, a, r, o2, d = data['obs'], data['act'], data['rew'], data['obs2'], data['done']
q1 = ac.q1(o,a)
q2 = ac.q2(o,a)
# Bellman backup for Q functions
with torch.no_grad():
pi_targ = ac_targ.pi(o2)
# Target policy smoothing
epsilon = torch.randn_like(pi_targ) * target_noise
epsilon = torch.clamp(epsilon, -noise_clip, noise_clip)
a2 = pi_targ + epsilon
a2 = torch.clamp(a2, -act_limit, act_limit)
# Target Q-values
q1_pi_targ = ac_targ.q1(o2, a2)
q2_pi_targ = ac_targ.q2(o2, a2)
q_pi_targ = torch.min(q1_pi_targ, q2_pi_targ)
backup = r + gamma * (1 - d) * q_pi_targ
# import pdb
# pdb.set_trace()
# MSE loss against Bellman backup
loss_q1 = ((q1 - backup)**2).mean()
loss_q2 = ((q2 - backup)**2).mean()
loss_q = loss_q1 + loss_q2
# Useful info for logging
loss_info = dict(Q1Vals=q1.detach().numpy(),
Q2Vals=q2.detach().numpy())
return loss_q, loss_info
# Set up function for computing TD3 pi loss
def compute_loss_pi(data):
o = data['obs']
q1_pi = ac.q1(o, ac.pi(o))
return -q1_pi.mean()
# Set up optimizers for policy and q-function
pi_optimizer = Adam(ac.pi.parameters(), lr=pi_lr)
q_optimizer = Adam(q_params, lr=q_lr)
# Set up model saving
logger.setup_pytorch_saver(ac)
def update(data, timer):
# First run one gradient descent step for Q1 and Q2
q_optimizer.zero_grad()
loss_q, loss_info = compute_loss_q(data)
loss_q.backward()
q_optimizer.step()
# Record things
logger.store(LossQ=loss_q.item(), **loss_info)
# Possibly update pi and target networks
if timer % policy_delay == 0:
# Freeze Q-networks so you don't waste computational effort
# computing gradients for them during the policy learning step.
for p in q_params:
p.requires_grad = False
# Next run one gradient descent step for pi.
pi_optimizer.zero_grad()
loss_pi = compute_loss_pi(data)
loss_pi.backward()
pi_optimizer.step()
# Unfreeze Q-networks so you can optimize it at next DDPG step.
for p in q_params:
p.requires_grad = True
# Record things
logger.store(LossPi=loss_pi.item())
# Finally, update target networks by polyak averaging.
with torch.no_grad():
for p, p_targ in zip(ac.parameters(), ac_targ.parameters()):
# NB: We use an in-place operations "mul_", "add_" to update target
# params, as opposed to "mul" and "add", which would make new tensors.
p_targ.data.mul_(polyak)
p_targ.data.add_((1 - polyak) * p.data)
def get_action(o, noise_scale):
a = ac.act(torch.as_tensor(o, dtype=torch.float32))
a += noise_scale * np.random.randn(act_dim)
return np.clip(a, -act_limit, act_limit)
def test_agent():
for j in range(num_test_episodes):
o, d, ep_ret, ep_len = test_env.reset(), False, 0, 0
while not(d or (ep_len == max_ep_len)):
# Take deterministic actions at test time (noise_scale=0)
o, r, d, _ = test_env.step(get_action(o, 0))
ep_ret += r
ep_len += 1
logger.store(TestEpRet=ep_ret, TestEpLen=ep_len)
# Prepare for interaction with environment
total_steps = steps_per_epoch * epochs
start_time = time.time()
o, ep_ret, ep_len = env.reset(), 0, 0
# Main loop: collect experience in env and update/log each epoch
for t in range(total_steps):
# Until start_steps have elapsed, randomly sample actions
# from a uniform distribution for better exploration. Afterwards,
# use the learned policy (with some noise, via act_noise).
if t > start_steps:
a = get_action(o, act_noise)
else:
a = env.action_space.sample()
# Step the env
o2, r, d, _ = env.step(a)
ep_ret += r
ep_len += 1
# Ignore the "done" signal if it comes from hitting the time
# horizon (that is, when it's an artificial terminal signal
# that isn't based on the agent's state)
d = False if ep_len==max_ep_len else d
# Store experience to replay buffer
replay_buffer.store(o, a, r, o2, d)
# Super critical, easy to overlook step: make sure to update
# most recent observation!
o = o2
# End of trajectory handling
if d or (ep_len == max_ep_len):
logger.store(EpRet=ep_ret, EpLen=ep_len)
o, ep_ret, ep_len = env.reset(), 0, 0
# Update handling
if t >= update_after and t % update_every == 0:
for j in range(update_every):
batch = replay_buffer.sample_batch(batch_size)
update(data=batch, timer=j)
# End of epoch handling
if (t+1) % steps_per_epoch == 0:
epoch = (t+1) // steps_per_epoch
# Save model
if (epoch % save_freq == 0) or (epoch == epochs):
logger.save_state({'env': env}, None)
# Test the performance of the deterministic version of the agent.
test_agent()
# Log info about epoch
logger.log_tabular('Epoch', epoch)
logger.log_tabular('EpRet', with_min_and_max=True)
logger.log_tabular('TestEpRet', with_min_and_max=True)
logger.log_tabular('EpLen', average_only=True)
logger.log_tabular('TestEpLen', average_only=True)
logger.log_tabular('TotalEnvInteracts', t)
logger.log_tabular('Q1Vals', with_min_and_max=True)
logger.log_tabular('Q2Vals', with_min_and_max=True)
logger.log_tabular('LossPi', average_only=True)
logger.log_tabular('LossQ', average_only=True)
logger.log_tabular('Time', time.time()-start_time)
logger.dump_tabular()
# +
args = {'env': 'HalfCheetah-v2', 'hid': 128, 'l': 2, 'gamma': 0.99, 'seed': 0, 'epochs': 50, 'exp_name': 'td3_HalfCheetah_NoTargSmooth_NoDelayUpdate'}
from spinup.utils.run_utils import setup_logger_kwargs
logger_kwargs = setup_logger_kwargs(args['exp_name'], args['seed'])
td3(lambda : gym.make(args['env']), actor_critic=core.MLPActorCritic,
ac_kwargs=dict(hidden_sizes=[args['hid']]*args['l']),
gamma=args['gamma'], seed=args['seed'], epochs=args['epochs'],
logger_kwargs=logger_kwargs)
# -
| spinup/algos/pytorch/lstm_ddpg/.ipynb_checkpoints/Untitled1-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#
import pandas as pd
import numpy as np
from numpy.random import randn
np.random.seed(10)
df = pd.DataFrame(randn(5,4),index=list('ABCDE'),columns=['x1' , 'x2' , 'x3' , 'x4' ])
df
df.columns
df.values
# Now, let's discuss how we can do the following operations:
# - Selecting columns
# - Creating new columns
# - Removing columns
# - Selecting rows on the basis of index
# - Selecting rows on the basis of some condition (a short example is sketched at the end of this notebook)
# ### Selecting Columns
# Select one column :
df['x1']
# Selecting multiple columns in any order:
df[['x3' , 'x1']]
# +
# Another syntax (this is not recommended, why?)
# -
df.x1
# ### Creating new columns:
df['x5'] = df['x1'] + df['x2']
df
# ### Dropping columns & rows:
# +
# Drop / remove columns :
# axis =1 for columns
df.drop('x5' , axis = 1)
# +
# Drop / remove rows :
# axis=0 for rows
df.drop('A' , axis = 0)
# -
# Use `inplace = True` to replace the original dataframe
df.drop('A' , axis = 0 , inplace = True)
df
# ### Using `iloc` and `loc` for indexing
#
# The alternate way of indexing is using:
# - `iloc` : which is used for implicit indexing. The implicit index always starts from '0' and always exists.
# - `loc` : which is used for explicit indexing. The explicit index only exists if specified explicitly like 'A', 'B', etc. in our example above.
# #### Implicit Indexing:
# +
# Use iloc for indexing rows
# Implicit Indexing (single row):
df.iloc[0]
# -
# Implicit Indexing (on multiple rows):
df.iloc[[0,2]]
# #### Explicit Indexing:
# Explicit Indexing (single row):
df.loc['B' ]
# Explicit Indexing (on multiple rows)
df.loc[['B' , 'C']]
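# A small sketch (not in the original notebook) of a difference worth remembering:
# `loc` slices include the end label, while `iloc` slices exclude the end position.
df.loc['B':'D']   # rows B, C and D
df.iloc[0:2]      # only the first two rows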
# ### Indexing both rows and columns using `loc` and `iloc`
# Implicit Indexing (on multiple rows):
df.iloc[[0,2] , [0,2]]
# Explicit Indexing (on multiple rows)
df.loc[['B' , 'C'] , ['x1' , 'x2']]
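# ### Selecting rows on the basis of some condition
# As promised in the operations list above, here is a minimal sketch (not in the original notebook) of conditional (boolean) row selection on the example `df`:
# Rows where column x1 is positive:
df[df['x1'] > 0]
# Rows satisfying two conditions (combine masks with & / | and parentheses, not `and` / `or`):
df[(df['x1'] > 0) & (df['x2'] < 0)]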
| Section 4/02- DataFrames.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # PDFs and Spreadsheets Puzzle Exercise
#
# Let's test your skills using the files provided for this puzzle exercise.
#
# You will need to work with two files for this exercise and solve the following tasks:
#
# * Task One: Use Python to extract the Google Drive link from the .csv file. (Hint: Its along the diagonal from top left to bottom right).
# * Task Two: Download the PDF from the Google Drive link (we already downloaded it for you just in case you can't download from Google Drive) and find the phone number that is in the document. Note: There are different ways of formatting a phone number!
# ## Task One: Grab the Google Drive Link from .csv File
import csv
f = open('find_the_link.csv')
csv_reader = csv.reader(f)
data_lines = list(csv_reader)
data_lines
data_lines[0][0] + data_lines[1][1] + data_lines[2][2] + data_lines[3][3]
len(data_lines)
link = ''  # avoid shadowing the built-in `str`
for row_num, data in enumerate(data_lines):
    #print('row_num:', row_num)
    #print('data:', data)
    link = link + data[row_num]
link
# ## Task Two: Download the PDF from the Google Drive link and find the phone number that is in the document.
# The phone number format we are looking for, e.g. 223.223.2334
r'\d{3}.\d{3}.\d{4}'
from google_drive_downloader import GoogleDriveDownloader as gdd
gdd.download_file_from_google_drive(file_id='1G6SEgg018UB4_4xsAJJ5TdzrhmXipr4Q',dest_path='./testpdf.pdf')
import PyPDF2
f = open('testpdf.pdf','rb')
pdf_reader = PyPDF2.PdfFileReader(f)
pdf_reader.numPages
page = pdf_reader.getPage(0)
Info = page.extractText()
Message = pdf_reader.getPage(0).extractText()
# +
#print(Message)
# -
import re
pattern = r'\d{3}.\d{3}.\d{4}'
# +
text = ''
for i in range(pdf_reader.numPages):
Message = pdf_reader.getPage(i).extractText()
text = text + ' ' + Message
# +
#print(text)
# -
re.findall(pattern, text)
pattern = r'\d{3}'
list(re.finditer(pattern, text))
re.findall(pattern, text)
re.search(pattern, text)
f.close()
| docs/Notebooks/23.pdf-and-csv-puzzle-exercise.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19"
# # Accuracy=99.75% using 25 Million Training Images!!
# It's amazing that convolutional neural networks can classify handwritten digits so accurately. In this kernel, we witness an ensemble of 15 CNNs classify Kaggle's MNIST digits after training on Kaggle's 42,000 images in "train.csv" plus 25 million more images created by rotating, scaling, and shifting Kaggle's images. Learning from 25,042,000 images, this ensemble of CNNs achieves 99.75% classification accuracy. This kernel uses ideas from the best published models found on the internet. Advanced techniques include data augmentation, nonlinear convolution layers, learnable pooling layers, ReLU activation, ensembling, bagging, decaying learning rates, dropout, batch normalization, and adam optimization.
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" _kg_hide-output=true _kg_hide-input=true
# LOAD LIBRARIES
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from keras.utils.np_utils import to_categorical
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D, BatchNormalization
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import LearningRateScheduler
# + [markdown] _uuid="cd31c62c12088bfa6f2b26dcecc714182627c767"
# # Load Kaggle's 42,000 training images
# + _uuid="d71b3fa2b10620dc8870352fc18d4548f824a88a"
# LOAD THE DATA
train = pd.read_csv("../input/train.csv")
test = pd.read_csv("../input/test.csv")
# + _kg_hide-input=true _uuid="b3c56055d1ba56d28d982f9647c33439c46753ff"
# PREPARE DATA FOR NEURAL NETWORK
Y_train = train["label"]
X_train = train.drop(labels = ["label"],axis = 1)
X_train = X_train / 255.0
X_test = test / 255.0
X_train = X_train.values.reshape(-1,28,28,1)
X_test = X_test.values.reshape(-1,28,28,1)
Y_train = to_categorical(Y_train, num_classes = 10)
# + _kg_hide-input=true _uuid="b95ca2c1e71cb5457684eff3c35bb8d68b4a0f97"
import matplotlib.pyplot as plt
# PREVIEW IMAGES
plt.figure(figsize=(15,4.5))
for i in range(30):
plt.subplot(3, 10, i+1)
plt.imshow(X_train[i].reshape((28,28)),cmap=plt.cm.binary)
plt.axis('off')
plt.subplots_adjust(wspace=-0.1, hspace=-0.1)
plt.show()
# + [markdown] _uuid="cfcb89d7d2dab632986e80d9f68d194c3c1c9e9f"
# # Generate 25 million more images!!
# by randomly rotating, scaling, and shifting Kaggle's 42,000 images.
# + _uuid="1e61e07d14b9b012748fdaac9eaf02e5263a475e"
# CREATE MORE IMAGES VIA DATA AUGMENTATION
datagen = ImageDataGenerator(
rotation_range=10,
zoom_range = 0.10,
width_shift_range=0.1,
height_shift_range=0.1)
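# + [markdown]
# A rough back-of-the-envelope (a sketch, not part of the original kernel) for the "25 million" figure in the title, using the training settings defined later in this kernel: 90% of the 42,000 images per net, batch size 64, 45 epochs, 15 nets.

# +
imgs_per_net_per_epoch = (37800 // 64) * 64   # steps_per_epoch * batch_size
print(imgs_per_net_per_epoch * 45 * 15)       # about 25 million augmented images in total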
# + _kg_hide-input=true _uuid="fcf6daaae4424b95978856d7e75271c97b971c71"
# PREVIEW AUGMENTED IMAGES
X_train3 = X_train[9,].reshape((1,28,28,1))
Y_train3 = Y_train[9,].reshape((1,10))
plt.figure(figsize=(15,4.5))
for i in range(30):
plt.subplot(3, 10, i+1)
X_train2, Y_train2 = datagen.flow(X_train3,Y_train3).next()
plt.imshow(X_train2[0].reshape((28,28)),cmap=plt.cm.binary)
plt.axis('off')
if i==9: X_train3 = X_train[11,].reshape((1,28,28,1))
if i==19: X_train3 = X_train[18,].reshape((1,28,28,1))
plt.subplots_adjust(wspace=-0.1, hspace=-0.1)
plt.show()
# + [markdown] _uuid="9ea116cd3688cb26ac79b9fecc7309a1aebf3b63"
# # Build 15 Convolutional Neural Networks!
# + _uuid="f6703f3f53c659e95579122755454899d842722a"
# BUILD CONVOLUTIONAL NEURAL NETWORKS
nets = 15
model = [0] *nets
for j in range(nets):
model[j] = Sequential()
model[j].add(Conv2D(32, kernel_size = 3, activation='relu', input_shape = (28, 28, 1)))
model[j].add(BatchNormalization())
model[j].add(Conv2D(32, kernel_size = 3, activation='relu'))
model[j].add(BatchNormalization())
model[j].add(Conv2D(32, kernel_size = 5, strides=2, padding='same', activation='relu'))
model[j].add(BatchNormalization())
model[j].add(Dropout(0.4))
model[j].add(Conv2D(64, kernel_size = 3, activation='relu'))
model[j].add(BatchNormalization())
model[j].add(Conv2D(64, kernel_size = 3, activation='relu'))
model[j].add(BatchNormalization())
model[j].add(Conv2D(64, kernel_size = 5, strides=2, padding='same', activation='relu'))
model[j].add(BatchNormalization())
model[j].add(Dropout(0.4))
model[j].add(Conv2D(128, kernel_size = 4, activation='relu'))
model[j].add(BatchNormalization())
model[j].add(Flatten())
model[j].add(Dropout(0.4))
model[j].add(Dense(10, activation='softmax'))
# COMPILE WITH ADAM OPTIMIZER AND CROSS ENTROPY COST
model[j].compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# + [markdown] _uuid="843d2cb58465b81404c47559ceaf96c139ff82da"
# # Architectural highlights
# 
# The CNNs in this kernel follow [LeNet5's][1] design (pictured above) with the following improvements:
# * Two stacked 3x3 filters replace the single 5x5 filters. These become nonlinear 5x5 convolutions
# * A convolution with stride 2 replaces pooling layers. These become learnable pooling layers.
# * ReLU activation replaces sigmoid.
# * Batch normalization is added
# * Dropout is added
# * More feature maps (channels) are added
# * An ensemble of 15 CNNs with bagging is used
#
# Experiments [(here)][2] show that each of these changes improves classification accuracy; a minimal sketch contrasting fixed pooling with a learnable strided convolution follows after this cell.
#
# [1]:http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf
# [2]:https://www.kaggle.com/cdeotte/how-to-choose-cnn-architecture-mnist
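# + [markdown]
# The snippet below is a sketch (not part of the original kernel) showing why a `Conv2D` with `strides=2` acts as a learnable pooling layer: it produces the same 28x28 -> 14x14 downsampling as 2x2 max-pooling, but with trainable weights.

# +
# Fixed, parameter-free downsampling
fixed_pool = Sequential([
    Conv2D(32, kernel_size=3, activation='relu', padding='same', input_shape=(28, 28, 1)),
    MaxPool2D(pool_size=2)
])
# Learnable downsampling: same spatial reduction, but the "pooling" weights are trained
learnable_pool = Sequential([
    Conv2D(32, kernel_size=3, activation='relu', padding='same', input_shape=(28, 28, 1)),
    Conv2D(32, kernel_size=5, strides=2, padding='same', activation='relu')
])
print(fixed_pool.output_shape, learnable_pool.output_shape)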
# + [markdown] _uuid="e433661c7762b947c0fbfc4ad3f5e1d2e056312c"
# # Train 15 CNNs
# + _uuid="9f1dd8a54aa0fab8530c0095f7d4c4b35984ea6d"
# DECREASE LEARNING RATE EACH EPOCH
annealer = LearningRateScheduler(lambda x: 1e-3 * 0.95 ** x)
# TRAIN NETWORKS
history = [0] * nets
epochs = 45
for j in range(nets):
X_train2, X_val2, Y_train2, Y_val2 = train_test_split(X_train, Y_train, test_size = 0.1)
history[j] = model[j].fit_generator(datagen.flow(X_train2,Y_train2, batch_size=64),
epochs = epochs, steps_per_epoch = X_train2.shape[0]//64,
validation_data = (X_val2,Y_val2), callbacks=[annealer], verbose=0)
print("CNN {0:d}: Epochs={1:d}, Train accuracy={2:.5f}, Validation accuracy={3:.5f}".format(
j+1,epochs,max(history[j].history['acc']),max(history[j].history['val_acc']) ))
# + [markdown] _uuid="28b78b6502d2d1c993555725383f3e30728fa5be"
# # Ensemble 15 CNN predictions and submit
# + _uuid="6e4e01ffe692c34c555bdbf5d606611f9a128b9c"
# ENSEMBLE PREDICTIONS AND SUBMIT
results = np.zeros( (X_test.shape[0],10) )
for j in range(nets):
results = results + model[j].predict(X_test)
results = np.argmax(results,axis = 1)
results = pd.Series(results,name="Label")
submission = pd.concat([pd.Series(range(1,28001),name = "ImageId"),results],axis = 1)
submission.to_csv("MNIST-CNN-ENSEMBLE.csv",index=False)
# + _kg_hide-input=true _uuid="4a9d7710a3b69c2b48bc1687e5b6a27a7076f40a"
# PREVIEW PREDICTIONS
plt.figure(figsize=(15,6))
for i in range(40):
plt.subplot(4, 10, i+1)
plt.imshow(X_test[i].reshape((28,28)),cmap=plt.cm.binary)
plt.title("predict=%d" % results[i],y=0.9)
plt.axis('off')
plt.subplots_adjust(wspace=0.3, hspace=-0.1)
plt.show()
# + [markdown] _uuid="b4d3e3246313e8bcb15c6b485540368d9297537b"
# # Kaggle Result
# 
# Wow, it's amazing that convolutional neural networks can classify handwritten digits so accurately; 99.75% is as good as a human can classify!! This ensemble of 15 CNNs was trained with Kaggle's "train.csv" 42,000 images plus 25 million more images created by rotating, scaling, and shifting Kaggle's "train.csv" images.
# + [markdown] _uuid="372eaad5737aeef94f084c9c3317f6537db2cf2f"
# # How much more accuracy is possible?
# Not much. Here are the best published MNIST classifiers found on the internet:
# * 99.79% [Regularization of Neural Networks using DropConnect, 2013][1]
# * 99.77% [Multi-column Deep Neural Networks for Image Classification, 2012][2]
# * 99.77% [APAC: Augmented PAttern Classification with Neural Networks, 2015][3]
# * 99.76% [Batch-normalized Maxout Network in Network, 2015][4]
# * **99.75% [This Kaggle published kernel, 2018][12]**
# * 99.73% [Convolutional Neural Network Committees, 2011][13]
# * 99.71% [Generalized Pooling Functions in Convolutional Neural Networks, 2016][5]
# * More examples: [here][7], [here][8], and [here][9]
#
# On Kaggle's website, there are no published kernels more accurate than 99.70% besides the one you're reading. The few you will find posted were trained on the full original MNIST dataset (of 70,000 images) which contains known labels for Kaggle's unknown "test.csv" images so those models aren't actually that accurate. For example, [one kernel achieves 100% accuracy][10] training on the original MNIST dataset.
#
# Below is an annotated histogram of Kaggle submission scores. Each bar has range 0.1%. There are spikes at 99.1% and 99.6% accuracy corresponding to the use of convolutional neural networks. Frequency count decreases as scores exceed 99.69%, hitting a low at 99.8%, which is just past the highest possible accuracy. Then frequency count spikes again at accuracies of 99.9% and 100.0%, corresponding to mistakenly training with the full original MNIST dataset.
#
# 
#
# [1]:https://cs.nyu.edu/~wanli/dropc/dropc.pdf
# [2]:http://people.idsia.ch/~ciresan/data/cvpr2012.pdf
# [3]:https://arxiv.org/abs/1505.03229
# [4]:https://arxiv.org/abs/1511.02583
# [5]:https://arxiv.org/abs/1509.08985
# [7]:http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html
# [8]:http://yann.lecun.com/exdb/mnist/
# [9]:https://en.wikipedia.org/wiki/MNIST_database
# [10]:https://www.kaggle.com/cdeotte/mnist-perfect-100-using-knn/
# [12]:https://www.kaggle.com/cdeotte/35-million-images-0-99757-mnist
# [13]:http://people.idsia.ch/~ciresan/data/icdar2011a.pdf
# [14]:http://www.mva-org.jp/Proceedings/2015USB/papers/14-21.pdf
# + [markdown] _uuid="1381453d438dbfa2ef72b50f2ba23ea0622078ac"
# # How well can a human classify?
# Take the following quiz. Here are 50 of the most difficult images from Kaggle's "test.csv". For each image, write down a guess as to what digit it is. Then click the link below to see the correct answers. Hint: Nothing on the bottom row is what it seems and the top 4 rows contain 9 different digits!! Good luck!
#
#
# 
#
#
# Click [here][1] for the answers. The ambiguity and/or mislabeling of certain images is why classifiers cannot achieve accuracy greater than 99.8%.
#
# [1]:http://playagricola.com/Kaggle/answers.png
# + [markdown] _uuid="1d55f8c054432beabeed62024a998234c7cdb7b6"
# # Credits
# The code here was inspired by the following outstanding Kaggle kernels (in addition to the publications above).
#
# * [<NAME>][1] - [Introduction to CNN Keras - 0.997 (top 6%)][2]
# * [<NAME>][5] - [Welcome to deep learning (CNN 99%)][6]
# * [<NAME>][3] - [Digits Recognition With CNN Keras][4]
# * [<NAME>][7] - [MNIST with Keras for Beginners(.99457)][8]
#
# [1]:https://www.kaggle.com/yassineghouzam
# [2]:https://www.kaggle.com/yassineghouzam/introduction-to-cnn-keras-0-997-top-6
# [3]:https://www.kaggle.com/dingli
# [4]:https://www.kaggle.com/dingli/digits-recognition-with-cnn-keras
# [5]:https://www.kaggle.com/toregil
# [6]:https://www.kaggle.com/toregil/welcome-to-deep-learning-cnn-99/
# [8]:https://www.kaggle.com/adityaecdrid/mnist-with-keras-for-beginners-99457/
# [7]:https://www.kaggle.com/adityaecdrid
# + [markdown] _uuid="e716712230dda4fc08bc4705cfbbb3c9a9942228"
# # CNN Performance
# How can we evaluate the performance of a neural network? A trained neural network performs differently each time you train it since the weights are randomly initialized. Therefore, to assess a neural network's performance, we must train it many times and take an average of accuracy. The ensemble in this notebook was trained and evaluated 100 times!! (on the original MNIST dataset with 60k/10k split using the code template [here][1] on GitHub.) Below is a histogram of its accuracy.
#
# The maximum accuracy of an individual CNN was 99.81% with average accuracy 99.641% and standard deviation 0.047. The maximum accuracy of an ensemble of fifteen CNNs was 99.79% with average accuracy 99.745% and standard deviation 0.020.
#
# 
#
# ## Data augmentation hyper-parameters
# To determine the best hyper-parameters for data augmentation, grid search was used. Below is the accuracy of an ensemble (of 15 CNNs) with various data augmentation settings. The columns are `rotation` and `zoom`. The rows are `w_shift` and `h_shift`. For example: row 2, column 4 is `r = 15, z = 0.15, w = 0.1, h = 0.1`. Each cell is the average of 6 trials:
#
# | `w_shift`, `h_shift` \ `rotation`, `zoom` | 0 | 5 | 10 | 15 | 20 | 25 | 30 |
# | --- | --- | --- | --- | --- | --- | --- | --- |
# | 0 | 99.70 | 99.70 | 99.70 | 99.70 | 99.69 | 99.65 | 99.62 |
# | 0.1 | 99.73 | 99.73 | 99.75 | 99.75 | 99.72 | 99.67 | 99.64 |
# | 0.2 | 99.72 | 99.72 | | | | | |
#
# Below is the accuracy of a single CNN with various data augmentation settings. Each cell is the average of 30 trials.
#
# | `w_shift`, `h_shift` \ `rotation`, `zoom` | 0 | 5 | 10 | 15 | 20 | 25 | 30 |
# | --- | --- | --- | --- | --- | --- | --- | --- |
# | 0 | 99.57 | 99.58 | 99.62 | 99.62 | 99.62 | 99.57 | 99.52 |
# | 0.1 | 99.62 | 99.63 | 99.65 | 99.65 | 99.63 | 99.58 | 99.52 |
# | 0.2 | 99.62 | 99.62 | | | | | |
#
# Lastly, I calculated the variance of the MNIST training images. The average center in pixels = (14.9, 15.2). The standard deviation of centers in pixels = (0.99, 1.34). That means that a setting of `w_shift = 0.07` together with `h_shift = 0.09` contains 95% of the centers. Similar analysis shows that a setting of `rotation_range = 13` together with `zoom_range = 0.13` contains 95% of the images.
#
# Based on this analysis, the settings of `rotation_range = 10, zoom_range = 0.10, w_shift = 0.1, and h_shift = 0.1` were chosen.
#
# [1]:https://github.com/cdeotte/MNIST-CNN-99.75
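# + [markdown]
# The centroid statistics quoted above can be reproduced with a short computation. This is a sketch (not part of the original kernel): each image's pixel intensities are used as weights for a center of mass, and the mean and standard deviation of those centers are then reported.

# +
imgs = X_train.reshape(-1, 28, 28)      # X_train was already scaled to [0, 1] above
rows = np.arange(28).reshape(28, 1)     # row indices, broadcast over columns
cols = np.arange(28).reshape(1, 28)     # column indices, broadcast over rows
mass = imgs.sum(axis=(1, 2))            # total "ink" per image
center_y = (imgs * rows).sum(axis=(1, 2)) / mass
center_x = (imgs * cols).sum(axis=(1, 2)) / mass
print('mean center (x, y): (%.1f, %.1f)' % (center_x.mean(), center_y.mean()))
print('std of centers (x, y): (%.2f, %.2f)' % (center_x.std(), center_y.std()))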
| 2 digit recognizer/25-million-images-0-99757-mnist.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
from matplotlib import pyplot as plt
import numpy as np
import GPy
import warnings
warnings.filterwarnings('ignore')
import logging
logging.basicConfig(level=logging.INFO)
# %load_ext autoreload
# %autoreload 2
# +
from emukit.model_wrappers import GPyModelWrapper
from emukit.model_wrappers.gpy_quadrature_wrappers import BaseGaussianProcessGPy, RBFGPy
from emukit.core import ParameterSpace, ContinuousParameter, DiscreteParameter
from emukit.core.loop import UserFunctionWrapper
from emukit.core import ParameterSpace, ContinuousParameter
from emukit.core.initial_designs import RandomDesign
from GPy.models import GPRegression
# -
# Double check that the user function (which provides the data points) returns the appropriate format.
#
# Here, the user function (the one we want to estimate) should return a 2d array or a tuple of 2d arrays.
# +
from skopt.benchmarks import branin as _branin
from emukit.test_functions import branin_function
f, _ = branin_function() # or branin()
def branin(x, noise_level=0.):
return np.reshape(_branin(x) + noise_level * np.random.randn(), (-1,1))
# +
from scse.api.simulation import run_simulation
def f(X):
Y = []
for x in X:
num_batteries = x[0]
cum_reward = run_simulation(time_horizon=336, num_batteries=num_batteries)
Y.append(cum_reward[-1])
Y = np.reshape(np.array(Y), (-1, 1))
return Y
# -
# Initial design / data points
# +
max_num_batteries = 25
num_batteries = DiscreteParameter('num_batteries', [i for i in range(0, max_num_batteries)])
week = 336
time_horizon = DiscreteParameter('time_horizon', [i for i in range(0, 52*week, week)])
parameter_space = ParameterSpace([num_batteries])
design = RandomDesign(parameter_space)
num_data_points = 4
X = design.get_samples(num_data_points)
X
# -
Y = f(X)
Y
# +
shape = (100, 1)
X0 = np.random.randint(-5, 10, shape)
X1 = np.random.randint(0, 15, shape)
X = np.concatenate((X0, X1), axis=1)
Y = np.reshape(np.append([], [branin(x) for x in X]), (-1, 1))
# or use RandomDesign
parameter_space = ParameterSpace([ContinuousParameter(
'x1', -5, 10), ContinuousParameter('x2', 0, 15)])
design = RandomDesign(parameter_space)
# NOTE: the condition below is true for the grid generated above, so X is replaced by the
# random design samples; change it to `len(X) == 0` if you want to keep the grid instead.
if len(X) != 0:
num_data_points = shape[0]
X = design.get_samples(num_data_points)
# -
# Plotting initial design values (branin)
# +
from matplotlib.colors import LogNorm
from matplotlib import pyplot as plt
x_ax, y_ax = np.meshgrid(X0, X1)
vals = np.c_[x_ax.ravel(), y_ax.ravel()]
fx = np.reshape([branin(val) for val in vals], (shape[0], shape[0]))
fig, ax = plt.subplots(figsize=(8,8))
cm = ax.pcolormesh(x_ax, y_ax, fx,
norm=LogNorm(vmin=fx.min(),
vmax=fx.max()),
cmap='viridis_r')
minima = np.array([[-np.pi, 12.275], [+np.pi, 2.275], [9.42478, 2.475]])
found_minima = np.reshape([x_ax[np.where(fx == np.min(fx))][0], y_ax[np.where(fx == np.min(fx))][0]], (1, 2))
ax.plot(minima[:, 0], minima[:, 1], "r.", markersize=14,
lw=0, label="Branin Minima")
ax.plot(found_minima[:, 0], found_minima[:, 1], "y.", markersize=14,
lw=0, label="Sampled Minima")
cb = fig.colorbar(cm)
cb.set_label("f(x)")
ax.legend(loc="best", numpoints=1)
ax.set_xlabel("X1")
ax.set_xlim([-5, 10])
ax.set_ylabel("X2")
ax.set_ylim([0, 15])
plt.show()
# -
# # Initialize Emulator Model
np.std(Y)
# +
kernel = GPy.kern.RBF(1, lengthscale=1e1, variance=1e4)
gpy_model = GPy.models.GPRegression(X, Y, kernel, noise_var=1e-10)
gpy_model.optimize()
model_emukit = GPyModelWrapper(gpy_model)
# -
# # Decision Loop
# +
# Decision loops
from emukit.experimental_design import ExperimentalDesignLoop
from emukit.bayesian_optimization.loops import BayesianOptimizationLoop
from emukit.quadrature.loop import VanillaBayesianQuadratureLoop
# Acquisition functions
from emukit.bayesian_optimization.acquisitions import ExpectedImprovement
from emukit.experimental_design.acquisitions import ModelVariance
# from emukit.quadrature.acquisitions import IntegralVarianceReduction
from emukit.experimental_design.acquisitions import IntegratedVarianceReduction
# Acquistion optimizers
from emukit.core.optimization import GradientAcquisitionOptimizer
# Stopping conditions
from emukit.core.loop import FixedIterationsStoppingCondition
from emukit.core.loop import ConvergenceStoppingCondition
# +
from emukit.bayesian_optimization.acquisitions.log_acquisition import LogAcquisition
# +
# Load core elements for Bayesian optimization
expected_improvement = ExpectedImprovement(model=model_emukit)
us_acquisition = ModelVariance(model_emukit)
ivr_acquisition = IntegratedVarianceReduction(model_emukit, parameter_space)
log_acq = LogAcquisition(expected_improvement)
optimizer = GradientAcquisitionOptimizer(space=parameter_space)
# -
# Create the Bayesian optimization object
bayesopt_loop = BayesianOptimizationLoop(model=model_emukit,
space=parameter_space,
acquisition=log_acq,
batch_size=3)
# +
# Run the loop and extract the optimum
# Run the loop until we either complete 10 steps or converge
stopping_condition = FixedIterationsStoppingCondition(
i_max=10) | ConvergenceStoppingCondition(eps=0.01)
bayesopt_loop.run_loop(f, stopping_condition)
# -
initial_design_samples = num_data_points
new_Y = bayesopt_loop.loop_state.Y[initial_design_samples:, :]
new_X = bayesopt_loop.loop_state.X[initial_design_samples:, :]
print("X shape: ", new_X.shape, ", Y shape: ", new_Y.shape)
# Note that this "sorting" of inputs only works in 1D due to flattening.
# When we move to 2D, we should not plot lines but scatters
new_X = bayesopt_loop.loop_state.X
print(new_X.shape)
order = new_X.argsort(axis=0)
new_X = new_X[order]
print(new_X.shape, bayesopt_loop.loop_state.Y[order].shape)
# new_X = new_X.flatten().reshape(-1, 1)
new_X = new_X[:,:,0].flatten().reshape(-1, 1)
new_Y = bayesopt_loop.loop_state.Y[order]
new_Y = new_Y.flatten().reshape(-1,1)
order.shape, new_Y.shape, new_X.shape
new_Y
plt.plot(new_X, new_Y)
plt.style.use('seaborn')
plt.title("Initial runs")
plt.xlabel(parameter_space.parameters[0].name)
plt.ylabel("Cumulative reward")
plt.show()
x_plot = np.reshape(np.array([i for i in range(0, 25)]), (-1,1))
X
# +
mu_plot, var_plot = model_emukit.predict(x_plot)
plt.figure(figsize=(12, 8))
LEGEND_SIZE = 15
plt.plot(new_X, new_Y, "ro", markersize=10, label="All observations")
plt.plot(X, Y, "bo", markersize=10, label="Initial observations")
# plt.plot(x_plot, y_plot, "k", label="Objective Function")
plt.plot(x_plot, mu_plot, "C0", label="Model")
plt.fill_between(x_plot[:, 0],
mu_plot[:, 0] + np.sqrt(var_plot)[:, 0],
mu_plot[:, 0] - np.sqrt(var_plot)[:, 0], color="C0", alpha=0.6)
plt.fill_between(x_plot[:, 0],
mu_plot[:, 0] + 2 * np.sqrt(var_plot)[:, 0],
mu_plot[:, 0] - 2 * np.sqrt(var_plot)[:, 0], color="C0", alpha=0.4)
plt.fill_between(x_plot[:, 0],
mu_plot[:, 0] + 3 * np.sqrt(var_plot)[:, 0],
mu_plot[:, 0] - 3 * np.sqrt(var_plot)[:, 0], color="C0", alpha=0.2)
plt.legend(loc=2, prop={'size': LEGEND_SIZE})
plt.xlabel(r"$x$")
plt.ylabel(r"$f(x)$")
plt.grid(True)
plt.xlim(0, 25)
plt.show()
# +
results = bayesopt_loop.get_results()
print("minimum reward = {}".format(results.minimum_value))
print("(X = {}, Y = {})".format(results.minimum_location, results.minimum_value))
# -
us_plot = us_acquisition.evaluate(x_plot)
ivr_plot = ivr_acquisition.evaluate(x_plot)
# IVR is arguably the more principled approach, but US is often preferred over IVR simply because it lends itself more easily to gradient-based optimization, is cheaper to compute, and is exact. For both of them, (stochastic) gradient-based optimizers are used to select the next data point.
# +
plt.figure(figsize=(12, 8))
plt.plot(x_plot, us_plot / np.max(us_plot), "green", label="US")
plt.plot(x_plot, ivr_plot / np.max(ivr_plot) , "purple", label="IVR")
plt.legend(loc=1, prop={'size': LEGEND_SIZE})
plt.xlabel(r"$x$")
plt.ylabel(r"$f(x)$")
plt.grid(True)
plt.xlim(0, 25)
plt.show()
# -
# # Saving code for later plotting
# +
# NOTE: saved snippet - FIG_SIZE, x_1, x_2, y_reshape, space, emukit_model, ei and
# acquisition_optimizer are placeholders that must be defined before this cell can run.
def plot_progress(loop, loop_state):
plt.figure(figsize=FIG_SIZE)
plt.contourf(x_1, x_2, y_reshape)
# plt.plot(x_0_constraint, x_1_constraint, linewidth=3, color='k')
plt.plot(loop_state.X[:-1, 0], loop_state.X[:-1, 1],
linestyle='', marker='.', markersize=16, color='b')
plt.plot(loop_state.X[-1, 0], loop_state.X[-1, 1],
linestyle='', marker='.', markersize=16, color='r')
plt.legend(['Constraint boundary', 'Previously evaluated points', 'Last evaluation'])
# Make BO loop
bo_loop = BayesianOptimizationLoop(
space, emukit_model, ei, acquisition_optimizer=acquisition_optimizer)
# append plot_progress function to iteration end event
bo_loop.iteration_end_event.append(plot_progress)
bo_loop.run_loop(f, 10)
# -
# # Plotting 2D branin
#
# NOTE: the cells below assume the 2D branin run (two input columns in `new_X` and `loop_state.X`); they will not work for the 1D battery example.
# +
plt.figure(figsize=(8,8))
ax = plt.axes(projection='3d')
zline = bayesopt_loop.loop_state.Y.flatten()
xline = bayesopt_loop.loop_state.X[:, 0]
yline = bayesopt_loop.loop_state.X[:, 1]
ax.plot_trisurf(xline, yline, zline, cmap='viridis')
plt.show()
# +
x_ax, y_ax = np.meshgrid(new_X[:, 0], new_X[:, 1])
vals = np.c_[x_ax.ravel(), y_ax.ravel()]
fx = np.reshape([branin(val) for val in vals], x_ax.shape)
fig, ax = plt.subplots(figsize=(8, 8))
cm = ax.pcolormesh(x_ax, y_ax, fx,
norm=LogNorm(vmin=fx.min(),
vmax=fx.max()),
cmap='viridis_r')
minima = np.array([[-np.pi, 12.275], [+np.pi, 2.275], [9.42478, 2.475]])
found_minima = new_X[np.where((new_Y == np.min(new_Y)))[0]]
ax.plot(minima[:, 0], minima[:, 1], "r.", markersize=14,
lw=0, label="Branin Minima")
ax.plot(found_minima[:, 0], found_minima[:, 1], "y.", markersize=14,
lw=0, label="Sampled Minima")
cb = fig.colorbar(cm)
cb.set_label("f(x)")
ax.legend(loc="best", numpoints=1)
ax.set_xlabel("X1")
ax.set_xlim([-5, 10])
ax.set_ylabel("X2")
ax.set_ylim([0, 15])
# -
| src/bayesian_optimization/test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Noise Bézier
#
# Demo using both `noise()` and `bezier()`.
# +
import vsketch
vsk = vsketch.Vsketch()
vsk.size("a4")
vsk.scale("1cm")
NUM = 200
FREQ = 0.003
SPEED = 0.06
for i in range(NUM):
t = i * FREQ
v = i * SPEED
vsk.bezier(
vsk.noise(t, 0) * 10 + v,
vsk.noise(t, 1000) * 10 + v,
vsk.noise(t, 2000) * 10 + v,
vsk.noise(t, 3000) * 10 + v,
vsk.noise(t, 4000) * 10 + v,
vsk.noise(t, 5000) * 10 + v,
vsk.noise(t, 6000) * 10 + v,
vsk.noise(t, 7000) * 10 + v,
)
vsk.display(mode="matplotlib")
| examples/_notebooks/noise_bezier.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Comparing different clustering algorithms on toy datasets
#
# https://scikit-learn.org/stable/auto_examples/cluster/plot_cluster_comparison.html#sphx-glr-auto-examples-cluster-plot-cluster-comparison-py
# +
print(__doc__)
import time
import warnings
import numpy as np
import matplotlib.pyplot as plt
from sklearn import cluster, datasets, mixture
from sklearn.neighbors import kneighbors_graph
from sklearn.preprocessing import StandardScaler
from itertools import cycle, islice
np.random.seed(0)
# +
# ============
# Generate datasets. We choose the size big enough to see the scalability
# of the algorithms, but not too big to avoid too long running times
# ============
n_samples = 1500
noisy_circles = datasets.make_circles(n_samples=n_samples, factor=.5,
noise=.05)
noisy_moons = datasets.make_moons(n_samples=n_samples, noise=.05)
blobs = datasets.make_blobs(n_samples=n_samples, random_state=8)
no_structure = np.random.rand(n_samples, 2), None
# Anisotropicly distributed data
random_state = 170
X, y = datasets.make_blobs(n_samples=n_samples, random_state=random_state)
transformation = [[0.6, -0.6], [-0.4, 0.8]]
X_aniso = np.dot(X, transformation)
aniso = (X_aniso, y)
# blobs with varied variances
varied = datasets.make_blobs(n_samples=n_samples,
cluster_std=[1.0, 2.5, 0.5],
random_state=random_state)
# ============
# Set up cluster parameters
# ============
plt.figure(figsize=(9 * 2 + 3, 13))
plt.subplots_adjust(left=.02, right=.98, bottom=.001, top=.95, wspace=.05,
hspace=.01)
plot_num = 1
default_base = {'quantile': .3,
'eps': .3,
'damping': .9,
'preference': -200,
'n_neighbors': 10,
'n_clusters': 3,
'min_samples': 20,
'xi': 0.05,
'min_cluster_size': 0.1}
datasets = [
(noisy_circles, {'damping': .77, 'preference': -240,
'quantile': .2, 'n_clusters': 2,
'min_samples': 20, 'xi': 0.25}),
(noisy_moons, {'damping': .75, 'preference': -220, 'n_clusters': 2}),
(varied, {'eps': .18, 'n_neighbors': 2,
'min_samples': 5, 'xi': 0.035, 'min_cluster_size': .2}),
(aniso, {'eps': .15, 'n_neighbors': 2,
'min_samples': 20, 'xi': 0.1, 'min_cluster_size': .2}),
(blobs, {}),
(no_structure, {})]
for i_dataset, (dataset, algo_params) in enumerate(datasets):
# update parameters with dataset-specific values
params = default_base.copy()
params.update(algo_params)
X, y = dataset
# normalize dataset for easier parameter selection
X = StandardScaler().fit_transform(X)
# estimate bandwidth for mean shift
bandwidth = cluster.estimate_bandwidth(X, quantile=params['quantile'])
# connectivity matrix for structured Ward
connectivity = kneighbors_graph(
X, n_neighbors=params['n_neighbors'], include_self=False)
# make connectivity symmetric
connectivity = 0.5 * (connectivity + connectivity.T)
# ============
# Create cluster objects
# ============
ms = cluster.MeanShift(bandwidth=bandwidth, bin_seeding=True)
two_means = cluster.MiniBatchKMeans(n_clusters=params['n_clusters'])
ward = cluster.AgglomerativeClustering(
n_clusters=params['n_clusters'], linkage='ward',
connectivity=connectivity)
spectral = cluster.SpectralClustering(
n_clusters=params['n_clusters'], eigen_solver='arpack',
affinity="nearest_neighbors")
dbscan = cluster.DBSCAN(eps=params['eps'])
optics = cluster.OPTICS(min_samples=params['min_samples'],
xi=params['xi'],
min_cluster_size=params['min_cluster_size'])
affinity_propagation = cluster.AffinityPropagation(
damping=params['damping'], preference=params['preference'])
average_linkage = cluster.AgglomerativeClustering(
linkage="average", affinity="cityblock",
n_clusters=params['n_clusters'], connectivity=connectivity)
birch = cluster.Birch(n_clusters=params['n_clusters'])
gmm = mixture.GaussianMixture(
n_components=params['n_clusters'], covariance_type='full')
clustering_algorithms = (
('MiniBatch\nKMeans', two_means),
('Affinity\nPropagation', affinity_propagation),
('MeanShift', ms),
('Spectral\nClustering', spectral),
('Ward', ward),
('Agglomerative\nClustering', average_linkage),
('DBSCAN', dbscan),
('OPTICS', optics),
('BIRCH', birch),
('Gaussian\nMixture', gmm)
)
for name, algorithm in clustering_algorithms:
t0 = time.time()
# catch warnings related to kneighbors_graph
with warnings.catch_warnings():
warnings.filterwarnings(
"ignore",
message="the number of connected components of the " +
"connectivity matrix is [0-9]{1,2}" +
" > 1. Completing it to avoid stopping the tree early.",
category=UserWarning)
warnings.filterwarnings(
"ignore",
message="Graph is not fully connected, spectral embedding" +
" may not work as expected.",
category=UserWarning)
algorithm.fit(X)
t1 = time.time()
if hasattr(algorithm, 'labels_'):
y_pred = algorithm.labels_.astype(int)
else:
y_pred = algorithm.predict(X)
plt.subplot(len(datasets), len(clustering_algorithms), plot_num)
if i_dataset == 0:
plt.title(name, size=18)
colors = np.array(list(islice(cycle(['#377eb8', '#ff7f00', '#4daf4a',
'#f781bf', '#a65628', '#984ea3',
'#999999', '#e41a1c', '#dede00']),
int(max(y_pred) + 1))))
# add black color for outliers (if any)
colors = np.append(colors, ["#000000"])
plt.scatter(X[:, 0], X[:, 1], s=10, color=colors[y_pred])
plt.xlim(-2.5, 2.5)
plt.ylim(-2.5, 2.5)
plt.xticks(())
plt.yticks(())
plt.text(.99, .01, ('%.2fs' % (t1 - t0)).lstrip('0'),
transform=plt.gca().transAxes, size=15,
horizontalalignment='right')
plot_num += 1
plt.show()
# -
| manifold_learning/sklearn_clustering.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# name: python2
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/IamShivamJaiswal/object_train_tensorflow_colab/blob/master/object_custom_tf_colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="x2fJWG6zpB12" colab_type="text"
# #Tensorflow Object Detection with custom dataset in Google Colab
#
# Jupyter notebook providing the steps to retrain a [ModelZoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) model on a custom dataset.
#
# It runs in [Google Colab](https://colab.research.google.com) using [Tensorflow Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection).
#
# **The only requirements are the dataset images and annotation files.**
#
# The code is compatible with the Object Detection API updates in the July 13, 2018 [release](https://github.com/tensorflow/models/tree/master/research/object_detection#july-13-2018).
#
# **Colab Runtime type: Python2, GPU enabled.**
#
#
# + [markdown] id="5kLfbVg8PPaY" colab_type="text"
# #Create Dataset
#
# I generated dataset annotations with [LabelImg](https://github.com/tzutalin/labelImg).
#
# The notebook trains a model for single-class object detection. It can be modified slightly to train a model for multiple classes.
#
# Before running the notebook, we need to create the dataset:
#
# 1. Collect various pictures of the objects to detect
# 2. Rename the image filenames to the format `objectclass_seq.jpg` (see the renaming sketch after this section)
# 3. In LabelImg create annotation files. LabelImg saves annotations as XML files in PASCAL VOC format
# 4. Create dataset.zip file having structure defined below
# 5. Upload the zip file in your Google Drive
#
# Zip file structure:
# ```
# dataset.zip file
# |-images directory
# |-image files (filename format: objectclass_seq.jpg)
# |-annotations directory
# |-xmls directory
# |-annotation files (filename format: objectclass_seq.xml)
# ```
#
# Where `objectclass` is the class name, `seq` is a sequence number (001, 002, 003, ...)
#
# See my dataset.zip file for an example dataset.
#
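# A small helper sketch (not part of the original workflow) for step 2 above: it renames the collected images into the `objectclass_seq.jpg` pattern. `raw_dir` and `objectclass` are placeholder values.

# +
import os

raw_dir = 'raw_images'   # placeholder: folder containing the collected pictures
objectclass = 'dog'      # placeholder: your single object class name
seq = 0
for fname in sorted(os.listdir(raw_dir)):
    if not fname.lower().endswith('.jpg'):
        continue
    seq += 1
    new_name = '{}_{:03d}.jpg'.format(objectclass, seq)
    os.rename(os.path.join(raw_dir, fname), os.path.join(raw_dir, new_name))
# -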
# + [markdown] id="GOpn4IebMl6p" colab_type="text"
# # Install required packages
#
# + id="NTGqlSTLuxld" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 326} outputId="d78c5957-740e-4801-d085-2ab6be32eda7"
# %cd
# !git clone --quiet https://github.com/tensorflow/models.git
# !apt-get install -qq protobuf-compiler python-tk
# !pip install -q Cython contextlib2 pillow lxml matplotlib PyDrive
# !pip install -q pycocotools
# %cd ~/models/research
# !protoc object_detection/protos/*.proto --python_out=.
import os
os.environ['PYTHONPATH'] += ':'+os.path.abspath(os.curdir)+':'+os.path.abspath(os.curdir)+'/slim/'
# !python object_detection/builders/model_builder_test.py
# + [markdown] id="UiXiLQumY-nz" colab_type="text"
# # Download and extract dataset
#
#
# * Change the name attribute in label_map to match the objectclass used in your filenames.
# * Substitute the fileId value with the id of your dataset.zip in Google Drive (a PyDrive download sketch is shown below). See my answer [here](https://stackoverflow.com/a/48855034/9250875) on how to get the file id.
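#
# The next cell is a sketch only (the original flow below clones a GitHub repo instead): it shows one way to pull `dataset.zip` from Google Drive with PyDrive, assuming you replace the placeholder fileId with your own file id.

# +
from google.colab import auth
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from oauth2client.client import GoogleCredentials

# standard Colab + PyDrive authentication recipe
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)

fileId = 'REPLACE_WITH_YOUR_FILE_ID'  # placeholder: take the id from the Drive share link
downloaded = drive.CreateFile({'id': fileId})
downloaded.GetContentFile('dataset.zip')
# !unzip -qq dataset.zip
# -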
# + id="THhnes1ckVA5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1802} outputId="54380b60-b383-40d3-b5dc-d936db302171"
# %cd ~/../datalab/
# !git clone --quiet https://github.com/IamShivamJaiswal/object_train_tensorflow_colab
# !unzip ./object_train_tensorflow_colab/dog_dataset.zip
image_files=os.listdir('images')
im_files=[x.split('.')[0] for x in image_files]
with open('annotations/trainval.txt', 'w') as text_file:
for row in im_files:
text_file.write(row + '\n')
# + [markdown] id="sJDLJeonEVDu" colab_type="text"
# #Empty png files
# Create empty PNG mask files to avoid an error in create_pet_tf_record.py; they are not used to train the model.
# + id="yre-rvSJ83bW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="2ea72915-8051-44a4-cb29-950d95c0bf56"
# %cd ~/../datalab/annotations
# !mkdir trimaps
from PIL import Image
image = Image.new('RGB', (640, 480))
for filename in os.listdir('xmls'):
filename = os.path.splitext(filename)[0]
image.save('trimaps/' + filename + '.png')
# + [markdown] id="41CuxuIYdWUx" colab_type="text"
# # Create TFRecord
# + id="9CDVwKwcNEX9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="9cc44f28-8508-4ccb-ce7b-a543901202f8"
# %cd ~/../datalab/
# + id="tjWVfy6xx8vl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="86c90291-77b7-4402-cacc-2b4c74c89f2b"
# %%writefile ./label_map.pbtxt
item {
id: 1
name: 'dog'
}
# + id="oXeqgzNhluaC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 88} outputId="05a0b6ea-7929-4dde-94d2-a5c522f8a6c9"
# %cd ~/../datalab/
# !python ~/models/research/object_detection/dataset_tools/create_pet_tf_record.py --label_map_path=./label_map.pbtxt --data_dir=. --output_dir=. --num_shards=1
# !mv pet_faces_train.record-00000-of-00001 tf_train.record
# !mv pet_faces_val.record-00000-of-00001 tf_val.record
# + [markdown] id="jeyO_oSKdhsG" colab_type="text"
# # Download pretrained model
#
# The cell downloads the **faster_rcnn_inception_v2_coco** model to use as the starting checkpoint.
#
# To use another model from the ModelZoo, change the MODEL var.
# + id="sUDk1gLQsWOz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="e7da5d31-80f6-4782-ad8d-9b4cfb05c026"
# %cd ~/../datalab/
import os
import shutil
import glob
import urllib
import tarfile
MODEL = 'faster_rcnn_inception_v2_coco_2018_01_28'
MODEL_FILE = MODEL + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'
DEST_DIR = 'pretrained_model'
if not (os.path.exists(MODEL_FILE)):
opener = urllib.URLopener()
opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar = tarfile.open(MODEL_FILE)
tar.extractall()
tar.close()
os.remove(MODEL_FILE)
if (os.path.exists(DEST_DIR)):
shutil.rmtree(DEST_DIR)
os.rename(MODEL, DEST_DIR)
# + [markdown] id="2HTraQgqgW3v" colab_type="text"
# # Edit model config file
# If you used a different pretrained model in the step before, update the filename var and the re.sub calls in the next cell accordingly.
#
# + id="f1twuMBWvhL4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="e5bd9d94-e671-495e-90b9-7510a20d0d51"
# %cd ~/../datalab/
import re
filename = '/root/models/research/object_detection/samples/configs/faster_rcnn_inception_v2_pets.config'
with open(filename) as f:
s = f.read()
with open(filename, 'w') as f:
s = re.sub('PATH_TO_BE_CONFIGURED/model.ckpt', '/root/../datalab/pretrained_model/model.ckpt', s)
s = re.sub('PATH_TO_BE_CONFIGURED/pet_faces_train.record-\?\?\?\?\?-of-00010', '/root/../datalab/tf_train.record', s)
s = re.sub('PATH_TO_BE_CONFIGURED/pet_faces_val.record-\?\?\?\?\?-of-00010', '/root/../datalab/tf_val.record', s)
s = re.sub('PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt', '/root/../datalab/label_map.pbtxt', s)
f.write(s)
# + [markdown] id="MAYXLhS2uZ9X" colab_type="text"
# # Train model
# Set the num_train_steps and num_eval_steps values to change the number of training and evaluation steps.
#
#
# + id="hl6mCzbz8QKV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 6072} outputId="ae797803-d978-484a-a8f2-a780108b4e92"
# %cd ~/../datalab/
# !python /root/models/research/object_detection/legacy/train.py \
# --pipeline_config_path=/root/models/research/object_detection/samples/configs/faster_rcnn_inception_v2_pets.config \
# --train_dir=/root/../datalab/trained \
# --alsologtostderr \
# --num_train_steps=3000 \
# --num_eval_steps=500
# + [markdown] id="rjJCB5NKK4Nb" colab_type="text"
# #Export trained model
#
# Export the trained model checkpoint with the highest step number in its filename.
# + id="cp73hpU8ZrQ9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 10441} outputId="9d4aa19f-a0ae-491b-cfe1-329684e194e3"
# %cd ./../datalab
lst = os.listdir('trained')
lf = filter(lambda k: 'model.ckpt-' in k, lst)
# NOTE: lexicographic sort - this picks the highest step only when all saved step numbers have the same number of digits
last_model = sorted(lf)[-1].replace('.meta', '')
# !python ~/models/research/object_detection/export_inference_graph.py \
# --input_type=image_tensor \
# --pipeline_config_path=/root/models/research/object_detection/samples/configs/faster_rcnn_inception_v2_pets.config \
# --output_directory=fine_tuned_model \
# --trained_checkpoint_prefix=trained/$last_model
# + [markdown] id="wDEY7rmQE7nQ" colab_type="text"
# #Upload jpg image for inference
# + id="oI8Ya_6GE9ll" colab_type="code" colab={}
# %cd ./../datalab
from google.colab import files
from os import path
uploaded = files.upload()
for name, data in uploaded.items():
with open('image1.jpg', 'wb') as f:
f.write(data)
f.close()
print('saved file ' + name)
# + [markdown] id="yEKYdPJSoHb6" colab_type="text"
# # Run inference
#
# + id="VUy6KXMToLVc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1659} outputId="e537539f-9042-4c51-b747-4e2aee8ee89a"
# %cd ~/models/research/object_detection
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")
from object_detection.utils import ops as utils_ops
if tf.__version__ < '1.4.0':
  raise ImportError('Please upgrade your tensorflow installation to v1.4.* or later!')
# This is needed to display the images.
# %matplotlib inline
from utils import label_map_util
from utils import visualization_utils as vis_util
# What model to download.
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = '/root/../datalab/fine_tuned_model' + '/frozen_inference_graph.pb'
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = os.path.join('/root/../datalab', 'label_map.pbtxt')
NUM_CLASSES = 37  # leftover from the pets tutorial; 1 would be enough for the single 'dog' class
detection_graph = tf.Graph()
with detection_graph.as_default():
od_graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
def load_image_into_numpy_array(image):
(im_width, im_height) = image.size
return np.array(image.getdata()).reshape(
(im_height, im_width, 3)).astype(np.uint8)
from glob import glob
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = '/content/datalab/'
TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 2) ]
TEST_IMAGE_PATHS = glob('/root/../datalab/images/*.jpg')[:1]
# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)
def run_inference_for_single_image(image, graph):
with graph.as_default():
with tf.Session() as sess:
# Get handles to input and output tensors
ops = tf.get_default_graph().get_operations()
all_tensor_names = {output.name for op in ops for output in op.outputs}
tensor_dict = {}
for key in [
'num_detections', 'detection_boxes', 'detection_scores',
'detection_classes', 'detection_masks'
]:
tensor_name = key + ':0'
if tensor_name in all_tensor_names:
tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(
tensor_name)
if 'detection_masks' in tensor_dict:
# The following processing is only for single image
detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
# Reframe is required to translate mask from box coordinates to image coordinates and fit the image size.
real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])
detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1])
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
detection_masks, detection_boxes, image.shape[0], image.shape[1])
detection_masks_reframed = tf.cast(
tf.greater(detection_masks_reframed, 0.5), tf.uint8)
# Follow the convention by adding back the batch dimension
tensor_dict['detection_masks'] = tf.expand_dims(
detection_masks_reframed, 0)
image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')
# Run inference
output_dict = sess.run(tensor_dict,
feed_dict={image_tensor: np.expand_dims(image, 0)})
# all outputs are float32 numpy arrays, so convert types as appropriate
output_dict['num_detections'] = int(output_dict['num_detections'][0])
output_dict['detection_classes'] = output_dict[
'detection_classes'][0].astype(np.uint8)
output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
output_dict['detection_scores'] = output_dict['detection_scores'][0]
if 'detection_masks' in output_dict:
output_dict['detection_masks'] = output_dict['detection_masks'][0]
return output_dict
for image_path in TEST_IMAGE_PATHS:
image = Image.open(image_path)
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
image_np = load_image_into_numpy_array(image)
# Expand dimensions since the model expects images to have shape: [1, None, None, 3]
image_np_expanded = np.expand_dims(image_np, axis=0)
# Actual detection.
output_dict = run_inference_for_single_image(image_np, detection_graph)
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
output_dict['detection_boxes'],
output_dict['detection_classes'],
output_dict['detection_scores'],
category_index,
instance_masks=output_dict.get('detection_masks'),
use_normalized_coordinates=True,
line_thickness=8)
plt.figure(figsize=IMAGE_SIZE)
plt.imshow(image_np)
# + id="mNsjsTPVLACH" colab_type="code" colab={}
| object_custom_tf_colab.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import sys
import os
import tensorflow as tf
import numpy as np
import librosa
import math
from time import time
sys.path.append('..')
from wavenet.model import WaveNetModel
from wavenet.ops import mu_law_encode, mu_law_decode
from IPython.display import Audio
# %matplotlib inline
# +
tf.reset_default_graph()
batch_size = 1
filter_width = 3
n_stack = 2
max_dilation = 10
dilations = [2 ** i for j in range(n_stack) for i in range(max_dilation)]
residual_channels, dilation_channels, skip_channels = 128, 128, 256
use_biases = True
quantization_channels = 256
gc_cardinality = None
gc_channels = None
scalar_input = False
initial_filter_width = filter_width
net = WaveNetModel(batch_size=batch_size,
dilations=dilations,
filter_width=filter_width,
scalar_input=scalar_input,
initial_filter_width=initial_filter_width,
residual_channels=residual_channels,
dilation_channels=dilation_channels,
quantization_channels=quantization_channels,
skip_channels=skip_channels,
global_condition_channels=gc_channels,
global_condition_cardinality=gc_cardinality,
use_biases=use_biases,
local_condition_channels=1)
gen_num = tf.placeholder(tf.int32)
input_batch = tf.placeholder(tf.float32)
lc_batch = tf.placeholder(tf.float32)
ml_encoded = mu_law_encode(input_batch, quantization_channels)
encoded = net._one_hot(ml_encoded)
raw_output = net.create_network(encoded, lc_batch, None)
out = tf.reshape(raw_output, [-1, quantization_channels])
proba = tf.cast(tf.nn.softmax(tf.cast(out, tf.float64)), tf.float32)
# loss = net.loss(input_placeholder, None, None)
# optimizer = tf.train.AdamOptimizer(0.001)
# optim = optimizer.minimize(loss, var_list=tf.trainable_variables())
# For generation
generation_batch_size = 1
sample_placeholder = tf.placeholder(tf.int32)
lc_placeholder = tf.placeholder(tf.float32)
gen_num = tf.placeholder(tf.int32)
next_sample_prob, layers_out, qs = \
net.predict_proba_incremental(sample_placeholder, gen_num, batch_size=generation_batch_size,
local_condition=lc_placeholder)
initial = tf.placeholder(tf.float32)
others = tf.placeholder(tf.float32)
update_q_ops = net.create_update_q_ops(qs, initial, others, gen_num, batch_size=generation_batch_size)
var_q = net.get_vars_q()
print("created.")
# -
src, _ = librosa.load("voice.wav", sr=16000)
src = src[:len(src)//4]
n_samples = len(src)
src = src.reshape(-1, 1)
src = np.pad(src, [[net.receptive_field, 0], [0, 0]],'constant')
# +
sess_config = tf.ConfigProto(
device_count = {'GPU': 0}
)
with tf.Session(config=sess_config) as sess:
sess.run(tf.global_variables_initializer())
_lc = src.reshape(1, -1, 1)
result, _encoded = sess.run([proba, ml_encoded],
feed_dict={input_batch:src, lc_batch:_lc})
_encoded = _encoded.reshape(batch_size, -1)
result = np.argmax(result, axis=-1)
sess.run(tf.variables_initializer(var_q))
t = time()
samples= []
for j in range(net.receptive_field-1):
feed_dict = {sample_placeholder:_encoded[:,j], lc_placeholder:[[0]], gen_num:j}
prob, _layers = sess.run([next_sample_prob, layers_out], feed_dict=feed_dict)
sess.run(update_q_ops, feed_dict={initial:_layers[0], others:np.array(_layers[1:]), gen_num:j})
for j in range(net.receptive_field-1, _encoded.shape[-1]):
feed_dict = {sample_placeholder:_encoded[:,j],
lc_placeholder:_lc[:,j],
gen_num:j}
prob, _layers = sess.run([next_sample_prob, layers_out], feed_dict=feed_dict)
sess.run(update_q_ops, feed_dict={initial:_layers[0], others:np.array(_layers[1:]), gen_num:j})
sample = np.argmax(prob, axis=-1)
samples.append(sample)
samples = np.array(samples).T
print("elapsed:", time()-t)
print("result:", result)
print("generated samples:", samples)
print("difference between result and samples:", np.abs(result-samples).sum())
# -
| notebook/eval model.ipynb |
# ## Sound Recognition Project Introduction
#
#
#
# ## Development Environment
#
# * TensorFlow version: 2.0+
# * keras
# * sklearn
# * librosa
#
# ## Download the Data
#
#
#
# +
# !wget http://tianchi-competition.oss-cn-hangzhou.aliyuncs.com/531887/train_sample.zip
# !wget http://tianchi-competition.oss-cn-hangzhou.aliyuncs.com/531887/test_a.zip
# !unzip -qq train_sample.zip
# !\rm train_sample.zip
# !unzip -qq test_a.zip
# !\rm test_a.zip
# -
# Install the audio-processing dependency
# !pip install librosa --user
# +
# Basic libraries
import os
import glob
import librosa
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Flatten, Dense, MaxPool2D, Dropout
from tensorflow.keras.utils import to_categorical
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
# -
# ## Data Preprocessing
#
# Feature extraction and building the dataset
#
#
feature = []
label = []
# Build the class labels: each class is mapped to a different integer.
label_dict = {'aloe': 0, 'burger': 1, 'cabbage': 2,'candied_fruits':3, 'carrots': 4, 'chips':5,
'chocolate': 6, 'drinks': 7, 'fries': 8, 'grapes': 9, 'gummies': 10, 'ice-cream':11,
'jelly': 12, 'noodles': 13, 'pickles': 14, 'pizza': 15, 'ribs': 16, 'salmon':17,
'soup': 18, 'wings': 19}
label_dict_inv = {v:k for k,v in label_dict.items()}
from tqdm import tqdm
def extract_features(parent_dir, sub_dirs, max_file=10, file_ext="*.wav"):
    c = 0
    label, feature = [], []
    for sub_dir in sub_dirs:
        for fn in tqdm(glob.glob(os.path.join(parent_dir, sub_dir, file_ext))[:max_file]): # iterate over (up to max_file) files per class
            # segment_log_specgrams, segment_labels = [], []
            #sound_clip,sr = librosa.load(fn)
            #print(fn)
            label_name = fn.split('/')[-2]
            label.extend([label_dict[label_name]])
            X, sample_rate = librosa.load(fn,res_type='kaiser_fast')
            mels = np.mean(librosa.feature.melspectrogram(y=X,sr=sample_rate).T,axis=0) # compute the mel spectrogram and use its time-average as the feature
            feature.extend([mels])
    return [feature, label]
# +
# Change the directories to match your own setup
parent_dir = './train_sample/'
save_dir = "./"
folds = sub_dirs = np.array(['aloe','burger','cabbage','candied_fruits',
'carrots','chips','chocolate','drinks','fries',
'grapes','gummies','ice-cream','jelly','noodles','pickles',
'pizza','ribs','salmon','soup','wings'])
# Extract the features and the class labels
temp = extract_features(parent_dir,sub_dirs,max_file=100)
# -
temp = np.array(temp)
data = temp.transpose()
# +
# Features
X = np.vstack(data[:, 0])
# Labels
Y = np.array(data[:, 1])
print('Shape of X:', X.shape)
print('Shape of Y:', Y.shape)
# -
# In Keras, to_categorical converts a class vector of integers into a binary (one-hot) matrix representation
Y = to_categorical(Y)
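# Quick illustrative check (not in the original notebook): to_categorical turns integer
# labels into one-hot rows, e.g. [0, 1, 2] -> [[1,0,0], [0,1,0], [0,0,1]]
print(to_categorical([0, 1, 2]))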
'''Final data'''
print(X.shape)
print(Y.shape)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state = 1, stratify=Y)
print('Training set size:', len(X_train))
print('Test set size:', len(X_test))
X_train = X_train.reshape(-1, 16, 8, 1)
X_test = X_test.reshape(-1, 16, 8, 1)
# ## Build the CNN
#
# +
model = Sequential()
# Input shape
input_dim = (16, 8, 1)
model.add(Conv2D(64, (3, 3), padding = "same", activation = "tanh", input_shape = input_dim)) # convolution layer
model.add(MaxPool2D(pool_size=(2, 2))) # max pooling
model.add(Conv2D(128, (3, 3), padding = "same", activation = "tanh")) # convolution layer
model.add(MaxPool2D(pool_size=(2, 2))) # max pooling
model.add(Dropout(0.1))
model.add(Flatten()) # flatten
model.add(Dense(1024, activation = "tanh"))
model.add(Dense(20, activation = "softmax")) # output layer: 20 units giving the probabilities of the 20 classes
# Compile the model: set the loss function, the optimizer and the evaluation metric
model.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
# -
model.summary()
# Train the model
model.fit(X_train, Y_train, epochs = 20, batch_size = 15, validation_data = (X_test, Y_test))
# +
# Prediction
def extract_features(test_dir, file_ext="*.wav"):
    feature = []
    for fn in tqdm(glob.glob(os.path.join(test_dir, file_ext))[:]): # iterate over all files in the test set
        X, sample_rate = librosa.load(fn,res_type='kaiser_fast')
        mels = np.mean(librosa.feature.melspectrogram(y=X,sr=sample_rate).T,axis=0) # compute the mel spectrogram and use its time-average as the feature
        feature.extend([mels])
    return feature
# -
X_test = extract_features('./test_a/')
X_test = np.vstack(X_test)
predictions = model.predict(X_test.reshape(-1, 16, 8, 1))
# +
preds = np.argmax(predictions, axis = 1)
preds = [label_dict_inv[x] for x in preds]
path = glob.glob('./test_a/*.wav')
result = pd.DataFrame({'name':path, 'label': preds})
result['name'] = result['name'].apply(lambda x: x.split('/')[-1])
result.to_csv('submit.csv',index=None)
# -
# !ls ./test_a/*.wav | wc -l
# !wc -l submit.csv
| 1.baseline.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import re
import numpy as np
import collections
from sklearn import metrics
from sklearn.model_selection import train_test_split  # cross_validation was removed in newer scikit-learn
import tensorflow as tf
import pandas as pd
from unidecode import unidecode
from sklearn.preprocessing import LabelEncoder
from tqdm import tqdm
import time
import malaya
# +
tokenizer = malaya.preprocessing._SocialTokenizer().tokenize
rules_normalizer = malaya.texts._tatabahasa.rules_normalizer
def is_number_regex(s):
if re.match("^\d+?\.\d+?$", s) is None:
return s.isdigit()
return True
def detect_money(word):
if word[:2] == 'rm' and is_number_regex(word[2:]):
return True
else:
return False
def preprocessing(string):
tokenized = tokenizer(unidecode(string))
tokenized = [malaya.stem.naive(w) for w in tokenized]
tokenized = [w.lower() for w in tokenized if len(w) > 1]
tokenized = [rules_normalizer.get(w, w) for w in tokenized]
tokenized = ['<NUM>' if is_number_regex(w) else w for w in tokenized]
tokenized = ['<MONEY>' if detect_money(w) else w for w in tokenized]
return tokenized
# +
def build_dataset(words, n_words):
count = [['GO', 0], ['PAD', 1], ['EOS', 2], ['UNK', 3]]
counter = collections.Counter(words).most_common(n_words)
count.extend(counter)
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
        index = dictionary.get(word, 3)
        if index == 3:  # 3 is the UNK id
            unk_count += 1
        data.append(index)
    count[3][1] = unk_count
reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reversed_dictionary
def str_idx(corpus, dic, maxlen, UNK = 3):
X = np.zeros((len(corpus), maxlen))
for i in range(len(corpus)):
for no, k in enumerate(corpus[i][:maxlen][::-1]):
X[i, -1 - no] = dic.get(k, UNK)
return X
# -
preprocessing('kerajaan sebenarnya sangat bencikan rakyatnya, minyak naik dan segalanya jd')
# +
import json
with open('tokenization.json') as fopen:
dataset = json.load(fopen)
texts = dataset['texts']
labels = dataset['labels']
del dataset
# -
with open('sentiment-dictionary.json') as fopen:
d = json.load(fopen)
dictionary = d['dictionary']
rev_dictionary = d['reverse_dictionary']
# +
def position_encoding(inputs):
T = tf.shape(inputs)[1]
repr_dim = inputs.get_shape()[-1].value
pos = tf.reshape(tf.range(0.0, tf.to_float(T), dtype=tf.float32), [-1, 1])
i = np.arange(0, repr_dim, 2, np.float32)
denom = np.reshape(np.power(10000.0, i / repr_dim), [1, -1])
enc = tf.expand_dims(tf.concat([tf.sin(pos / denom), tf.cos(pos / denom)], 1), 0)
return tf.tile(enc, [tf.shape(inputs)[0], 1, 1])
def layer_norm(inputs, epsilon=1e-8):
mean, variance = tf.nn.moments(inputs, [-1], keep_dims=True)
normalized = (inputs - mean) / (tf.sqrt(variance + epsilon))
params_shape = inputs.get_shape()[-1:]
gamma = tf.get_variable('gamma', params_shape, tf.float32, tf.ones_initializer())
beta = tf.get_variable('beta', params_shape, tf.float32, tf.zeros_initializer())
return gamma * normalized + beta
def Attention(inputs, num_units, num_heads = 8, activation = None):
inputs = tf.layers.dropout(inputs, 0.3, training=True)
T_q = T_k = tf.shape(inputs)[1]
Q_K_V = tf.layers.dense(inputs, 3*num_units, activation)
Q, K, V = tf.split(Q_K_V, 3, -1)
Q_ = tf.concat(tf.split(Q, num_heads, axis=2), 0)
K_ = tf.concat(tf.split(K, num_heads, axis=2), 0)
V_ = tf.concat(tf.split(V, num_heads, axis=2), 0)
align = tf.matmul(Q_, K_, transpose_b=True)
align *= tf.rsqrt(tf.to_float(K_.get_shape()[-1].value))
paddings = tf.fill(tf.shape(align), float('-inf'))
lower_tri = tf.ones([T_q, T_k])
lower_tri = tf.linalg.LinearOperatorLowerTriangular(lower_tri).to_dense()
masks = tf.tile(tf.expand_dims(lower_tri,0), [tf.shape(align)[0],1,1])
align = tf.where(tf.equal(masks, 0), paddings, align)
align = tf.nn.softmax(align)
alignments = tf.transpose(align, [0, 2, 1])
x = tf.matmul(align, V_)
x = tf.concat(tf.split(x, num_heads, axis=0), 2)
x += inputs
x = layer_norm(x)
return x, alignments
class Model:
def __init__(self, size_layer, embed_size, dict_size, dimension_output, learning_rate = 1e-3):
self.X = tf.placeholder(tf.int32, [None, None])
self.Y = tf.placeholder(tf.int32, [None])
encoder_embeddings = tf.Variable(tf.random_uniform([dict_size, embed_size], -1, 1))
x = tf.nn.embedding_lookup(encoder_embeddings, self.X)
x += position_encoding(x)
x = tf.layers.dropout(x, 0.3, training=True)
x, self.alignments = Attention(x, size_layer)
self.logits_seq = tf.layers.dense(x, dimension_output)
self.logits_seq = tf.identity(self.logits_seq, name = 'logits_seq')
self.logits = self.logits_seq[:,-1]
self.logits = tf.identity(self.logits, name = 'logits')
self.cost = tf.reduce_mean(
tf.nn.sparse_softmax_cross_entropy_with_logits(
logits = self.logits, labels = self.Y
)
)
self.optimizer = tf.train.AdamOptimizer(learning_rate).minimize(
self.cost
)
correct_pred = tf.equal(
tf.argmax(self.logits, 1, output_type = tf.int32), self.Y
)
self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
self.attention = tf.identity(tf.reduce_mean(self.alignments[0], 1), name = 'alphas')
# +
size_layer = 256
dimension_output = 2
learning_rate = 1e-4
batch_size = 32
maxlen = 100
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Model(
size_layer,
size_layer,
len(dictionary),
dimension_output,
learning_rate,
)
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver(tf.trainable_variables())
saver.save(sess, 'only-attention/model.ckpt')
# -
strings = ','.join(
[
n.name
for n in tf.get_default_graph().as_graph_def().node
if ('Variable' in n.op
or 'Placeholder' in n.name
or 'logits' in n.name
or 'alphas' in n.name)
and 'Adam' not in n.name
and '_power' not in n.name
]
)
strings.split(',')
tf.trainable_variables()
train_X, test_X, train_Y, test_Y = train_test_split(
texts, labels, test_size = 0.2
)
# +
from tqdm import tqdm
import time
EARLY_STOPPING, CURRENT_CHECKPOINT, CURRENT_ACC, EPOCH = 3, 0, 0, 0
while True:
lasttime = time.time()
if CURRENT_CHECKPOINT == EARLY_STOPPING:
print('break epoch:%d\n' % (EPOCH))
break
train_acc, train_loss, test_acc, test_loss = 0, 0, 0, 0
pbar = tqdm(
range(0, len(train_X), batch_size), desc = 'train minibatch loop'
)
for i in pbar:
batch_x = str_idx(train_X[i : min(i + batch_size, len(train_X))], dictionary, maxlen)
batch_y = train_Y[i : min(i + batch_size, len(train_X))]
batch_x_expand = np.expand_dims(batch_x,axis = 1)
acc, cost, _ = sess.run(
[model.accuracy, model.cost, model.optimizer],
feed_dict = {
model.Y: batch_y,
model.X: batch_x
},
)
assert not np.isnan(cost)
train_loss += cost
train_acc += acc
pbar.set_postfix(cost = cost, accuracy = acc)
pbar = tqdm(range(0, len(test_X), batch_size), desc = 'test minibatch loop')
for i in pbar:
batch_x = str_idx(test_X[i : min(i + batch_size, len(test_X))], dictionary, maxlen)
batch_y = test_Y[i : min(i + batch_size, len(test_X))]
batch_x_expand = np.expand_dims(batch_x,axis = 1)
acc, cost = sess.run(
[model.accuracy, model.cost],
feed_dict = {
model.Y: batch_y,
model.X: batch_x
},
)
test_loss += cost
test_acc += acc
pbar.set_postfix(cost = cost, accuracy = acc)
train_loss /= len(train_X) / batch_size
train_acc /= len(train_X) / batch_size
test_loss /= len(test_X) / batch_size
test_acc /= len(test_X) / batch_size
if test_acc > CURRENT_ACC:
print(
'epoch: %d, pass acc: %f, current acc: %f'
% (EPOCH, CURRENT_ACC, test_acc)
)
CURRENT_ACC = test_acc
CURRENT_CHECKPOINT = 0
else:
CURRENT_CHECKPOINT += 1
print('time taken:', time.time() - lasttime)
print(
'epoch: %d, training loss: %f, training acc: %f, valid loss: %f, valid acc: %f\n'
% (EPOCH, train_loss, train_acc, test_loss, test_acc)
)
EPOCH += 1
# -
saver.save(sess, 'only-attention/model.ckpt')
text = preprocessing('kerajaan sebenarnya sangat bencikan rakyatnya, minyak naik dan segalanya')
new_vector = str_idx([text], dictionary, len(text))
sess.run(tf.nn.softmax(model.logits), feed_dict={model.X:new_vector})
text = preprocessing('kerajaan sebenarnya sangat bencikan rakyatnya, minyak naik dan segalanya')
new_vector = str_idx([text], dictionary, len(text))
sess.run([tf.nn.softmax(model.logits_seq), model.attention], feed_dict={model.X:new_vector})
# +
real_Y, predict_Y = [], []
pbar = tqdm(
range(0, len(test_X), batch_size), desc = 'validation minibatch loop'
)
for i in pbar:
batch_x = str_idx(test_X[i : min(i + batch_size, len(test_X))], dictionary, maxlen)
batch_y = test_Y[i : min(i + batch_size, len(test_X))]
predict_Y += np.argmax(
sess.run(
model.logits, feed_dict = {model.X: batch_x, model.Y: batch_y}
),
1,
).tolist()
real_Y += batch_y
# -
print(
metrics.classification_report(
real_Y, predict_Y, target_names = ['negative', 'positive']
)
)
def freeze_graph(model_dir, output_node_names):
if not tf.gfile.Exists(model_dir):
raise AssertionError(
"Export directory doesn't exists. Please specify an export "
'directory: %s' % model_dir
)
checkpoint = tf.train.get_checkpoint_state(model_dir)
input_checkpoint = checkpoint.model_checkpoint_path
absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1])
output_graph = absolute_model_dir + '/frozen_model.pb'
clear_devices = True
with tf.Session(graph = tf.Graph()) as sess:
saver = tf.train.import_meta_graph(
input_checkpoint + '.meta', clear_devices = clear_devices
)
saver.restore(sess, input_checkpoint)
output_graph_def = tf.graph_util.convert_variables_to_constants(
sess,
tf.get_default_graph().as_graph_def(),
output_node_names.split(','),
)
with tf.gfile.GFile(output_graph, 'wb') as f:
f.write(output_graph_def.SerializeToString())
print('%d ops in the final graph.' % len(output_graph_def.node))
freeze_graph('only-attention', strings)
def load_graph(frozen_graph_filename):
with tf.gfile.GFile(frozen_graph_filename, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
tf.import_graph_def(graph_def)
return graph
g = load_graph('only-attention/frozen_model.pb')
x = g.get_tensor_by_name('import/Placeholder:0')
logits_seq = g.get_tensor_by_name('import/logits_seq:0')
logits = g.get_tensor_by_name('import/logits:0')
alphas = g.get_tensor_by_name('import/alphas:0')
test_sess = tf.InteractiveSession(graph = g)
result = test_sess.run([logits, alphas, logits_seq], feed_dict = {x: new_vector})
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
news_string = 'Kerajaan juga perlu prihatin dan peka terhadap nasib para nelayan yang bergantung rezeki sepenuhnya kepada sumber hasil laut. Malah, projek ini memberikan kesan buruk yang berpanjangan kepada alam sekitar selain menjejaskan mata pencarian para nelayan'
text = preprocessing(news_string)
new_vector = str_idx([text], dictionary, len(text))
result = test_sess.run([tf.nn.softmax(logits), alphas, tf.nn.softmax(logits_seq)], feed_dict = {x: new_vector})
plt.figure(figsize = (15, 7))
labels = [word for word in text]
val = [val for val in result[1]]
plt.bar(np.arange(len(labels)), val)
plt.xticks(np.arange(len(labels)), labels, rotation = 'vertical')
plt.title('negative %f positive %f' % (result[0][0,0], result[0][0,1]))
plt.show()
| session/sentiment/self-attention.ipynb |
# ---
# jupyter:
# jupytext:
# formats: ipynb,md
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import geopandas
import numpy
import matplotlib.pyplot as plt
import geoplanar
from shapely.geometry import box, Polygon
# ## Planar Enforcement Violation: One polygon overlapping another polygon
# +
p1 = box(0,0,10,10)
p2 = box(8,4, 12,6)
gdf = geopandas.GeoDataFrame(geometry=[p1,p2])
gdf.plot(edgecolor='k')
# -
geoplanar.is_overlapping(gdf)
gdf.geometry[0]
gdf.geometry[1]
gdf = geoplanar.trim_overlaps(gdf)
geoplanar.is_overlapping(gdf)
gdf.geometry[0]
gdf.geometry[1]
gdf.area
# ## Default trims the largest of the two overlapping polygons
#
# To have the correction apply the trim to the smaller of the two polygons, set `largest=False`:
# +
p1 = box(0,0,10,10)
p2 = box(8,4, 12,6)
gdf = geopandas.GeoDataFrame(geometry=[p1,p2])
gdf.plot(edgecolor='k')
# -
gdf = geoplanar.trim_overlaps(gdf, largest=False)
gdf.plot(edgecolor='k')
gdf.geometry[0]
gdf.geometry[1]
gdf.area
# ## Planar Enforcement Violation: One polygon overlapping two
# As always, care must be taken when carrying out a planar correction, as the result may not be what is desired:
# +
p1 = box(0,0,10,10)
p2 = box(10,0, 20,10)
p3 = box(8,4, 12,6)
gdf = geopandas.GeoDataFrame(geometry=[p1,p2,p3])
gdf.plot(edgecolor='k')
# -
gdf1 = geoplanar.trim_overlaps(gdf, largest=False) # trim the smallest feature of an intersecting pair
gdf1.area
gdf1.plot(edgecolor='k')
# +
p1 = box(0,0,10,10)
p2 = box(10,0, 20,10)
p3 = box(8,4, 12,6)
gdf = geopandas.GeoDataFrame(geometry=[p1,p2,p3])
gdf2 = geoplanar.trim_overlaps(gdf)
# -
gdf2.geometry
gdf2.area
gdf2.plot(edgecolor='k')
| notebooks/overlaps.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="2_l6TVZdgsOl"
# # Building a simple Gridworld v2 Environment
# + id="IB4O1CBZtogr"
import gym
import numpy as np
# + id="fLP8mGdPtrxQ"
class GridworldV2Env(gym.Env):
def __init__(self, step_cost=-0.2, max_ep_length=500, explore_start=False):
self.index_to_coordinate_map = {
"0": [0, 0],
"1": [0, 1],
"2": [0, 2],
"3": [0, 3],
"4": [1, 0],
"5": [1, 1],
"6": [1, 2],
"7": [1, 3],
"8": [2, 0],
"9": [2, 1],
"10": [2, 2],
"11": [2, 3],
}
self.coordinate_to_index_map = {
str(val): int(key) for key, val in self.index_to_coordinate_map.items()
}
self.map = np.zeros((3, 4))
self.observation_space = gym.spaces.Discrete(1)
self.distinct_states = [str(i) for i in range(12)]
self.goal_coordinate = [0, 3]
self.bomb_coordinate = [1, 3]
self.wall_coordinate = [1, 1]
self.goal_state = self.coordinate_to_index_map[str(self.goal_coordinate)] # 3
self.bomb_state = self.coordinate_to_index_map[str(self.bomb_coordinate)] # 7
self.map[self.goal_coordinate[0]][self.goal_coordinate[1]] = 1
self.map[self.bomb_coordinate[0]][self.bomb_coordinate[1]] = -1
self.map[self.wall_coordinate[0]][self.wall_coordinate[1]] = 2
self.exploring_starts = explore_start
self.state = 8
self.done = False
self.max_ep_length = max_ep_length
self.steps = 0
self.step_cost = step_cost
self.action_space = gym.spaces.Discrete(4)
self.action_map = {"UP": 0, "RIGHT": 1, "DOWN": 2, "LEFT": 3}
self.possible_actions = list(self.action_map.values())
def reset(self):
self.done = False
self.steps = 0
self.map = np.zeros((3, 4))
self.map[self.goal_coordinate[0]][self.goal_coordinate[1]] = 1
self.map[self.bomb_coordinate[0]][self.bomb_coordinate[1]] = -1
self.map[self.wall_coordinate[0]][self.wall_coordinate[1]] = 2
if self.exploring_starts:
self.state = np.random.choice([0, 1, 2, 4, 6, 8, 9, 10, 11])
else:
self.state = 8
return self.state
def get_next_state(self, current_position, action):
next_state = self.index_to_coordinate_map[str(current_position)].copy()
if action == 0 and next_state[0] != 0 and next_state != [2, 1]:
# Move up
next_state[0] -= 1
elif action == 1 and next_state[1] != 3 and next_state != [1, 0]:
# Move right
next_state[1] += 1
elif action == 2 and next_state[0] != 2 and next_state != [0, 1]:
# Move down
next_state[0] += 1
elif action == 3 and next_state[1] != 0 and next_state != [1, 2]:
# Move left
next_state[1] -= 1
else:
pass
return self.coordinate_to_index_map[str(next_state)]
def step(self, action):
assert action in self.possible_actions, f"Invalid action:{action}"
current_position = self.state
next_state = self.get_next_state(current_position, action)
self.steps += 1
if next_state == self.goal_state:
reward = 1
self.done = True
elif next_state == self.bomb_state:
reward = -1
self.done = True
else:
reward = self.step_cost
if self.steps == self.max_ep_length:
self.done = True
self.state = next_state
return next_state, reward, self.done
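# + [markdown]
# A quick smoke test (not from the original notebook): roll out one episode with uniformly random actions to check that `reset()` and `step()` behave as expected.

# +
env = GridworldV2Env()
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    action = np.random.choice(env.possible_actions)  # uniform random policy
    obs, reward, done = env.step(action)
    total_reward += reward
print("episode finished in", env.steps, "steps with return", total_reward)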
# + [markdown] id="m56HRuQatzsF"
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="5OJ01_h1tzsI" executionInfo={"status": "ok", "timestamp": 1638441388194, "user_tz": -330, "elapsed": 3753, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}} outputId="db7eed13-feb9-4a28-88d5-cd8e5398bc76"
# !pip install -q watermark
# %reload_ext watermark
# %watermark -a "Sparsh A." -m -iv -u -t -d
# + [markdown] id="eGbBJ6D9tzsJ"
# ---
# + [markdown] id="EXVMZ6h8tzsL"
# **END**
| _notebooks/2022-01-22-gridworld-v2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="3PturmDN40NY"
import numpy as np
import os
from time import time
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from keras.preprocessing.image import load_img, array_to_img
from keras.models import Model
from keras.layers import Conv2D, SeparableConv2D
from keras.layers import BatchNormalization, Activation, advanced_activations
from keras.layers import Input, MaxPooling2D, Add
from keras.layers import Conv2DTranspose, UpSampling2D
# + id="tLtNuCH7l2rx"
TRAIN_MODE = 1
EPOCH = 1000
BATCH_SIZE = 32
# + id="r25FCPYv41Kd"
with open('/content/drive/MyDrive/IDEC/arrays.npy', 'rb') as f:
X_data = np.load(f)
Y_data = np.load(f)
# + colab={"base_uri": "https://localhost:8080/"} id="bf3al1tr45EE" outputId="75636429-9c67-4db6-b86d-ccb9f8070107"
X_train, X_test, Y_train, Y_test = train_test_split(X_data,
Y_data,
test_size = 0.2)
print("Train Input Data :", X_train.shape)
print("Train Output Data :", Y_train.shape)
print("Test Input Data :", X_test.shape)
print("Test Output Data :", Y_test.shape)
# + colab={"base_uri": "https://localhost:8080/", "height": 449} id="KOFvl5al5EW0" outputId="ef65fe73-21c1-45f1-b950-0b903b6aa05c"
# Image show after Scaling
flg, spot = plt.subplots(1,2,figsize=(15,10))
# Color Image
img = array_to_img(X_data[1154])
spot[0].imshow(img)
# Edge Image
img = array_to_img(Y_data[1154])
spot[1].imshow(img)
# + [markdown] id="x5fQo4xo69Sd"
# Design Network
# + id="tKMPgs7X5Ewd"
# U-Net Encoder
def build_encoder():
inputs = Input(shape=(160, 160, 3))
x = Conv2D(filters=32,
kernel_size=3,
strides=2,
padding='same')(inputs)
    x = BatchNormalization()(x) # Normalization: mean = 0, deviation = 1
x = Activation('relu')(x)
jump = x
for filters in [64, 128, 256]:
#x = advanced_activations.LeakyReLU(alpha=0.2)(x)
# Conv1
x = Activation('relu')(x)
x = SeparableConv2D(filters,
kernel_size=3,
padding='same')(x)
x = BatchNormalization()(x)
# Conv2
x = Activation('relu')(x)
x = SeparableConv2D(filters,
kernel_size=3,
padding='same')(x)
x = BatchNormalization()(x)
# Pooling
x = MaxPooling2D(pool_size=3,
strides=2,
padding='same')(x)
# Residual
residual = Conv2D(filters,
kernel_size=1,
strides=2,
padding='same')(jump)
x = Add()([x, residual])
jump = x
return inputs, x
# + id="JMb-WpfR7ERb"
# U-Net Decoder
def build_decoder(inputs, x):
# Residual
jump = x
# De-Conv
for filters in [256, 128, 64, 32]:
# Conv1
x = Activation('relu')(x)
x = Conv2DTranspose(filters,
kernel_size=3,
padding='same')(x)
x = BatchNormalization()(x)
# Conv2
x = Activation('relu')(x)
x = Conv2DTranspose(filters,
kernel_size=3,
padding='same')(x)
x = BatchNormalization()(x)
x = UpSampling2D(size=2)(x)
# Residual
residual = UpSampling2D(size=2)(jump)
residual = Conv2D(filters,
kernel_size=1,
padding='same')(residual)
x = Add()([x, residual])
jump = x
outputs = Conv2D(filters=3,
kernel_size=3,
activation='softmax',
padding='same')(x)
model = Model(inputs, outputs)
return model
# + id="TOt41x0LDjOR"
inputs, link = build_encoder()
model = build_decoder(inputs, link)
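# Optionally print the assembled encoder-decoder architecture to verify the layer
# shapes before training (simple sanity check; `model` is the Keras Model built above).
model.summary()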
# + [markdown] id="b3ovK_gdoN7_"
# Train Model
# + colab={"base_uri": "https://localhost:8080/"} id="Swza4Pa0fmVG" outputId="15cc9d0b-1781-4b4d-819d-e6e1772770fd"
# Train Model
if TRAIN_MODE:
model.compile(optimizer='rmsprop',
loss='sparse_categorical_crossentropy')
print("Start Train\n")
begin = time()
model.fit(X_train, Y_train, BATCH_SIZE, EPOCH, verbose=1)
end = time()
print("Learning TIme : {:.2f}".format(end-begin))
model.save_weights('segment.h5')
else:
model.load_weights('segment.h5')
# + [markdown] id="oevjOqGHopKu"
# Test Model
# + colab={"base_uri": "https://localhost:8080/", "height": 333} id="dLXTw40OoEMe" outputId="3dc0b764-8dc9-4180-c0df-0715474117f8"
which = 232
fig, spot = plt.subplots(1, 3, figsize=(15,8))
img = array_to_img(X_test[which])
spot[0].imshow(img)
img = array_to_img(Y_test[which])
spot[1].imshow(img)
pred = model.predict(X_test, verbose=1)
mask = np.argmax(pred[which], axis=2)
mask = np.expand_dims(mask, axis=2)
img = array_to_img(mask)
spot[2].imshow(img)
| 03-TrainUnet-1000.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.6.12 64-bit (''azure_automl'': conda)'
# name: python3
# ---
# ###### NB! This (InteractiveLoginAuthentication) is only needed the first time you run the notebook; once ws_config is written, use the later cell in the notebook that just reads that file
# import repackage
# repackage.add("../azure-enterprise-scale-ml/esml/common/")
# from azureml.core import Workspace
# from azureml.core.authentication import InteractiveLoginAuthentication
# from esml import ESMLDataset, ESMLProject
#
# p = ESMLProject()
# p.dev_test_prod="dev"
# auth = InteractiveLoginAuthentication(tenant_id = p.tenant)
# ws, config_name = p.authenticate_workspace_and_write_config(auth)
# ###### NB!
# # ESML - accelerator: Batch scoring pipeline
# - 1) `AutoMap datalake` & init ESML project
# - 2) `Get earlier trained model`
# - 3) `Score with GOLD_TEST` and calculate ML-performance
#
#
#
# +
import repackage
repackage.add("../azure-enterprise-scale-ml/esml/common/")
from esml import ESMLDataset, ESMLProject
import pandas as pd
p = ESMLProject() # Will search in ROOT for your copied SETTINGS folder '../../../settings', you should copy template settings from '../settings'
p.ws = p.get_workspace_from_config() #2) Load DEV or TEST or PROD Azure ML Studio workspace
# +
#p.describe()
# -
try:
print(p.GoldTest.to_pandas_dataframe().head()) # gold_test_1 = Dataset.get_by_name(ws, name=p.dataset_gold_test_name_azure)
except:
print ("you need to have splitted GOLD dataset, GoldTest need to exist. Change next cell from MARKDOWN, to CODE, and run that. Try this again... ")
# p.inference_mode = False
# datastore = p.init(ws)
#
# esml_dataset = p.DatasetByName("ds01_titanic")
# df_bronze = esml_dataset.Bronze.to_pandas_dataframe()
# p.save_silver(esml_dataset,df_bronze)
# df = esml_dataset.Silver.to_pandas_dataframe()
# gold_train = p.save_gold(df)
# label = "Survived"
# train_set, validate_set, test_set = p.split_gold_3(0.6,label)
# # TEST_SET scoring: CLASSIFICATION
# - Auto-registers the TEST scoring as TAGS on the GOLD_TEST dataset in Azure ML Studio
# - Runs either locally on the build server, or via a pipeline.
# from azureml.train.automl import AutoMLConfig
#
# automl_config = AutoMLConfig(task = 'classification', # 4) Override the ENV config, for model(that inhertits from enterprise DEV_TEST_PROD config baseline)
# primary_metric = "Accuracy", # # Note: Regression(MAPE) are not possible in AutoML
# compute_target = None,
# training_data = None, # is 'train_6' pandas dataframe, but as an Azure ML Dataset
# experiment_exit_score = '0.922', # DEMO purpose (0.308 for diabetes regression, 0.6 for classification titanic)
# label_column_name = "Survived"
# )
#
# automl_config.user_settings['label_column_name']
# my_def_of_what_model_is_better_function = lambda sklearn_model_new,sklearn_model_current : (sklearn_model_new > sklearn_model_current)
#
# def my_function(my_lambda):
# model_a_new = 5
# model_b_current = 2
# if(my_lambda(model_a_new,model_b_current)):
# print("Model A, newly trained, is better")
# else:
# print("Model B, Current, is better")
#
# my_function(my_def_of_what_model_is_better_function)
# source_best_run.tags
# source_best_run.id
# source_best_run.properties['predicted_cost']
# source_best_run.properties
label = p.active_model["label"]
try:
p.GoldTest.name
except:
p.connect_to_lake()
train_6, validate_set_2, test_set_2 = p.split_gold_3(0.6,label)
# + tags=[]
from baselayer_azure_ml import ESMLTestScoringFactory
label = p.active_model["label"]
auc,accuracy,f1, precision,recall,matrix,matthews, plt = ESMLTestScoringFactory(p).get_test_scoring_7_classification(label)
print("AUC:")
print(auc)
print()
print("Accuracy:")
print(accuracy)
print()
print("F1 Score:")
print(f1)
print()
print("Precision:")
print(precision)
print()
print("Recall:")
print(recall)
print()
print("Confusion Matrix:")
print(matrix)
print("matthews :")
print(matthews)
# -
# # END - CLASSIFICATION, TEST_SET scoring
# # 2) CLASSIFICATION - predict_proba
# +
# ESML specific start
source_best_run, fitted_model, experiment = p.get_best_model(p.ws)
X_test = p.GoldTest.to_pandas_dataframe() # X_test
# ESML end
from sklearn.metrics import mean_squared_error, r2_score,recall_score,average_precision_score,f1_score,roc_auc_score,accuracy_score,roc_curve,confusion_matrix
y_test = X_test.pop(label).to_frame() # y_test (true labels)
y_predict = fitted_model.predict(X_test) # y_predict (predicted labels)
y_predict_proba = fitted_model.predict_proba(X_test) # y_predict (predicted probabilities)
predict_proba = y_predict_proba[:, 1] # probabilities for the positive class only
auc = roc_auc_score(y_test, predict_proba)
fpr, tpr, thresholds = roc_curve(y_test, predict_proba)
accuracy, precision, recall, f1, matrix = \
accuracy_score(y_test, y_predict),\
average_precision_score(y_test, y_predict),\
recall_score(y_test, y_predict),\
f1_score(y_test,y_predict), \
confusion_matrix(y_test, y_predict)
# -
print("ROC AUC", auc)
# +
probs = y_predict_proba[:, 1].tolist() # positive. negative: [:, 0]
result = {'predict_survive': y_predict.tolist(), 'probability': probs}
df_res = pd.DataFrame.from_dict(result)
all_result = X_test.join(df_res)
all_result.head()
# -
| copy_my_subfolders_to_my_grandparent/notebook_demos/esml_classification_2_score_testset.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
df1 = pd.DataFrame()
df2 = pd.DataFrame()
df1['viewers'] = ["Sushmita", "Adam", "Benny", "Anurag"]
df2['users'] = ["Adam", "Anurag", "Benny", "Sushmita", "Apoorva"]
df1 = df1.assign(views = [31.2,17.9,265.23,42.47])
df2 = df2.assign(cost = [20,np.nan, 15, 2, 7])
# +
df1.head()
# -
df2.head()
df = df1.merge(df2, left_on="viewers", right_on="users", how="left")
df.head()
df = df.fillna(df.mean(numeric_only=True))  # fill the missing cost with the column mean (numeric columns only)
df['Gender'] = ["Female", "Male", "Male", "Female"]
df
df['Gender']=df['Gender'].map({"Female":"F","Male":"M"})
df
df.groupby('Gender')['cost'].sum()
df.set_index(['viewers'], inplace = True)
df.head()
df.loc[['Adam','Anurag'],['cost','views']]
# +
df=pd.DataFrame({'Currency': pd.Series(['USD','EUR','GBP']),'ValueInINR': pd.Series([70, 89, 99])})
df.head()
# -
import copy
df1 = df.copy(deep=True)
df1.head()
df.drop(['Currency'],axis=1)
pd.DataFrame.from_dict({'Currency': ['USD','EUR','GBP'],'ValueInINR':[70, 89, 99]})
| Chapter01/Data_Manipulation_Examples.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tutorial: `batch`
#
# `hs_process.batch` extends the functionality of the core modules of `hs_process` ([segment](api/hs_process.segment.html#hs_process.segment), [spatial_mod](api/hs_process.spatial_mod.html#hs_process.spatial_mod), and [spec_mod](api/hs_process.spec_mod.html#hs_process.spec_mod)) so that image processing can run via a *relatively straightforward* script without end user interaction.
#
# The overall goal of the `batch` module is to implement many post-processing steps across many datacubes via an easy-to-use API. It was designed to save all data products (intermediate or final) to disk. Any unwanted files must be manually deleted by the user.
#
# ***
#
# ## Sample data
# Sample imagery captured from a [Resonon](https://resonon.com/) Pika II VIS-NIR line scanning imager and ancillary sample files can be downloaded from this [link](https://drive.google.com/drive/folders/1KpOBB4-qghedVFd8ukQngXNwUit8PFy_?usp=sharing).
#
# Before trying this tutorial on your own machine, please download the [sample files](https://drive.google.com/drive/folders/1KpOBB4-qghedVFd8ukQngXNwUit8PFy_?usp=sharing) and place into a local directory of your choosing (and do not change the file names). Indicate the location of your sample files by modifying `data_dir`:
data_dir = r'F:\\nigo0024\Documents\hs_process_demo'
# ***
#
# ## Confirm your environment
#
# Before trying the tutorials, be sure `hs_process` and its dependencies are [properly installed](installation.html#). If you installed in a *virtual environment*, first check we are indeed using the Python instance that was installed with the virtual environment:
import sys
import hs_process
print('Python install location: {0}'.format(sys.executable))
print('Version: {0}'.format(hs_process.__version__))
# The *spec* folder that contains `python.exe` tells me that the activated Python instance is indeed in the `spec` environment, just as I intend. If you created a virtual environment, but your `python.exe` is not in the `envs\spec` directory, you either did not properly create your virtual environment or you are not pointing to the correct Python installation in your IDE (e.g., Spyder, Jupyter notebook, etc.).
#
# ***
#
# ## `batch.cube_to_spectra`
# Calculates the mean and standard deviation for each cube in `fname_list` and writes the result to a ".spec" file. [[API]](api/hs_process.batch.html#hs_process.batch.cube_to_spectra)
#
# **Note:** The following `batch` example builds on the results of the [`spatial_mod.crop_many_gdf` tutorial](tutorial_spatial_mod.html#spatial_mod.crop_many_gdf). Please complete the [`spatial_mod.crop_many_gdf`](tutorial_spatial_mod.html#spatial_mod.crop_many_gdf) example to be sure your directory (i.e., `base_dir`) is populated with multiple hyperspectral datacubes. The following example will be using datacubes located in the following directory: `F:\\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf`
#
# Load and initialize the `batch` module, checking to be sure the directory exists.
# +
import os
from hs_process import batch
base_dir = os.path.join(data_dir, 'spatial_mod', 'crop_many_gdf')
print(os.path.isdir(base_dir))
hsbatch = batch(base_dir, search_ext='.bip',
progress_bar=True) # searches for all files in ``base_dir`` with a ".bip" file extension
# -
# Use `batch.cube_to_spectra` to calculate the *mean* and *standard deviation* across all pixels for each of the datacubes in `base_dir`.
hsbatch.cube_to_spectra(base_dir=base_dir, write_geotiff=False, out_force=True)
# Use [`seaborn`](https://seaborn.pydata.org/index.html) to visualize the spectra of plots 1011, 1012, and 1013. Notice how ``hsbatch.io.name_plot`` is utilized to retrieve the plot ID, and how the *"history"* tag is referenced from the metadata to determine the number of pixels whose reflectance was averaged to create the mean spectra. Also remember that pixels across the original input image likely represent a combination of soil, vegetation, and shadow.
import seaborn as sns
import re
fname_list = [os.path.join(base_dir, 'cube_to_spec', 'Wells_rep2_20180628_16h56m_pika_gige_7_plot_1011-cube-to-spec-mean.spec'),
os.path.join(base_dir, 'cube_to_spec', 'Wells_rep2_20180628_16h56m_pika_gige_7_plot_1012-cube-to-spec-mean.spec'),
os.path.join(base_dir, 'cube_to_spec', 'Wells_rep2_20180628_16h56m_pika_gige_7_plot_1013-cube-to-spec-mean.spec')]
ax = None
for fname in fname_list:
hsbatch.io.read_spec(fname)
meta_bands = list(hsbatch.io.tools.meta_bands.values())
data = hsbatch.io.spyfile_spec.load().flatten() * 100
hist = hsbatch.io.spyfile_spec.metadata['history']
pix_n = re.search('<pixel number: (.*)>', hist).group(1)
if ax is None:
ax = sns.lineplot(x=meta_bands, y=data, label='Plot '+hsbatch.io.name_plot+' (n='+pix_n+')')
else:
ax = sns.lineplot(x=meta_bands, y=data, label='Plot '+hsbatch.io.name_plot+' (n='+pix_n+')', ax=ax)
ax.set_xlabel('Wavelength (nm)', weight='bold')
ax.set_ylabel('Reflectance (%)', weight='bold')
ax.set_title(r'API Example: `batch.cube_to_spectra`', weight='bold')
# ***
#
# ## `batch.segment_band_math`
# Batch processing tool to perform band math on multiple datacubes in the same way. `batch.segment_band_math` is typically used prior to `batch.segment_create_mask` to generate the images/directory required for the masking process. [[API]](api/hs_process.batch.html#hs_process.batch.segment_band_math)
#
# **Note:** The following `batch` example builds on the results of the [`spatial_mod.crop_many_gdf` tutorial](tutorial_spatial_mod.html#spatial_mod.crop_many_gdf). Please complete the [`spatial_mod.crop_many_gdf`](tutorial_spatial_mod.html#spatial_mod.crop_many_gdf) example to be sure your directory (i.e., `base_dir`) is populated with multiple hyperspectral datacubes. The following example will be using datacubes located in the following directory: `F:\\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf`
#
# Load and initialize the `batch` module, checking to be sure the directory exists.
# +
import os
from hs_process import batch
base_dir = os.path.join(data_dir, 'spatial_mod', 'crop_many_gdf')
print(os.path.isdir(base_dir))
hsbatch = batch(base_dir, search_ext='.bip',
progress_bar=True) # searches for all files in ``base_dir`` with a ".bip" file extension
# -
# Use `batch.segment_band_math` to compute the MCARI2 (Modified Chlorophyll Absorption Ratio Index Improved; Haboudane et al., 2004) spectral index for each of the datacubes in ``base_dir``. See the Harris Geospatial documentation for more information about the MCARI2 spectral index and references to other spectral indices.
folder_name = 'band_math_mcari2-800-670-550' # folder name can be modified to be more descriptive in what type of band math is being performed
method = 'mcari2' # must be one of "ndi", "ratio", "derivative", or "mcari2"
wl1 = 800
wl2 = 670
wl3 = 550
hsbatch.segment_band_math(base_dir=base_dir, folder_name=folder_name,
name_append='band-math', write_geotiff=True,
method=method, wl1=wl1, wl2=wl2, wl3=wl3,
plot_out=True, out_force=True)
# `batch.segment_band_math` creates a new folder in `base_dir` (in this case the new directory is `F:\\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\band_math_mcari2-800-670-550`), which contains several data products.
#
# The **first** is `band-math-stats.csv`: a spreadsheet containing summary statistics for each of the image cubes that were processed via `batch.segment_band_math`; stats include *pixel count*, *mean*, *standard deviation*, *median*, and *percentiles* across all image pixels.
# +
import pandas as pd
pd.read_csv(os.path.join(data_dir, 'spatial_mod', 'crop_many_gdf',
'band_math_mcari2-800-670-550', 'band-math-stats.csv')).head(5)
# -
# **Second** is a `geotiff` file for each of the image cubes after the band math processing. This can be opened in *QGIS* to visualize in a spatial reference system, or can be opened using any software that supports floating point *.tif* files.
#
# 
#
# **Third** is the band math raster saved in the *.hdr* file format. Note that the data contained here should be the same as in the *.tif* file, so it's a matter of preference as to which may be more useful. This single-band *.hdr* can also be opened in *QGIS*.
#
# **Fourth** is a histogram of the band math data contained in the image. The histogram illustrates the 90th percentile value, which may be useful in the segmentation step (e.g., see [`batch.segment_create_mask`](tutorial_batch.html#batch.segment_create_mask)).
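#
# As a minimal sketch, the 90th percentile can also be checked directly from one of the single-band band-math rasters using the same I/O tools as above. The filename below is hypothetical; adjust it to a file actually written to your `band_math_mcari2-800-670-550` folder.
# +
import os
import numpy as np

fname_bm = os.path.join(base_dir, 'band_math_mcari2-800-670-550',
                        'Wells_rep2_20180628_16h56m_pika_gige_7_plot_1011-band-math.bip')  # hypothetical filename
hsbatch.io.read_cube(fname_bm)
bm_array = hsbatch.io.spyfile.open_memmap()  # single-band MCARI2 array
print('MCARI2 90th percentile: {0:.3f}'.format(np.nanpercentile(bm_array, 90)))
# -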
# ***
#
# ## `batch.segment_create_mask`
# Batch processing tool to create a masked array on many datacubes. `batch.segment_create_mask` is typically used after `batch.segment_band_math` to mask all the datacubes in a directory based on the result of the band math process. [[API]](api/hs_process.batch.html#hs_process.batch.segment_create_mask)
#
# **Note:** The following `batch` example builds on the results of the [`spatial_mod.crop_many_gdf` tutorial](tutorial_spatial_mod.html#spatial_mod.crop_many_gdf) and [`batch.segment_band_math`](tutorial_batch.html#batch.segment_band_math). Please complete both the `spatial_mod.crop_many_gdf` and `batch.segment_band_math` tutorial examples to be sure your directories (i.e., `base_dir` and `mask_dir`) are populated with image files. The following example will be using datacubes located in: `F:\\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf`
# based on MCARI2 images located in: `F:\\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\band_math_mcari2-800-670-550`
#
# Load and initialize the `batch` module, ensuring `base_dir` is a valid directory
# +
import os
from hs_process import batch
base_dir = os.path.join(data_dir, 'spatial_mod', 'crop_many_gdf')
print(os.path.isdir(base_dir))
hsbatch = batch(base_dir, search_ext='.bip',
progress_bar=True) # searches for all files in ``base_dir`` with a ".bip" file extension
# -
# There must be a single-band image that will be used to determine which datacube pixels are to be masked (determined via the `mask_dir` parameter). Point to the directory that contains the MCARI2 images.
mask_dir = os.path.join(base_dir, 'band_math_mcari2-800-670-550')
print(os.path.isdir(mask_dir))
# Indicate how the MCARI2 images should be used to determine which hyperspectral pixels are to be masked. The available parameters for controlling this are `mask_thresh`, `mask_percentile`, and `mask_side`. We will mask out all pixels that fall below the MCARI2 90th percentile.
mask_percentile = 90
mask_side = 'lower'
# Finally, indicate the folder to save the masked datacubes and perform the batch masking via `batch.segment_create_mask`
folder_name = 'mask_mcari2_90th'
hsbatch.segment_create_mask(base_dir=base_dir, mask_dir=mask_dir,
folder_name=folder_name,
name_append='mask-mcari2-90th', write_geotiff=True,
mask_percentile=mask_percentile,
mask_side=mask_side, out_force=True)
# `batch.segment_create_mask` creates a new folder in `base_dir` named according to the `folder_name` parameter (in this case the new directory is `F:\\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\mask_mcari2_90th`) which contains several data products.
#
# The **first** is `mask-stats.csv`: a spreadsheet containing the band math threshold value for each image file. In this example, the MCARI2 value corresponding to the 90th percentile is listed.
stats_fname = 'mask-{0}-pctl-{1}.csv'.format(mask_side, mask_percentile)
pd.read_csv(os.path.join(base_dir, 'mask_mcari2_90th', stats_fname)).head(5)
# **Second** is a `geotiff` file for each of the image cubes after the masking procedure. This can be opened in *QGIS* to visualize in a spatial reference system, or can be opened using any software that supports floating point *.tif* files. The masked pixels are saved as `null` values and should render transparently.
#
#
# 
#
# **Third** is the full hyperspectral datacube, also with the masked pixels saved as ``null`` values. Note that the only pixels remaining are the 10% with the highest MCARI2 values.
#
# 
#
# **Fourth** is the mean spectra across the unmasked datacube pixels. This is illustrated above by the green line plot (the light green shadow represents the standard deviation for each band).
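#
# As a minimal sketch, the masked mean spectra can be loaded and plotted just like the `batch.cube_to_spectra` example earlier. The filename below is hypothetical; adjust it to a `.spec` file actually written to your `mask_mcari2_90th` folder.
# +
import os
import seaborn as sns

fname_spec = os.path.join(base_dir, 'mask_mcari2_90th',
                          'Wells_rep2_20180628_16h56m_pika_gige_7_plot_1011-mask-mcari2-90th-spec-mean.spec')  # hypothetical filename
hsbatch.io.read_spec(fname_spec)
wl = list(hsbatch.io.tools.meta_bands.values())
refl = hsbatch.io.spyfile_spec.load().flatten() * 100
ax = sns.lineplot(x=wl, y=refl, label='Masked mean spectra (MCARI2 > 90th percentile)')
ax.set_xlabel('Wavelength (nm)', weight='bold')
ax.set_ylabel('Reflectance (%)', weight='bold')
# -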
# ***
#
# ## `batch.spatial_crop`
# Iterates through a spreadsheet that provides necessary information about how each image should be cropped and how it should be saved. [[API]](api/hs_process.batch.html#hs_process.batch.spatial_crop)
#
# If `gdf` is passed (a `geopandas.GeoDataFrame` polygon file), the cropped images will be shifted to the center of the appropriate "plot" polygon.
#
# **Tips and Tricks for** `fname_sheet` **when** `gdf` **is not passed**
#
# If `gdf` is not passed, `fname_sheet` may have the following required column headings that correspond to the relevant parameters in [`spatial_mod.crop_single`](tutorial_spatial_mod.html#spatial_mod.crop_single) and [`spatial_mod.crop_many_gdf`](tutorial_spatial_mod.html#spatial_mod.crop_many_gdf):
#
# 1. "directory"
# 2. "name_short"
# 3. "name_long"
# 4. "ext"
# 5. "pix_e_ul"
# 6. "pix_n_ul".
#
# With this minimum input, `batch.spatial_crop` will read in each image, crop from the upper left pixel (determined as `pix_e_ul`/`pix_n_ul`) to the lower right pixel calculated based on `crop_e_pix`/`crop_n_pix` (which is the width of the cropped area in units of pixels).
#
# **Note:** `crop_e_pix` and `crop_n_pix` have default values (see [`defaults.crop_defaults`](api/hs_process.defaults.html#hs_process.defaults)), but they can also be passed specifically for each datacube by including appropriate columns in `fname_sheet` (which takes precedence over `defaults.crop_defaults`).
#
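#
# As a minimal sketch of that minimum input (every path and pixel value below is hypothetical), such a spreadsheet could be assembled with `pandas` and saved before passing its filename to `fname_sheet`:
# +
import os
import pandas as pd

fname_sheet_df = pd.DataFrame({
    'directory': [data_dir],                        # hypothetical: folder containing the datacube
    'name_short': ['Wells_rep2_20180628_16h56m'],   # hypothetical short file name
    'name_long': ['pika_gige_7'],                   # hypothetical long file name
    'ext': ['.bip'],
    'pix_e_ul': [100],                              # hypothetical upper-left pixel (easting)
    'pix_n_ul': [50]})                              # hypothetical upper-left pixel (northing)
fname_sheet_df.to_csv(os.path.join(data_dir, 'fname_sheet_example.csv'), index=False)
# -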
# `fname_sheet` may also have the following optional column headings:
#
# 1. "crop_e_pix"
# 2. "crop_n_pix"
# 3. "crop_e_m"
# 4. "crop_n_m"
# 5. "buf_e_pix"
# 6. "buf_n_pix"
# 7. "buf_e_m"
# 8. "buf_n_m"
# 9. "plot_id"
#
# **More** ``fname_sheet`` **Tips and Tricks**
#
# 1. These optional inputs passed via `fname_sheet` allow more control over exactly how the images are to be cropped. For a more detailed explanation of the information that many of these columns are intended to contain, see the documentation for [`spatial_mod.crop_single`](tutorial_spatial_mod.html#spatial_mod.crop_single) and [`spatial_mod.crop_many_gdf`](tutorial_spatial_mod.html#spatial_mod.crop_many_gdf). Those parameters not referenced should be apparent in the API examples and tutorials.
# 2. If the column names are different in `fname_sheet` than described here, [`defaults.spat_crop_cols`](api/hs_process.defaults.html#hs_process.defaults) can be modified to indicate which columns correspond to the relevant information.
# 3. Any other columns can be added to `fname_sheet`, but `batch.spatial_crop` does not use them in any way.
#
# **Note:** The following `batch` example only actually processes *a single* hyperspectral image. If more datacubes were present in `base_dir`, however, `batch.spatial_crop` would process all datacubes that were available.
#
# **Note:** This example uses `spatial_mod.crop_many_gdf` to crop many plots from a datacube using a polygon geometry file describing the spatial extent of each plot.
#
# Load and initialize the `batch` module, checking to be sure the directory exists.
# +
import os
import geopandas as gpd
import pandas as pd
from hs_process import batch
base_dir = data_dir
print(os.path.isdir(base_dir))
hsbatch = batch(base_dir, search_ext='.bip', dir_level=0,
progress_bar=True) # searches for all files in ``base_dir`` with a ".bip" file extension
# -
# Load the plot geometry as a `geopandas.GeoDataFrame`
fname_gdf = os.path.join(data_dir, 'plot_bounds.geojson')
gdf = gpd.read_file(fname_gdf)
# Perform the spatial cropping using the *"many_gdf"* `method`. Note that nothing is being passed to `fname_sheet` here, so `batch.spatial_crop` is simply going to attempt to crop all plots contained within `gdf` that overlap with any datacubes in `base_dir`.
#
# Passing `fname_sheet` directly is definitely more flexible for customization. However, some customization is possible while not passing `fname_sheet`. In the example below, we set an easting and northing buffer, as well as limit the number of plots to crop to 40. These defaults trickle through to `spatial_mod.crop_many_gdf()`, so by setting them on the `batch` object, they will be recognized when calculating crop boundaries from `gdf`.
# +
import warnings
hsbatch.io.defaults.crop_defaults.buf_e_m = 2 # Sets buffer in the easting direction (units of meters)
hsbatch.io.defaults.crop_defaults.buf_n_m = 0.5
hsbatch.io.defaults.crop_defaults.n_plots = 40 # We can limit the number of plots to process from gdf
with warnings.catch_warnings(): # Suppresses the UserWarning that is issued
warnings.simplefilter('ignore')
hsbatch.spatial_crop(base_dir=base_dir, method='many_gdf',
gdf=gdf, out_force=True)
# -
# A new folder was created in `base_dir` - `F:\\nigo0024\Documents\hs_process_demo\spatial_crop` - that contains the cropped datacubes and the cropped `geotiff` images. The Plot ID from the `gdf` is used to name each datacube according to its plot ID. The `geotiff` images can be opened in *QGIS* to visualize the images after cropping them.
#
# 
#
# The cropped images were brightened in *QGIS* to emphasize the cropped boundaries. The plot boundaries are overlaid for reference (notice the 2.0 m buffer on the East/West ends and the 0.5 m buffer on the North/South sides).
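#
# As a quick, minimal check, the contents of the new `spatial_crop` folder can be listed to confirm what was written (this only lists files and assumes the default output folder name shown above):
# +
import os
from glob import glob

crop_dir = os.path.join(base_dir, 'spatial_crop')
fnames_bip = sorted(glob(os.path.join(crop_dir, '*.bip')))
print('Cropped datacubes found: {0}'.format(len(fnames_bip)))
for f in fnames_bip[:5]:  # show the first few
    print(os.path.basename(f))
# -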
# ## `batch.spectra_derivative`
# Calculates the numeric spectral derivative for each spectra in `fname_list` and writes the result as a ".spec" file. [[API]](api/hs_process.batch.html#hs_process.batch.spectra_derivative)
#
# **Note:** The following `batch` example builds on the results of the [`batch.cube_to_spectra` tutorial](tutorial_batch.html#batch.cube_to_spectra). Please complete both the [`spatial_mod.crop_many_gdf`](tutorial_spatial_mod.html#spatial_mod.crop_many_gdf) and [`batch.cube_to_spectra` tutorial](tutorial_batch.html#batch.cube_to_spectra) examples to be sure your directory (i.e., `base_dir`) is populated with multiple hyperspectral spectra. The following example will be using spectra located in the following directory: `F:\\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\cube_to_spec`
#
# Load and initialize the `batch` module, checking to be sure the directory exists.
# +
import os
from hs_process import batch
base_dir = os.path.join(data_dir, 'spatial_mod', 'crop_many_gdf', 'cube_to_spec')
print(os.path.isdir(base_dir))
hsbatch = batch(base_dir, search_ext='.spec', progress_bar=True)
# -
# Use ``batch.spectra_derivative`` to calculate the numeric spectral derivative for each of the .spec files in ``base_dir``.
hsbatch.spectra_derivative(base_dir=base_dir, out_force=True)
# Use seaborn to visualize the derivative spectra of plots 1011, 1012, and 1013.
import seaborn as sns
import re
fname_list = [os.path.join(base_dir, 'spec_derivative', 'Wells_rep2_20180628_16h56m_pika_gige_7_plot_1011-spec-derivative-order-1.spec'),
os.path.join(base_dir, 'spec_derivative', 'Wells_rep2_20180628_16h56m_pika_gige_7_plot_1012-spec-derivative-order-1.spec'),
os.path.join(base_dir, 'spec_derivative', 'Wells_rep2_20180628_16h56m_pika_gige_7_plot_1013-spec-derivative-order-1.spec')]
ax = None
for fname in fname_list:
hsbatch.io.read_spec(fname)
meta_bands = list(hsbatch.io.tools.meta_bands.values())
data = hsbatch.io.spyfile_spec.open_memmap().flatten() * 100
hist = hsbatch.io.spyfile_spec.metadata['history']
pix_n = re.search('<pixel number: (?s)(.*)>] ->', hist).group(1)
if ax is None:
ax = sns.lineplot(x=meta_bands, y=data, label='Plot '+hsbatch.io.name_plot+' (n='+pix_n+')')
else:
ax = sns.lineplot(x=meta_bands, y=data, label='Plot '+hsbatch.io.name_plot+' (n='+pix_n+')', ax=ax)
ax.set(ylim=(-1.5, 1.5))
ax.set_xlabel('Wavelength (nm)', weight='bold')
ax.set_ylabel('Derivative reflectance (%)', weight='bold')
ax.set_title(r'API Example: `batch.spectra_derivative`', weight='bold')
# ***
#
# ## `batch.spectra_combine`
# Batch processing tool to gather all pixels from every image in a directory, compute the mean and standard deviation, and save as a single spectra (i.e., a spectra file is equivalent to a single spectral pixel with no spatial information). [[API]](api/hs_process.batch.html#hs_process.batch.spectra_combine)
#
# Visualize the individual spectra by opening in *Spectronon*.
#
# 
#
# Notice that there is a range in radiance values across the various reference panels (e.g., the radiance in the green region ranges from ~26k to ~28k μW sr<sup>-1</sup> cm<sup>-2</sup> μm<sup>-1</sup>).
#
# **Note:** The following example will load in several small hyperspectral radiance datacubes *(not reflectance)* that were previously cropped manually (via Spectronon software). These datacubes represent the radiance values of grey reference panels that were placed in the field to provide data necessary for converting radiance imagery to reflectance. These particular datacubes were extracted from several different images captured within ~10 minutes of each other.
#
# Load and initialize the `batch` module, checking to be sure the directory exists.
# +
import os
from hs_process import batch
base_dir = os.path.join(data_dir, 'cube_ref_panels')
print(os.path.isdir(base_dir))
hsbatch = batch(base_dir)
# -
# Combine all the *radiance* datacubes in the directory via `batch.spectra_combine`.
hsbatch.spectra_combine(base_dir=base_dir, search_ext='bip', dir_level=0, out_force=True)
# Visualize the combined spectra by opening in *Spectronon*. The solid line represents the mean radiance spectra across all pixels and images in `base_dir`, and the lighter, slightly transparent line represents the standard deviation of the radiance across all pixels and images in `base_dir`.
#
# 
#
# Notice the lower signal at the oxygen absorption region (near 770 nm). After converting datacubes to reflectance, it may be desirable to spectrally clip this region (see [`spec_mod.spectral_clip`](tutorial_spec_mod.html#spec_mod.spectral_clip)).
# ***
#
# ## `batch.spectra_to_csv`
# Reads all the `.spec` files in a directory and saves their reflectance information to a `.csv`. `batch.spectra_to_csv` is identical to `batch.spectra_to_df` except a `.csv` file is saved rather than returning a `pandas.DataFrame`. [[API]](api/hs_process.batch.html#hs_process.batch.spectra_to_csv)
#
# **Note:** The following example builds on the results of the [`batch.segment_band_math` tutorial](tutorial_batch.html#batch.segment_band_math) and [`batch.segment_create_mask`](tutorial_batch.html#batch.segment_create_mask). Please complete each of those tutorial examples to be sure your directory (i.e., `F:\\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\mask_mcari2_90th`) is populated with image files.
#
# Load and initialize the `batch` module, checking to be sure the directory exists.
# +
import os
from hs_process import batch
base_dir = os.path.join(data_dir, 'spatial_mod', 'crop_many_gdf', 'mask_mcari2_90th')
print(os.path.isdir(base_dir))
hsbatch = batch(base_dir)
# -
# Read all the `.spec` files in `base_dir` and save them to a `.csv` file.
hsbatch.spectra_to_csv(base_dir=base_dir, search_ext='spec', dir_level=0)
# When `stats-spectra.csv` is opened, we can see that each row is a `.spec` file from a different plot, and each column is a particular spectral band/wavelength.
import pandas as pd
pd.read_csv(os.path.join(base_dir, 'stats-spectra.csv')).head(5)
# ***
#
# ## `batch.spectra_to_df`
# Reads all the .spec files in a directory and returns their data as a `pandas.DataFrame` object. `batch.spectra_to_df` is identical to `batch.spectra_to_csv` except a `pandas.DataFrame` is returned rather than saving a `.csv` file. [[API]](api/hs_process.batch.html#hs_process.batch.spectra_to_df)
#
# **Note:** The following example builds on the results of the [`batch.segment_band_math` tutorial](tutorial_batch.html#batch.segment_band_math) and [`batch.segment_create_mask`](tutorial_batch.html#batch.segment_create_mask). Please complete each of those tutorial examples to be sure your directory (i.e., `F:\\nigo0024\Documents\hs_process_demo\spatial_mod\crop_many_gdf\mask_mcari2_90th`) is populated with image files.
#
# Load and initialize the `batch` module, checking to be sure the directory exists.
# +
import os
from hs_process import batch
base_dir = os.path.join(data_dir, 'spatial_mod', 'crop_many_gdf', 'mask_mcari2_90th')
print(os.path.isdir(base_dir))
hsbatch = batch(base_dir)
# -
# Read all the `.spec` files in `base_dir` and load them to `df_spec`, a `pandas.DataFrame`.
df_spec = hsbatch.spectra_to_df(base_dir=base_dir, search_ext='spec', dir_level=0)
df_spec.head(5)
# Each row is a `.spec` file from a different plot, and each column is a particular spectral band.
#
# It is somewhat confusing to conceptualize spectral data by band number (as opposed to the wavelength it represents). `hs_process.hs_tools.get_band` can be used to retrieve spectral data for all plots via indexing by wavelength. Say we need to access reflectance at 710 nm for each plot (in this case, the 710 nm band is band number 155).
df_710nm = df_spec[['fname', 'plot_id', hsbatch.io.tools.get_band(710)]]
df_710nm.head(5)
# ***
#
# ## `batch.spectral_clip`
# Batch processing tool to spectrally clip multiple datacubes in the same way. [[API]](api/hs_process.batch.html#hs_process.batch.spectral_clip)
#
# **Note:** The following example builds on the results of the [`batch.spatial_crop` tutorial](tutorial_batch.html#batch.spatial_crop). Please complete the `batch.spatial_crop` tutorial example to be sure your directory (i.e., `base_dir`) is populated with multiple hyperspectral datacubes. The following example will be using datacubes located in the following directory: `F:\\nigo0024\Documents\hs_process_demo\spatial_crop`
#
# Load and initialize the `batch` module, checking to be sure the directory exists.
# +
import os
from hs_process import batch
base_dir = os.path.join(data_dir, 'spatial_crop')
print(os.path.isdir(base_dir))
hsbatch = batch(base_dir, search_ext='.bip',
progress_bar=True) # searches for all files in ``base_dir`` with a ".bip" file extension
# -
# Use `batch.spectral_clip` to clip all spectral bands below *420 nm* and above *880 nm*, as well as the bands near the oxygen absorption (i.e., *760-776 nm*) and water absorption (i.e., *813-827 nm*) regions.
hsbatch.spectral_clip(base_dir=base_dir, folder_name='spec_clip',
wl_bands=[[0, 420], [760, 776], [813, 827], [880, 1000]],
out_force=True)
# Use [Seaborn](https://seaborn.pydata.org/index.html) to visualize the spectra of a single pixel in one of the processed images.
# +
import seaborn as sns
fname = os.path.join(base_dir, 'Wells_rep2_20180628_16h56m_pika_gige_7_plot_1011-spatial-crop.bip')
hsbatch.io.read_cube(fname)
spy_mem = hsbatch.io.spyfile.open_memmap() # datacube before clipping
meta_bands = list(hsbatch.io.tools.meta_bands.values())
fname = os.path.join(base_dir, 'spec_clip', 'Wells_rep2_20180628_16h56m_pika_gige_7_plot_1011-spec-clip.bip')
hsbatch.io.read_cube(fname)
spy_mem_clip = hsbatch.io.spyfile.open_memmap() # datacube after clipping
meta_bands_clip = list(hsbatch.io.tools.meta_bands.values())
ax = sns.lineplot(x=meta_bands, y=spy_mem[26][29], label='Before spectral clipping', linewidth=3)
ax = sns.lineplot(x=meta_bands_clip, y=spy_mem_clip[26][29], label='After spectral clipping', ax=ax)
ax.set_xlabel('Wavelength (nm)', weight='bold')
ax.set_ylabel('Reflectance (%)', weight='bold')
ax.set_title(r'API Example: `batch.spectral_clip`', weight='bold')
# -
# ***
#
# ## `batch.spectral_mimic`
# Batch processing tool to spectrally mimic a multispectral sensor for multiple datacubes in the same way. [[API]](api/hs_process.batch.html#hs_process.batch.spectral_mimic)
#
# **Note:** The following example builds on the results of the [batch.spatial_crop tutorial](tutorial_batch.html#batch.spatial_crop). Please complete the `batch.spatial_crop` tutorial example to be sure your directory (i.e., `base_dir`) is populated with multiple hyperspectral datacubes. The following example will be using datacubes located in the following directory: `F:\\nigo0024\Documents\hs_process_demo\spatial_crop`
#
# Load and initialize the `batch` module, checking to be sure the directory exists.
# +
import os
from hs_process import batch
base_dir = os.path.join(data_dir, 'spatial_crop')
print(os.path.isdir(base_dir))
hsbatch = batch(base_dir, search_ext='.bip', progress_bar=True) # searches for all files in ``base_dir`` with a ".bip" file extension
# -
# Use `batch.spectral_mimic` to spectrally mimic the Sentinel-2A multispectral satellite sensor.
hsbatch.spectral_mimic(base_dir=base_dir, folder_name='spec_mimic',
name_append='sentinel-2a',
sensor='sentinel-2a', center_wl='weighted',
out_force=True)
# Use `seaborn` to visualize the spectra of a single pixel in one of the processed images.
import seaborn as sns
fname = os.path.join(base_dir, 'Wells_rep2_20180628_16h56m_pika_gige_7_plot_1011-spatial-crop.bip')
hsbatch.io.read_cube(fname)
spy_mem = hsbatch.io.spyfile.open_memmap() # datacube before mimicking
meta_bands = list(hsbatch.io.tools.meta_bands.values())
fname = os.path.join(base_dir, 'spec_mimic', 'Wells_rep2_20180628_16h56m_pika_gige_7_plot_1011-sentinel-2a.bip')
hsbatch.io.read_cube(fname)
spy_mem_sen2a = hsbatch.io.spyfile.open_memmap() # datacube after mimicking
meta_bands_sen2a = list(hsbatch.io.tools.meta_bands.values())
ax = sns.lineplot(x=meta_bands, y=spy_mem[26][29], label='Hyperspectral (Pika II)', linewidth=3)
ax = sns.lineplot(x=meta_bands_sen2a, y=spy_mem_sen2a[26][29], label='Sentinel-2A "mimic"', marker='o', ms=6, ax=ax)
ax.set_xlabel('Wavelength (nm)', weight='bold')
ax.set_ylabel('Reflectance (%)', weight='bold')
ax.set_title(r'API Example: `batch.spectral_mimic`', weight='bold')
# Use `spec_mod.spectral_mimic` to mimic the [Sentera 6x spectral configuration](https://sentera.com/6x/) and compare to both hyperspectral and Sentinel-2A.
# +
hsbatch.spectral_mimic(base_dir=base_dir, folder_name='spec_mimic',
name_append='sentera-6x',
sensor='sentera_6x', center_wl='weighted',
out_force=True)
fname = os.path.join(base_dir, 'spec_mimic', 'Wells_rep2_20180628_16h56m_pika_gige_7_plot_1011-sentera-6x.bip')
hsbatch.io.read_cube(fname)
spy_mem_6x = hsbatch.io.spyfile.open_memmap() # datacube after mimicking
meta_bands_6x = list(hsbatch.io.tools.meta_bands.values())
ax = sns.lineplot(x=meta_bands, y=spy_mem[26][29], label='Hyperspectral (Pika II)', linewidth=3)
ax = sns.lineplot(x=meta_bands_sen2a, y=spy_mem_sen2a[26][29], label='Sentinel-2A "mimic"', marker='o', ms=6, ax=ax)
ax = sns.lineplot(x=meta_bands_6x, y=spy_mem_6x[26][29], label='Sentera 6X "mimic"', color='green', marker='o', ms=8, ax=ax)
ax.set_xlabel('Wavelength (nm)', weight='bold')
ax.set_ylabel('Reflectance (%)', weight='bold')
ax.set_title(r'API Example: `batch.spectral_mimic`', weight='bold')
# -
# And finally, mimic the [Micasense RedEdge-MX](https://micasense.com/rededge-mx/) and compare to hyperspectral, Sentinel-2A, and Sentera 6X.
# +
hsbatch.spectral_mimic(base_dir=base_dir, folder_name='spec_mimic',
name_append='micasense-rededge-3',
sensor='micasense_rededge_3', center_wl='weighted',
out_force=True)
fname = os.path.join(base_dir, 'spec_mimic', 'Wells_rep2_20180628_16h56m_pika_gige_7_plot_1011-micasense-rededge-3.bip')
hsbatch.io.read_cube(fname)
spy_mem_re3 = hsbatch.io.spyfile.open_memmap() # datacube after mimicking
meta_bands_re3 = list(hsbatch.io.tools.meta_bands.values())
ax = sns.lineplot(x=meta_bands, y=spy_mem[26][29], label='Hyperspectral (Pika II)', linewidth=3)
ax = sns.lineplot(x=meta_bands_sen2a, y=spy_mem_sen2a[26][29], label='Sentinel-2A "mimic"', marker='o', ms=6, ax=ax)
ax = sns.lineplot(x=meta_bands_6x, y=spy_mem_6x[26][29], label='Sentera 6X "mimic"', color='green', marker='o', ms=8, ax=ax)
ax = sns.lineplot(x=meta_bands_re3, y=spy_mem_re3[26][29], label='Micasense RedEdge 3 "mimic"', color='red', marker='o', ms=8, ax=ax)
ax.set_xlabel('Wavelength (nm)', weight='bold')
ax.set_ylabel('Reflectance (%)', weight='bold')
ax.set_title(r'API Example: `batch.spectral_mimic`', weight='bold')
# -
# ***
#
# ## `batch.spectral_resample`
# Batch processing tool to spectrally resample (a.k.a. "bin") multiple datacubes in the same way. [[API]](api/hs_process.batch.html#hs_process.batch.spectral_resample)
#
# **Note:** The following example builds on the results of the [batch.spatial_crop tutorial](tutorial_batch.html#batch.spatial_crop). Please complete the `batch.spatial_crop` tutorial example to be sure your directory (i.e., `base_dir`) is populated with multiple hyperspectral datacubes. The following example will be using datacubes located in the following directory: `F:\\nigo0024\Documents\hs_process_demo\spatial_crop`
#
# Load and initialize the `batch` module, checking to be sure the directory exists.
# +
import os
from hs_process import batch
base_dir = os.path.join(data_dir, 'spatial_crop')
print(os.path.isdir(base_dir))
hsbatch = batch(base_dir, search_ext='.bip', progress_bar=True) # searches for all files in ``base_dir`` with a ".bip" file extension
# -
# Use `batch.spectral_resample` to bin (a.k.a., "group") all spectral bands into 20 nm bandwidth bands (from ~2.3 nm bandwidth originally) on a per-pixel basis.
hsbatch.spectral_resample(base_dir=base_dir, folder_name='spec_bin',
name_append='spec-bin-20',
bandwidth=20, out_force=True)
# Use `seaborn` to visualize the spectra of a single pixel in one of the processed images.
import seaborn as sns
fname = os.path.join(base_dir, 'Wells_rep2_20180628_16h56m_pika_gige_7_plot_1011-spatial-crop.bip')
hsbatch.io.read_cube(fname)
spy_mem = hsbatch.io.spyfile.open_memmap() # datacube before resampling
meta_bands = list(hsbatch.io.tools.meta_bands.values())
fname = os.path.join(base_dir, 'spec_bin', 'Wells_rep2_20180628_16h56m_pika_gige_7_plot_1011-spec-bin-20.bip')
hsbatch.io.read_cube(fname)
spy_mem_bin = hsbatch.io.spyfile.open_memmap() # datacube after resampling
meta_bands_bin = list(hsbatch.io.tools.meta_bands.values())
ax = sns.lineplot(x=meta_bands, y=spy_mem[26][29], label='Hyperspectral (Pika II)', linewidth=3)
ax = sns.lineplot(x=meta_bands_bin, y=spy_mem_bin[26][29], label='Spectral resample (20 nm)', marker='o', ms=6, ax=ax)
ax.set_xlabel('Wavelength (nm)', weight='bold')
ax.set_ylabel('Reflectance (%)', weight='bold')
ax.set_title(r'API Example: `batch.spectral_resample`', weight='bold')
# ***
#
# ## `batch.spectral_smooth`
# Batch processing tool to spectrally smooth multiple datacubes in the same way. [[API]](api/hs_process.batch.html#hs_process.batch.spectral_smooth)
#
# **Note:** The following example builds on the results of the [batch.spatial_crop tutorial](tutorial_batch.html#batch.spatial_crop). Please complete the `batch.spatial_crop` tutorial example to be sure your directory (i.e., `base_dir`) is populated with multiple hyperspectral datacubes. The following example will be using datacubes located in the following directory: `F:\\nigo0024\Documents\hs_process_demo\spatial_crop`
#
# Load and initialize the `batch` module, checking to be sure the directory exists.
# +
import os
from hs_process import batch
base_dir = os.path.join(data_dir, 'spatial_crop')
print(os.path.isdir(base_dir))
hsbatch = batch(base_dir, search_ext='.bip', progress_bar=True) # searches for all files in ``base_dir`` with a ".bip" file extension
# -
# Use `batch.spectral_smooth` to perform a *Savitzky-Golay* smoothing operation on each image/pixel in `base_dir`. The `window_size` and `order` can be adjusted to achieve desired smoothing results.
hsbatch.spectral_smooth(base_dir=base_dir, folder_name='spec_smooth',
window_size=11, order=2, out_force=True)
# Use [Seaborn](https://seaborn.pydata.org/index.html) to visualize the spectra of a single pixel in one of the processed images.
# +
import seaborn as sns
fname = os.path.join(base_dir, 'Wells_rep2_20180628_16h56m_pika_gige_7_plot_1011-spatial-crop.bip')
hsbatch.io.read_cube(fname)
spy_mem = hsbatch.io.spyfile.open_memmap() # datacube before smoothing
meta_bands = list(hsbatch.io.tools.meta_bands.values())
fname = os.path.join(base_dir, 'spec_smooth', 'Wells_rep2_20180628_16h56m_pika_gige_7_plot_1011-spec-smooth.bip')
hsbatch.io.read_cube(fname)
spy_mem_clip = hsbatch.io.spyfile.open_memmap() # datacube after smoothing
meta_bands_clip = list(hsbatch.io.tools.meta_bands.values())
ax = sns.lineplot(x=meta_bands, y=spy_mem[26][29], label='Before spectral smoothing', linewidth=3)
ax = sns.lineplot(x=meta_bands_clip, y=spy_mem_clip[26][29], label='After spectral smoothing', ax=ax)
ax.set_xlabel('Wavelength (nm)', weight='bold')
ax.set_ylabel('Reflectance (%)', weight='bold')
ax.set_title(r'API Example: `batch.spectral_smooth`', weight='bold')
| hs_process/examples/tutorial_batch.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.4.1
# language: julia
# name: julia-1.4
# ---
# ## Ideal age (@id ideal-age)
#
# The tracer equation for the ideal age is:
#
# $$\left(\partial_t + \mathbf{T}\right) \boldsymbol{a} = 1 - \frac{\boldsymbol{a}}{τ} \, (\boldsymbol{z} \le z_0),$$
#
# where the sink term on the right clamps the age to $0$ at the surface (where $\boldsymbol{z} \le z_0$).
# The smaller the timescale $\tau$, the quicker $\boldsymbol{a}$ is restored to $0$ at the surface.
#
# AIBECS can interpret tracer equations as long as you arrange them under the generic form:
#
# $$\big(\partial_t + \mathbf{T}(\boldsymbol{p}) \big) \boldsymbol{x} = \boldsymbol{G}(\boldsymbol{x}, \boldsymbol{p}),$$
#
# where $\mathbf{T}(\boldsymbol{p})$ is the transport, $\boldsymbol{G}(\boldsymbol{x}, \boldsymbol{p})$ is the net local sources and sinks, and $\boldsymbol{p}$ is the vector of model parameters.
# We will then use the AIBECS to simulate the ideal age by finding the steady-state of the system, i.e., the solution of
#
# $$\partial_t \boldsymbol{x} = \boldsymbol{F}(\boldsymbol{x}, \boldsymbol{p}) = \boldsymbol{G}(\boldsymbol{x}, \boldsymbol{p}) - \mathbf{T}(\boldsymbol{p}) \, \boldsymbol{x} = 0.$$
#
# In this tutorial, we will simulate the ideal age by
# 1. defining functions for `T(p)` and `G(x,p)`,
# 1. defining the parameters `p`,
# 1. generating the state function `F(x,p)` and solving the associated steady-state problem,
# 1. and finally making a plot of our simulated ideal age.
#
# We start by telling Julia that we want to use the AIBECS package and an ocean circulation.
# The footnoted reference describes the OCIM2 (the Ocean Circulation Inverse Model[^1]); note, however, that the code below loads the OCCA circulation via `OCCA.load()`.
#
# [^1]:
# <NAME>., & <NAME>. (2019). Radiocarbon and helium isotope constraints on deep ocean ventilation and mantle‐³He sources. Journal of Geophysical Research: Oceans, 124, 3036–3057. doi:[10.1029/2018JC014716](https://doi.org/10.1029/2018JC014716)
#
using AIBECS
grd, T_OCCA = OCCA.load()
sum(iswet(grd))
# **Note**
# If it's your first time, Julia may ask you to download the circulation data (OCCA here), in which case you should accept (i.e., type `y` and "return").
# Once downloaded, AIBECS will remember where it downloaded the file and it will only load it from your laptop.
#
# `grd` is an `OceanGrid` object containing information about the 3D grid of the loaded circulation, and `T_OCCA` is the transport matrix representing advection and diffusion.
#
# We define the function `T(p)` as
T(p) = T_OCCA
# (It turns out the circulation `T(p)` does not actually depend on `p`, but that's how we must define it anyway, i.e., as a function of `p`.)
#
# The local sources and sinks for the age take the form
function G(x,p)
@unpack τ, z₀ = p
return @. 1 - x / τ * (z ≤ z₀)
end
# as per the tracer equation.
# The `@unpack` line unpacks the parameters `τ` and `z₀`.
# The `return` line returns the net sources and sinks.
# (The `@.` "macro" tells Julia that the operations apply to every element.)
#
# We can define the vector `z` of depths with `depthvec`.
z = depthvec(grd)
# Now we must construct a type for `p`, the parameters.
# This type must contain our parameters `τ` and `z₀`.
struct IdealAgeParameters{U} <: AbstractParameters{U}
τ::U
z₀::U
end
# The type is now ready for us to generate an instance of the parameter `p`.
# Let's use `τ = 1.0` (s) and `z₀` the minimum depth of the model.
p = IdealAgeParameters(1.0, 30.0)
# We now use the AIBECS to generate the state function $\boldsymbol{F}$ (and its Jacobian) via
F, ∇ₓF = state_function_and_Jacobian(T, G)
# (`∇ₓF` is the **Jacobian** of the state function $\nabla_{\boldsymbol{x}}\boldsymbol{F}$, calculated automatically using dual numbers.)
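#
# (Aside: Newton-type steady-state solvers, like the quasi-Newton algorithm used below, rely on this Jacobian in updates of the form
#
# $$\boldsymbol{x}_{k+1} = \boldsymbol{x}_k - \left[\nabla_{\boldsymbol{x}}\boldsymbol{F}(\boldsymbol{x}_k, \boldsymbol{p})\right]^{-1} \boldsymbol{F}(\boldsymbol{x}_k, \boldsymbol{p}),$$
#
# iterated until $\boldsymbol{F}(\boldsymbol{x}_k, \boldsymbol{p}) \approx \boldsymbol{0}$, which is why `∇ₓF` is generated alongside `F`. In practice the inverse is never formed explicitly; a linear solve is used instead.)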
#
# Now that `F(x,p)` and `p` are defined, we are going to solve for the steady-state.
# But first, we must create a `SteadyStateProblem` object that contains `F`, `∇ₓF`, `p`, and an initial guess `x_init` for the age.
# (`SteadyStateProblem` is specialized from [DiffEqBase](https://github.com/JuliaDiffEq/DiffEqBase.jl) for AIBECS models.)
#
# Let's make a vector of 0's for our initial guess.
nb = sum(iswet(grd)) # number of wet boxes
x_init = zeros(nb) # Start with age = 0 everywhere
# Now we can create our `SteadyStateProblem` instance
prob = SteadyStateProblem(F, ∇ₓF, x_init, p)
# And finally, we can `solve` this problem, using the AIBECS `CTKAlg()` algorithm,
age = solve(prob, CTKAlg())
# This should take a few seconds.
#
# To conclude this tutorial, let's have a look at the age using AIBECS' plotting recipes and [Plots.jl](https://github.com/JuliaPlots/Plots.jl).
using Plots
# We first convert the age to years
# (because the default SI unit we used, i.e., seconds, is a bit small relative to global ocean timescales).
age_in_yrs = age * u"s" .|> u"yr"
# And we take a horizontal slice at about 2000 m.
plothorizontalslice(age_in_yrs, grd, depth=2000u"m", color=:magma, levels=range(0.0, 1500, length = 16), clim=(0.0,1500))
# Or look at the horizontal mean
plothorizontalmean(age_in_yrs, grd)
# That's it for this tutorial...
# Good job!
| expt/1_ideal_age.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 2A.ml - Features or model
#
# We always wonder which machine learning model would best fit our problem. Should we choose a complex model with raw features, or rather a simple model with reworked features?
# %matplotlib inline
import sklearn
import matplotlib.pyplot as plt
import random
import math
import numpy
# Let's pause for a moment on the classifier-comparison fresco shown on the [scikit-learn](http://scikit-learn.org/stable/) website.
from IPython.core.display import Image
Image("http://scikit-learn.org/stable/_images/sphx_glr_plot_classifier_comparison_001.png", width=1000)
# An example frequently used to illustrate the difficulty of the problem is that of two concentric circles (second row).
# +
X1 = [ (random.gauss(0,1), random.gauss(0,1)) for i in range(0,100) ]
X2 = [ (random.gauss(4,0.5), random.random() * 2 * math.pi) for i in range(0,100) ]
X2 = [ (x[0]*math.cos(x[1]), x[0]*math.sin(x[1])) for x in X2 ]
Y1 = [ 0 for i in X1 ]
Y2 = [ 1 for i in X2 ]
plt.plot( [ x[0] for x in X1], [ x[1] for x in X1 ], "o")
plt.plot( [ x[0] for x in X2], [ x[1] for x in X2 ], "o")
# -
# We apply a simple linear model: logistic regression (quite similar to LDA = Linear Discriminant Analysis).
X = numpy.array( X1 + X2 )
Y = numpy.array( Y1 + Y2 )
import sklearn
from sklearn.linear_model import LogisticRegression
clr = LogisticRegression()
clr.fit(X,Y)
# The separation produced by the model is far from optimal.
# +
from matplotlib.colors import ListedColormap
import numpy as np
def plot_clf_2classes(clf, X, y, title):
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max()
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max()
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 500), np.linspace(y_min, y_max, 500))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.close('all')
plt.imshow(Z, interpolation='nearest',
extent=(xx.min(), xx.max(), yy.min(), yy.max()),
aspect='auto', origin='lower', cmap=plt.cm.coolwarm)
    contours = plt.contour(xx, yy, Z, levels=[0], linewidths=2, linestyles='--')
plt.scatter(X[:, 0], X[:, 1], s=30, c=y, cmap=plt.cm.Paired)
plt.xticks(())
plt.yticks(())
plt.title(title)
plot_clf_2classes(clr, X, Y, "LogisticRegression")
# -
# We then move on to a model that is still simple but slower to train: k-nearest neighbors.
from sklearn.neighbors import KNeighborsClassifier
clr = KNeighborsClassifier()
clr.fit(X,Y)
plot_clf_2classes(clr, X, Y, "kNN")
# This is much better, but the model is no longer as interpretable as the previous one, and it is significantly slower to compute. The larger the boundary between the classes, the more examples are needed in the training set. The other models (decision tree, neural networks) produce separations that are more or less close to the optimal solution. The SVC model works well on this problem.
from sklearn.svm import SVC
clr = SVC()
clr.fit(X,Y)
plot_clf_2classes(clr, X, Y, "SVC")
# This approach is somewhat seductive. It gives the impression that it is enough to go through the list of available models to find the one that fits best. On a problem this simple and small, that is not an issue. A very large number of observations, however, considerably narrows the choice: nearest neighbors or SVMs are not really recommended in that case. The number of variables or features can also become an obstacle: in high dimension, optimization algorithms converge less well.
# When nothing works anymore, we must go back to the data and try to understand why the models fail to _capture_ the information. We then try to build a non-linear combination of the initial variables. In our case, it is enough to add the products of the initial variables to reduce the problem to a linear classification problem: $x_1$, $x_2$, $x_1^2$, $x_2^2$, $x_1 x_2$.
Xext = numpy.zeros( (len(X), 5) )
Xext[:,:2] = X
Xext[:,2] = X[:,0]**2
Xext[:,3] = X[:,1]**2
Xext[:,4] = X[:,0]*X[:,1]
clr = LogisticRegression()
clr.fit(Xext,Y)
clr.coef_
# +
def plot_clf_2classes(clf, X, y, title):
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max()
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max()
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 500), np.linspace(y_min, y_max, 500))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel(), (xx*xx).ravel(), (yy*yy).ravel(), (xx*yy).ravel()])
Z = Z.reshape(xx.shape)
plt.close('all')
plt.imshow(Z, interpolation='nearest',
extent=(xx.min(), xx.max(), yy.min(), yy.max()),
aspect='auto', origin='lower', cmap=plt.cm.coolwarm)
    contours = plt.contour(xx, yy, Z, levels=[0], linewidths=2, linestyles='--')
plt.scatter(X[:, 0], X[:, 1], s=30, c=y, cmap=plt.cm.Paired)
plt.xticks(())
plt.yticks(())
plt.title(title)
plot_clf_2classes(clr, Xext, Y, "LogisticRegression + features**2")
# -
# A problem that was not linear became linear by adding the right features. More generally, it is useful to try to convert any prior knowledge about a problem into features so as to help the model learn. The most frequent case is computing summary statistics over a group of related observations, as sketched in the example below:
#
# * We have all the purchases made by the users of a website.
# * We want to predict the probability that a user buys something during their next visit.
# * To do so we use averages computed over all previous purchases: we predict at the *purchase* level with features computed over groups of *purchases*.
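# A minimal sketch of this kind of feature engineering with pandas (the `purchases` table and its columns are made up for illustration):
import pandas
purchases = pandas.DataFrame({"user_id": [1, 1, 2, 2, 2],
                              "amount": [10.0, 25.0, 5.0, 7.5, 30.0]})
# one row per user, with features aggregated over all of that user's past purchases
user_features = purchases.groupby("user_id")["amount"].agg(["mean", "sum", "count"])
user_features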
| _doc/notebooks/expose/ml_features_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
############## PLEASE RUN THIS CELL FIRST! ###################
# import everything and define a test runner function
from importlib import reload
from helper import run
import network, compactfilter
# -
# ### Exercise 1
# Verify that the block which had your previous transaction matches the filter for your address.
#
# +
# Exercise 1
from block import Block
from compactfilter import GetCFiltersMessage, CFilterMessage
from ecc import PrivateKey
from helper import decode_base58, hash256, little_endian_to_int
from network import SimpleNode, GetDataMessage, BLOCK_DATA_TYPE
from script import p2pkh_script
from tx import Tx
block_hash = bytes.fromhex('00000006439f526ce138524262a29500258db39130e1ddf0c168ca59002877b8')
block_height = 75912
passphrase = b'<PASSWORD>'
secret = little_endian_to_int(hash256(passphrase))
private_key = PrivateKey(secret=secret)
addr = private_key.point.address(network="signet")
print(addr)
# convert the address to a ScriptPubKey using decode_base58 and p2pkh_script
script_pubkey = p2pkh_script(decode_base58(addr))
# connect to signet.programmingbitcoin.com
node = SimpleNode('signet.programmingbitcoin.com', network="signet")
# complete the handshake
node.handshake()
# create a GetCFiltersMessage(start_height, stop_hash) using the block height and block hash
getcfilters = GetCFiltersMessage(start_height=block_height, stop_hash=block_hash)
# send the getcfilters message
node.send(getcfilters)
# wait for the CFilterMessage command
cfilter = node.wait_for(CFilterMessage)
# check that the compact filter's block hash is the same as the block hash
if cfilter.block_hash != block_hash:
raise RuntimeError('Wrong Compact Filter')
# check if your ScriptPubKey is in the filter
if not script_pubkey in cfilter:
raise RuntimeError('ScriptPubKey not in filter')
# create a GetDataMessage
getdata = GetDataMessage()
# add the BLOCK_DATA_TYPE with the block hash
getdata.add_data(BLOCK_DATA_TYPE, block_hash)
# send the GetDataMessage
node.send(getdata)
# wait for the Block
b = node.wait_for(Block)
# use the get_transactions(script_pubkey) method of Block to get transactions
txs = b.get_transactions(script_pubkey)
# print the first one serialized and hexadecimal
print(txs[0].serialize().hex())
# -
# ### Exercise 2
#
# #### Make [this test](/edit/session8/network.py) pass: `network.py:SimpleNodeTest:test_get_block`
# +
# Exercise 2
reload(network)
run(network.SimpleNodeTest('test_get_block'))
# -
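# A sketch of the logic `get_block` needs, reusing the getdata flow from Exercise 1; the real method belongs in `network.py` as `SimpleNode.get_block(block_hash)`, so the helper below is only illustrative.
def get_block_sketch(node, block_hash):
    # ask the peer for the full block by its hash
    getdata = GetDataMessage()
    getdata.add_data(BLOCK_DATA_TYPE, block_hash)
    node.send(getdata)
    # the peer answers with a block message, parsed into a Block object
    return node.wait_for(Block)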
from block import Block
from compactfilter import GetCFCheckPointMessage, CFCheckPointMessage, GetCFHeadersMessage, CFHeadersMessage, GetCFiltersMessage, CFilterMessage
from helper import hash256
from network import SimpleNode
num_checkpoints = 20
with open('block_headers.testnet', 'rb') as f:
headers = [Block.parse_header(f) for _ in range(num_checkpoints * 1000)]
block_hashes = [b.hash() for b in headers]
node = SimpleNode('testnet.programmingbitcoin.com', network="testnet")
node.handshake()
get_cfcheckpoint = GetCFCheckPointMessage(stop_hash=block_hashes[-1])
node.send(get_cfcheckpoint)
cfcheckpoint = node.wait_for(CFCheckPointMessage)
height = 0
for checkpoint in cfcheckpoint.filter_headers:
get_cfheaders = GetCFHeadersMessage(start_height=height, stop_hash=block_hashes[height+1000])
node.send(get_cfheaders)
cfheaders = node.wait_for(CFHeadersMessage)
if cfheaders.last_header != checkpoint:
raise RuntimeError(f'checkpoint mismatch {cfheaders.last_header.hex()} vs {checkpoint.hex()}')
node.send(GetCFiltersMessage(start_height=height, stop_hash=block_hashes[height+999]))
for i in range(1000):
fb = node.wait_for(CFilterMessage).filter_bytes
if hash256(fb) != cfheaders.filter_hashes[i]:
raise RuntimeError(f'{i}: filter does not match hash {hash256(fb).hex()} vs {cfheaders.filter_hashes[i].hex()}')
height += 1000
print(cfheaders.last_header.hex())
# ### Exercise 3
# You have been sent some unknown number of sats to your address on signet.
#
# Send all of it back (minus fees) to `mqYz6JpuKukHzPg94y4XNDdPCEJrNkLQcv` using only the networking protocol.
#
# This should be a many-input, one-output transaction.
#
# Turn on logging in `SimpleNode` if you need to debug
#
# +
# Exercise 3
from block import Block
from compactfilter import GetCFiltersMessage, CFilterMessage
from ecc import PrivateKey
from helper import decode_base58, hash160, hash256, little_endian_to_int
from network import GetHeadersMessage, HeadersMessage, SimpleNode, BLOCK_DATA_TYPE
from script import p2pkh_script
from tx import Tx, TxIn, TxOut
start_block_hex = '00000031144d96f3d297c17b092c7bed5acd3d027e37dd4a866f3313614bd4ca'
start_block = bytes.fromhex(start_block_hex)
start_height = 76218
end_block = b'\x00' * 32
passphrase = b'<PASSWORD>'
secret = little_endian_to_int(hash256(passphrase))
private_key = PrivateKey(secret=secret)
addr = private_key.point.address(network="signet")
print(addr)
h160 = decode_base58(addr)
my_script_pubkey = p2pkh_script(h160)
target_address = 'mqYz6JpuKukHzPg94y4XNDdPCEJrNkLQcv'
target_h160 = decode_base58(target_address)
target_script = p2pkh_script(target_h160)
fee = 200 # fee in satoshis
# connect to signet.programmingbitcoin.com in signet mode
node = SimpleNode('signet.programmingbitcoin.com', network="signet")
# complete the handshake
node.handshake()
# create GetHeadersMessage with the start_block as the start_block and end_block as the end block
get_headers = GetHeadersMessage(start_block=start_block, end_block=end_block)
# send the GetHeadersMessage
node.send(get_headers)
# wait for the headers message
headers = node.wait_for(HeadersMessage)
# check that the headers are valid
if not headers.is_valid():
raise RuntimeError('bad headers')
# get the 20th hash (index 19) from the header.headers array
stop_hash = headers.headers[19].hash()
# create a GetCFiltersMessage
get_cfilters = GetCFiltersMessage(start_height=start_height, stop_hash=stop_hash)
# send the GetCFiltersMessage
node.send(get_cfilters)
# loop 100 times
for _ in range(100):
# wait for the CFilterMessage
cfilter = node.wait_for(CFilterMessage)
# check to see if your ScriptPubKey is in the filter
if my_script_pubkey in cfilter:
# set block_hash to cfilter's block hash and break
block_hash = cfilter.block_hash
print(block_hash.hex())
break
# get the block object using the get_block method of node
block_obj = node.get_block(block_hash)
# initialize the utxos array
utxos = []
# grab the txs from the block using get_transactions(my_script_pubkey) method
txs = block_obj.get_transactions(my_script_pubkey)
# there should be one transaction
if len(txs) != 1:
raise RuntimeError("incorrect number of transactions")
# set utxos to the tx's utxos for our address using find_utxos(addr) method of the first tx
utxos = txs[0].find_utxos(addr)
# there should be one utxo
if len(utxos) != 1:
raise RuntimeError("incorrect number of utxos")
# initialize the tx_ins array
tx_ins = []
# prev_tx, prev_index, prev_amount are what we get in the first utxo
prev_tx, prev_index, prev_amount = utxos[0]
# create TxIn and add to array
tx_ins.append(TxIn(prev_tx, prev_index))
# calculate the output amount (prev_amount - fee)
output_amount = prev_amount - fee
# create TxOut
tx_out = TxOut(output_amount, target_script)
# create transaction on signet
tx_obj = Tx(1, tx_ins, [tx_out], 0, network="signet")
# sign the only input in the tx
tx_obj.sign_input(0, private_key)
# print the tx's id
print(tx_obj.id())
# send this signed transaction on the network
node.send(tx_obj)
| session8/complete/session8.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Q#
# language: qsharp
# name: iqsharp
# ---
# # Quantum Fourier Transforms
#
# The **"QFT (Quantum Fourier Transform)"** quantum kata is a series of exercises designed
# to teach you the basics of quantum Fourier transform (QFT). It covers implementing QFT and using
# it to perform simple state transformations.
#
# Each task is wrapped in one operation preceded by the description of the task.
# Your goal is to fill in the blank (marked with the `// ...` comments)
# with some Q# code that solves the task. To verify your answer, run the cell using Ctrl/⌘+Enter.
#
# Within each section, tasks are given in approximate order of increasing difficulty;
# harder ones are marked with asterisks.
# To begin, first prepare this notebook for execution (if you skip this step, you'll get a "Syntax does not match any known patterns" error when you try to execute Q# code in the next cells):
%package Microsoft.Quantum.Katas::0.11.2006.403
# > The package versions in the output of the cell above should always match. If you are running the Notebooks locally and the versions do not match, please install the IQ# version that matches the version of the `Microsoft.Quantum.Katas` package.
# > <details>
# > <summary><u>How to install the right IQ# version</u></summary>
# > For example, if the version of `Microsoft.Quantum.Katas` package above is 0.1.2.3, the installation steps are as follows:
# >
# > 1. Stop the kernel.
# > 2. Uninstall the existing version of IQ#:
# > dotnet tool uninstall microsoft.quantum.iqsharp -g
# > 3. Install the matching version:
# > dotnet tool install microsoft.quantum.iqsharp -g --version 0.1.2.3
# > 4. Reinstall the kernel:
# > dotnet iqsharp install
# > 5. Restart the Notebook.
# > </details>
#
# ## Part I. Implementing Quantum Fourier Transform
#
# This sequence of tasks uses the implementation of QFT described in Nielsen & Chuang.
# All numbers in this kata use big endian encoding: most significant bit of the number
# is stored in the first (leftmost) bit/qubit.
# ### Task 1.1. 1-qubit QFT
#
# **Input:**
#
# A qubit in state $|\psi\rangle = x_0 |0\rangle + x_1 |1\rangle$.
#
# **Goal:**
#
# Apply QFT to this qubit, i.e., transform it to a state
# $\frac{1}{\sqrt{2}} \big((x_0 + x_1) |0\rangle + (x_0 - x_1) |1\rangle\big)$.
#
# In other words, transform a basis state $|j\rangle$ into a state $\frac{1}{\sqrt{2}} \big(|0\rangle + e^{2\pi i \cdot \frac{j}{2}}|1\rangle\big)$ .
#
# +
%kata T11_OneQubitQFT_Test
operation OneQubitQFT (q : Qubit) : Unit is Adj+Ctl {
// ...
}
# -
# ### Task 1.2. Rotation gate
#
# **Inputs:**
#
# 1. A qubit in state $|\psi\rangle = \alpha |0\rangle + \beta |1\rangle$.
#
# 2. An integer $k \geq 0$.
#
# **Goal:**
#
# Change the state of the qubit to $\alpha |0\rangle + \beta \cdot e^{\frac{2\pi i}{2^{k}}} |1\rangle$.
#
# > Be careful about not introducing an extra global phase!
# This is going to be important in the later tasks.
# +
%kata T12_Rotation_Test
operation Rotation (q : Qubit, k : Int) : Unit is Adj+Ctl {
// ...
}
# -
# ### Task 1.3. Prepare binary fraction exponent (classical input)
#
# **Inputs:**
#
# 1. A qubit in state $|\psi\rangle = \alpha |0\rangle + \beta |1\rangle$.
#
# 2. An array of $n$ bits $[j_1, j_2, ..., j_n]$, stored as `Int[]` ($ j_k \in \{0,1\}$).
#
# **Goal:**
#
# Change the state of the qubit to $\alpha |0\rangle + \beta \cdot e^{2\pi i \cdot 0.j_1 j_2 ... j_n} |1\rangle$,
# where $0.j_1 j_2 ... j_n$ is a binary fraction, similar to decimal fractions:
#
# $$0.j_1 j_2 ... j_n = j_1 \cdot \frac{1}{2^1} + j_2 \cdot \frac{1}{2^2} + ... j_n \cdot \frac{1}{2^n}$$
#
# +
%kata T13_BinaryFractionClassical_Test
operation BinaryFractionClassical (q : Qubit, j : Int[]) : Unit is Adj+Ctl {
// ...
}
# -
# ### Task 1.4. Prepare binary fraction exponent (quantum input)
#
# **Inputs:**
#
# 1. A qubit in state $|\psi\rangle = \alpha |0\rangle + \beta |1\rangle$.
#
# 2. A register of $n$ qubits in state $|j_1 j_2 ... j_n\rangle$.
#
# **Goal:**
#
# Change the state of the input
# from $(\alpha |0\rangle + \beta |1\rangle) \otimes |j_1 j_2 ... j_n\rangle$
# to $(\alpha |0\rangle + \beta \cdot e^{2\pi i \cdot 0.j_1 j_2 ... j_n} |1\rangle) \otimes |j_1 j_2 ... j_n\rangle$,
#
# where $0.j_1 j_2 ... j_n$ is a binary fraction corresponding to the basis state $j_1 j_2 ... j_n$ of the register.
#
# > The register of qubits can be in superposition as well;
# the behavior in this case is defined by behavior on the basis states and the linearity of unitary transformations.
# +
%kata T14_BinaryFractionQuantum_Test
operation BinaryFractionQuantum (q : Qubit, jRegister : Qubit[]) : Unit is Adj+Ctl {
// ...
}
# -
# ### Task 1.5. Prepare binary fraction exponent in place (quantum input)
#
# **Input:**
#
# A register of $n$ qubits in state $|j_1 j_2 ... j_n \rangle$.
#
# **Goal:**
#
# Change the state of the register
# from $|j_1\rangle \otimes |j_2 ... j_n\rangle$
# to $\frac{1}{\sqrt{2}} \big(|0\rangle + e^{2\pi i \cdot 0.j_1 j_2 ... j_n} |1\rangle\big) \otimes |j_2 ... j_n\rangle$.
#
# > The register of qubits can be in superposition as well;
# the behavior in this case is defined by behavior on the basis states and the linearity of unitary transformations.
#
# <details>
# <summary><b>Need a hint? Click here</b></summary>
# This task is very similar to task 1.4, but the digit $j_1$ is encoded in-place, using task 1.1.
# </details>
# +
%kata T15_BinaryFractionQuantumInPlace_Test
operation BinaryFractionQuantumInPlace (register : Qubit[]) : Unit is Adj+Ctl {
// ...
}
# -
# ### Task 1.6. Reverse the order of qubits
#
# **Input:**
#
# A register of $n$ qubits in state $|x_1 x_2 ... x_n \rangle$.
#
# **Goal:**
#
# Reverse the order of qubits, i.e., convert the state of the register to $|x_n ... x_2 x_1\rangle$.
# +
%kata T16_ReverseRegister_Test
operation ReverseRegister (register : Qubit[]) : Unit is Adj+Ctl {
// ...
}
# -
# ### Task 1.7. Quantum Fourier transform
#
# **Input:**
#
# A register of $n$ qubits in state $|j_1 j_2 ... j_n \rangle$.
#
# **Goal:**
#
# Apply quantum Fourier transform to the input register, i.e., transform it to a state
#
# $$\frac{1}{\sqrt{2^{n}}} \sum_{k=0}^{2^{n}-1} e^{2\pi i \cdot \frac{jk}{2^{n}}} |k\rangle = $$
# $$\begin{align}= &\frac{1}{\sqrt{2}} \big(|0\rangle + e^{2\pi i \cdot 0.j_n} |1\rangle\big) \otimes \\
# \otimes &\frac{1}{\sqrt{2}} \big(|0\rangle + e^{2\pi i \cdot 0.j_{n-1} j_n} |1\rangle\big) \otimes ... \\
# \otimes &\frac{1}{\sqrt{2}} \big(|0\rangle + e^{2\pi i \cdot 0.j_1 j_2 ... j_{n-1} j_n} |1\rangle\big)\end{align}$$
#
# > The register of qubits can be in superposition as well;
# the behavior in this case is defined by behavior on the basis states and the linearity of unitary transformations.
#
# > You can do this with a library call, but we recommend
# implementing the algorithm yourself for learning purposes, using the previous tasks.
#
#
# <details>
# <summary><b>Need a hint? Click here</b></summary>
# Consider preparing a different state first and transforming it to the goal state:
#
# $\frac{1}{\sqrt{2}} \big(|0\rangle + e^{2\pi i \cdot 0.j_1 j_2 ... j_{n-1} j_n} |1\rangle\big) \otimes ... \otimes \frac{1}{\sqrt{2}} \big(|0\rangle + e^{2\pi i \cdot 0.j_{n-1} j_n} |1\rangle\big) \otimes \frac{1}{\sqrt{2}} \big(|0\rangle + e^{2\pi i \cdot 0.j_n} |1\rangle\big)$
# </details>
# +
%kata T17_QuantumFourierTransform_Test
operation QuantumFourierTransform (register : Qubit[]) : Unit is Adj+Ctl {
// ...
}
# -
# ### Task 1.8. Inverse QFT
#
# **Input:**
#
# A register of $n$ qubits in state $|j_1 j_2 ... j_n \rangle$.
#
# **Goal:**
#
# Apply inverse QFT to the input register, i.e., transform it to a state
# $\frac{1}{\sqrt{2^{n}}} \sum_{k=0}^{2^{n}-1} e^{-2\pi i \cdot \frac{jk}{2^{n}}} |k\rangle$.
#
# <details>
# <summary><b>Need a hint? Click here</b></summary>
# Inverse QFT is literally the inverse transformation of QFT.
# Do you know a quantum way to express this?
# </details>
# +
%kata T18_InverseQFT_Test
operation InverseQFT (register : Qubit[]) : Unit is Adj+Ctl {
// ...
}
# -
# ## Part II. Using the Quantum Fourier Transform
#
# This section offers you tasks on state preparation and state analysis
# that can be solved using QFT (or inverse QFT). It is possible to solve them
# without QFT, but we recommend that you try to come up with a QFT-based solution.
# ### Task 2.1. Prepare an equal superposition of all basis states
#
# **Input:**
#
# A register of $n$ qubits in state $|0...0\rangle$.
#
# **Goal:**
#
# Prepare an equal superposition of all basis vectors from $|0...0\rangle$ to $|1...1\rangle$
# (i.e., state $\frac{1}{\sqrt{2^{n}}} \big(|0...0\rangle + ... + |1...1\rangle\big)$).
# +
%kata T21_PrepareEqualSuperposition_Test
operation PrepareEqualSuperposition (register : Qubit[]) : Unit is Adj+Ctl {
// ...
}
# -
# ### Task 2.2. Prepare a periodic state
#
# **Inputs:**
#
# 1. A register of $n$ qubits in state $|0...0\rangle$.
#
# 2. An integer frequency F ($0 \leq F \leq 2^{n}-1$).
#
# **Goal:**
#
# Prepare a periodic state with frequency F on this register:
#
# $$\frac{1}{\sqrt{2^{n}}} \sum_k e^{2\pi i \cdot \frac{Fk}{2^{n}}} |k\rangle$$
#
# > For example, for $n = 2$ and $F = 1$ the goal state is $\frac{1}{2}\big(|0\rangle + i|1\rangle - |2\rangle - i|3\rangle\big)$.
#
# > If you're using `DumpMachine` to debug your solution,
# remember that this kata uses big endian encoding of states,
# while `DumpMachine` uses little endian encoding.
#
# <details>
# <summary><b>Need a hint? Click here</b></summary>
# Which basis state can be mapped to this state using QFT?
# </details>
# +
%kata T22_PreparePeriodicState_Test
operation PreparePeriodicState (register : Qubit[], F : Int) : Unit is Adj+Ctl {
// ...
}
# -
# ### Task 2.3. Prepare a periodic state with alternating $1$ and $-1$ amplitudes
#
# **Input:**
#
# A register of $n$ qubits in state $|0...0\rangle$.
#
# **Goal:**
#
# Prepare a periodic state with alternating $1$ and $-1$ amplitudes of basis states:
#
# $$\frac{1}{\sqrt{2^{n}}} \big(|0\rangle - |1\rangle + |2\rangle - |3\rangle + ... - |2^{n}-1\rangle\big)$$
#
# > For example, for $n = 2$ the goal state is $\frac{1}{2} \big(|0\rangle - |1\rangle + |2\rangle - |3\rangle\big)$.
#
# <details>
# <summary><b>Need a hint? Click here</b></summary>
# Which basis state can be mapped to this state using QFT? Which frequency would allow you to use task 2.2 here?
# </details>
# +
%kata T23_PrepareAlternatingState_Test
operation PrepareAlternatingState (register : Qubit[]) : Unit is Adj+Ctl {
// ...
}
# -
# ### Task 2.4. Prepare an equal superposition of all even basis states
#
# **Input:**
#
# A register of $n$ qubits in state $|0...0\rangle$.
#
# **Goal:**
#
# Prepare an equal superposition of all even basis vectors:
# $\frac{1}{\sqrt{2^{n-1}}} \big(|0\rangle + |2\rangle + ... + |2^{n}-2\rangle\big)$.
#
# <details>
# <summary><b>Need a hint? Click here</b></summary>
# Which superposition of two basis states can be mapped to this state using QFT?
# Use the solutions to tasks 2.1 and 2.3 to figure out the answer.
# </details>
# +
%kata T24_PrepareEqualSuperpositionOfEvenStates_Test
operation PrepareEqualSuperpositionOfEvenStates (register : Qubit[]) : Unit is Adj+Ctl {
// ...
}
# -
# ### Task 2.5. Prepare a square-wave signal
#
# **Input:**
#
# A register of $n\geq2$ qubits in state $|0...0\rangle$.
#
# **Goal:**
#
# Prepare a periodic state with alternating $1, 1, -1, -1$ amplitudes of basis states:
# $$\frac{1}{\sqrt{2^{n}}} \big(|0\rangle + |1\rangle - |2\rangle - |3\rangle + ... - |2^{n}-2\rangle - |2^{n}-1\rangle\big)$$
#
# <details>
# <summary><b>Need a hint? Click here</b></summary>
# Which superposition of two basis states can be mapped to this state using QFT?
# Remember that sum of two complex amplitudes can be a real number if their imaginary parts cancel out.
# </details>
# +
%kata T25_PrepareSquareWaveSignal_Test
operation PrepareSquareWaveSignal (register : Qubit[]) : Unit is Adj+Ctl {
// ...
}
# -
# ### Task 2.6. Get the frequency of a signal
#
# **Input:**
#
# A register of $n\geq2$ qubits in state
# $\frac{1}{\sqrt{2^{n}}} \sum_k e^{2\pi i \cdot \frac{Fk}{2^{n}}} |k\rangle$, $0\leq F\leq 2^{n}-1$.
#
# **Goal:**
#
# Return the frequency F of the "signal" encoded in this state.
# +
%kata T26_Frequency_Test
operation Frequency (register : Qubit[]) : Int {
// ...
return -1;
}
| QFT/QFT.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/bala-codes/SENTIMENT-ANALYSIS-ON-TWITTER-POSTS-USING-ML-AND-DL/blob/master/codes%20(ML)/Part-3%20-%20TWITTER%20-%20Sentiment%20Analysis%20-%20Single%20Prediction%20Check.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="vAuR7lxYWS3v" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 125} outputId="95d453b6-13f1-41a9-e420-e6917d85414e"
from google.colab import drive
drive.mount('/content/drive')
# + id="z-2nvjesWUEy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 105} outputId="f12a34b5-e986-45d7-f81e-b746a8dc923e"
import pickle
import string
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import nltk
import re
from nltk.corpus import stopwords
nltk.download('stopwords')
nltk.download('wordnet')
# + id="XBNrKH_sWenx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 70} outputId="fee7a63e-37b9-4950-8d63-6db908d071ed"
# %%time
with open("/content/drive/My Drive/Machine Learning Projects/SENTIMENT ANALYSIS - TWITTER POSTS REVIEWS/SOURCE CODES AND DATASETS/PACKAGE 1 - SOURCE CODES AND FILES/Pretrained Models/PA_classifier.pkl", "rb") as fin:
vectorizer, PA_classifier = pickle.load(fin)
with open("/content/drive/My Drive/Machine Learning Projects/SENTIMENT ANALYSIS - TWITTER POSTS REVIEWS/SOURCE CODES AND DATASETS/PACKAGE 1 - SOURCE CODES AND FILES/Pretrained Models/calibrator_classifier.pkl", "rb") as fin:
vectorizer, calibrator_classifier = pickle.load(fin)
with open("/content/drive/My Drive/Machine Learning Projects/SENTIMENT ANALYSIS - TWITTER POSTS REVIEWS/SOURCE CODES AND DATASETS/PACKAGE 1 - SOURCE CODES AND FILES/Pretrained Models/SVC_classifier.pkl", "rb") as fin:
vectorizer, SVC_classifier = pickle.load(fin)
with open("/content/drive/My Drive/Machine Learning Projects/SENTIMENT ANALYSIS - TWITTER POSTS REVIEWS/SOURCE CODES AND DATASETS/PACKAGE 1 - SOURCE CODES AND FILES/Pretrained Models/bb_classifier.pkl", "rb") as fin:
vectorizer, bb_classifier = pickle.load(fin)
with open("/content/drive/My Drive/Machine Learning Projects/SENTIMENT ANALYSIS - TWITTER POSTS REVIEWS/SOURCE CODES AND DATASETS/PACKAGE 1 - SOURCE CODES AND FILES/Pretrained Models/nb_classifier.pkl", "rb") as fin:
vectorizer, nb_classifier = pickle.load(fin)
with open("/content/drive/My Drive/Machine Learning Projects/SENTIMENT ANALYSIS - TWITTER POSTS REVIEWS/SOURCE CODES AND DATASETS/PACKAGE 1 - SOURCE CODES AND FILES/Pretrained Models/logreg_classifier.pkl", "rb") as fin:
vectorizer, logreg_classifier = pickle.load(fin)
print("SUCCESS ALL MODELS LOADED")
# + id="jMZguPi01f6v" colab_type="code" colab={}
contraction_mapping = {"ain't": "is not", "aren't": "are not","can't": "cannot", "'cause": "because", "could've": "could have", "couldn't": "could not",
"didn't": "did not", "doesn't": "does not", "don't": "do not", "hadn't": "had not", "hasn't": "has not", "haven't": "have not",
"he'd": "he would","he'll": "he will", "he's": "he is", "how'd": "how did", "how'd'y": "how do you", "how'll": "how will", "how's": "how is",
"I'd": "I would", "I'd've": "I would have", "I'll": "I will", "I'll've": "I will have","I'm": "I am", "I've": "I have", "i'd": "i would",
"i'd've": "i would have", "i'll": "i will", "i'll've": "i will have","i'm": "i am", "i've": "i have", "isn't": "is not", "it'd": "it would",
"it'd've": "it would have", "it'll": "it will", "it'll've": "it will have","it's": "it is", "let's": "let us", "ma'am": "madam",
"mayn't": "may not", "might've": "might have","mightn't": "might not","mightn't've": "might not have", "must've": "must have",
"mustn't": "must not", "mustn't've": "must not have", "needn't": "need not", "needn't've": "need not have","o'clock": "of the clock",
"oughtn't": "ought not", "oughtn't've": "ought not have", "shan't": "shall not", "sha'n't": "shall not", "shan't've": "shall not have",
"she'd": "she would", "she'd've": "she would have", "she'll": "she will", "she'll've": "she will have", "she's": "she is",
"should've": "should have", "shouldn't": "should not", "shouldn't've": "should not have", "so've": "so have","so's": "so as",
"this's": "this is","that'd": "that would", "that'd've": "that would have", "that's": "that is", "there'd": "there would",
"there'd've": "there would have", "there's": "there is", "here's": "here is","they'd": "they would", "they'd've": "they would have",
"they'll": "they will", "they'll've": "they will have", "they're": "they are", "they've": "they have", "to've": "to have",
"wasn't": "was not", "we'd": "we would", "we'd've": "we would have", "we'll": "we will", "we'll've": "we will have", "we're": "we are",
"we've": "we have", "weren't": "were not", "what'll": "what will", "what'll've": "what will have", "what're": "what are",
"what's": "what is", "what've": "what have", "when's": "when is", "when've": "when have", "where'd": "where did", "where's": "where is",
"where've": "where have", "who'll": "who will", "who'll've": "who will have", "who's": "who is", "who've": "who have",
"why's": "why is", "why've": "why have", "will've": "will have", "won't": "will not", "won't've": "will not have",
"would've": "would have", "wouldn't": "would not", "wouldn't've": "would not have", "y'all": "you all",
"y'all'd": "you all would","y'all'd've": "you all would have","y'all're": "you all are","y'all've": "you all have",
"you'd": "you would", "you'd've": "you would have", "you'll": "you will", "you'll've": "you will have",
"you're": "you are", "you've": "you have", "gonna" : "going to"}
all_punctuations = string.punctuation + '‘’,:”][],'
from bs4 import BeautifulSoup
lemmer = nltk.stem.WordNetLemmatizer()
stop_words = set(stopwords.words('english'))
def tweet_cleaner(text):
newString = text.lower()
newString = BeautifulSoup(newString, "lxml").text
newString = ' '.join([contraction_mapping[t] if t in contraction_mapping else t for t in newString.split(" ")])
newString = re.sub(r'\&\w*;', '', newString)
newString = re.sub('@[^\s]+','',newString)
newString = re.sub(r'\([^)]*\)', '', newString)
newString = re.sub(r'\$\w*', '', newString)
newString = re.sub(r'(https|http)?:\/\/(\w|\.|\/|\?|\=|\&|\%)*\b', '', newString, flags=re.MULTILINE)
newString = re.sub(r'#\w*', '', newString)
newString = re.sub(r'[' + all_punctuations.replace('@', '') + ']+', ' ', newString)
newString = re.sub(r'\b\w{1,2}\b', '', newString)
newString = re.sub(r'\s\s+', ' ', newString)
newString = newString.lstrip(' ')
newString = re.sub('"','', newString)
newString = ' '.join([lemmer.lemmatize(word,'v') for word in newString.split()])
newString = re.sub(r"'s\b","",newString)
newString = re.sub("[^a-zA-Z]", " ", newString)
tokens = [w for w in newString.split() if not w in stop_words]
long_words=[]
for i in tokens:
if len(i)>=3:
long_words.append(i)
return (" ".join(long_words)).strip()
# + id="7XhWq3nFW6J8" colab_type="code" colab={}
# Required functions to predict the sentiment of the input text
def prediction(text):
test = vectorizer.transform(text)
graph,output = ensemble(test)
print("output", output)
if output == 1:
value = 'POSITIVE SENTIMENT'
else:
value = 'NEGATIVE SENTIMENT'
class_labels = ['NEGATIVE','POSITIVE']
j = [graph[0][0],graph[0][1]]
y_pos = np.arange(len(class_labels))
plt.barh(y_pos,j)
plt.yticks(y_pos,class_labels)
plt.title('PREDICTION FOR BEING POS VS NEG')
    plt.xlabel('Percentage')
    plt.ylabel('Labels')
plt.show()
print()
start = "\033[1m"
end = "\033[0;0m"
print('THE GIVEN TEXT IS ' + start + str(value) + end)
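# Soft-voting ensemble: average the class probabilities predicted by the six classifiers and take the argmax as the final label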
def ensemble(x):
pred1 = bb_classifier.predict_proba(x)
pred2 = nb_classifier.predict_proba(x)
pred3 = PA_classifier._predict_proba_lr(x)
pred4 = logreg_classifier.predict_proba(x)
pred5 = calibrator_classifier.predict_proba(x)
pred6 = SVC_classifier._predict_proba_lr(x)
test_pred_prob = np.mean([pred1, pred2, pred3, pred4, pred5, pred6], axis=0)
pred = np.argmax(test_pred_prob, axis=1)
return test_pred_prob,pred
# + [markdown] id="yGhQUc02vlb9" colab_type="text"
# # Give your Input Here
# + id="OvLtam-iXJzI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 405} outputId="f2a9cb0b-82b3-423c-da18-aade3009382d"
# %%time
#String input
x = input("ENTER THE TEXT HERE : ")
x = tweet_cleaner(x)
x=[x,]
prediction(x)
| codes (ML)/Part-3 - TWITTER - Sentiment Analysis - Single Prediction Check.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PySpark
# language: ''
# name: pysparkkernel
# ---
sc.install_pypi_package('boto3')
sc.install_pypi_package('pandas')
sc.install_pypi_package('scipy')
import json
import numpy as np
import pandas as pd
pd.set_option("display.max_columns", 100)
# +
import boto3
BUCKET_NAME = 'ui1-tfm-data'
# AWS authentication credentials
s3 = boto3.client('s3', aws_access_key_id = "xxx",
aws_secret_access_key = "xxx",
aws_session_token="xxx")
# Download the files we are going to work with from the bucket previously created in S3
s3.download_file(BUCKET_NAME, 'air-data/stations.json', '/tmp/stations.json')
s3.download_file(BUCKET_NAME, 'air-data/spain_airQuality.csv', '/tmp/spain_airQuality.csv')
# -
# ---
# <br>
#
# # Stations
# +
# Load the file containing the stations
data = json.load(open('/tmp/stations.json'))
# Create the dataframe
df_stations = pd.DataFrame(data["data"])
# +
# Filter the stations located in Spain
df_stations_spain = df_stations.loc[df_stations['CountryOrTerritory'] == 'Spain']
# Remove the stations outside the peninsula (Canary Islands)
# lat > 35.512 AND lat < 44.512 AND lon > -10.415 AND lon < 5.054
df_stations_spain = df_stations_spain[(df_stations_spain.SamplingPoint_Latitude > 35.512) & (df_stations_spain.SamplingPoint_Latitude < 44.512) & (df_stations_spain.SamplingPoint_Longitude > -10.415) & (df_stations_spain.SamplingPoint_Longitude < 5.054)]
# Select the relevant fields
df_stations_spain = df_stations_spain[['StationLocalId', 'SamplingPoint_Latitude', 'SamplingPoint_Longitude']]
# Rename the fields
df_stations_spain.columns = ['Station', 'Latitude', 'Longitude']
# Reset the dataframe index
df_stations_spain = df_stations_spain.reset_index(drop=True)
# -
print(len(df_stations_spain))
df_stations_spain.head()
# ---
# The final dataset with the information about the measuring stations in Spain contains the following fields:
#
# * **Station**: Identifier code of the station
# * **Latitude**: Latitude coordinate of the station
# * **Longitude**: Longitude coordinate of the station
#
# ---
# <br>
#
# # Measurements
# Load the file and create the dataframe with the pollutant observations
df = pd.read_csv('/tmp/spain_airQuality.csv')
# +
# Select the relevant fields
df_measurements = df[['AirQualityStation', 'AirPollutant', 'Concentration', 'UnitOfMeasurement', 'DatetimeBegin']]
# Rename the fields
df_measurements.columns =['Station', 'AirPollutant', 'Concentration', 'UnitOfMeasurement', 'Datetime']
# Remove the entries whose measurements are null
df_measurements = df_measurements[df_measurements['Concentration'].notna()]
# Convert the date field to Datetime format
df_measurements['Datetime'] = pd.to_datetime(df_measurements['Datetime'])
# Select the entries corresponding to the month of January
df_measurements = df_measurements.loc[df_measurements['Datetime'] < '2020-2-1']
# Reset the dataframe index
df_measurements = df_measurements.reset_index(drop=True)
# -
# Join with the stations dataframe to obtain their location
combined_df = df_measurements.merge(df_stations_spain, left_on='Station', right_on='Station')
combined_df = combined_df[['AirPollutant', 'Concentration', 'UnitOfMeasurement', 'Station', 'Latitude', 'Longitude', 'Datetime']]
print(len(combined_df))
combined_df.head()
# ---
# The final dataset for the air quality measurements thus contains the following fields:
#
#
# * **AirPollutant**: Name identifying the measured pollutant.
# * **Concentration**: Concentration measured for this type of pollutant.
# * **UnitOfMeasurement**: Unit of measurement for this type of pollutant.
# * **Station**: Identifier code of the measuring station that collected the information.
# * **Latitude**: Latitude coordinate of the measuring station.
# * **Longitude**: Longitude coordinate of the measuring station.
# * **Datetime**: Start time of the measurement.
#
# ---
# <br>
#
# # Preparing the final dataset
# <br>
#
# ## Functions used
# +
def get_grid(lon_steps, lat_steps, n):
'''
    Function that generates a dictionary with the position of the cells resulting from dividing the area into nxn cells
'''
grid_dict = {}
lat_stride = lat_steps[1] - lat_steps[0]
lon_stride = lon_steps[1] - lon_steps[0]
count = 0
for lat in lat_steps[:-1]:
for lon in lon_steps[:-1]:
count = count + 1
# Define dimensions of box in grid
upper_left = [lon, lat + lat_stride]
upper_right = [lon + lon_stride, lat + lat_stride]
lower_right = [lon + lon_stride, lat]
lower_left = [lon, lat]
grid_dict[count] = [upper_left[0], upper_left[1], lower_right[0], lower_right[1]]
return grid_dict
N_DIVISIONS = 100 # Number of horizontal and vertical divisions
# lat > 35.512 AND lat < 44.512 AND lon > -10.415 AND lon < 5.054
x_steps = np.linspace(-10.415, 5.054, N_DIVISIONS + 1) # Longitude
y_steps = np.linspace(35.512, 44.512, N_DIVISIONS + 1) # Latitude
grid_dict = get_grid(x_steps, y_steps, N_DIVISIONS) # Dictionary containing the coordinates of each cell
# +
def remove_outliers(df):
'''
    Identifies and removes the outliers of the 'Concentration' field
'''
return df[((df.Concentration - df.Concentration.mean()) / df.Concentration.std()).abs() < 3]
def group_by_day_station(df):
'''
    For each day of the month, compute the mean of the measurements at each station
'''
return df[['Datetime', 'Concentration', 'Station', 'Longitude', 'Latitude']].groupby([df['Datetime'].dt.day, 'Station']).mean().reset_index()
from scipy.interpolate import griddata
def interpolate(df, lon_steps, lat_steps, n):
'''
    Creates a grid with points every 100 km and generates new data for these points by interpolating the already known data
'''
x = df["Longitude"].to_numpy()
y = df["Latitude"].to_numpy()
z = df["Concentration"].to_numpy()
xi, yi = np.meshgrid(lon_steps, lat_steps)
# interpolate
zi = griddata((x,y),z,(xi,yi),method='linear')
    # Use the new values to create a new dataframe
x_column = []
for i in xi:
for j in i:
x_column.append(j)
y_column = []
for i in yi:
for j in i:
y_column.append(j)
z_column = []
for i in zi:
for j in i:
z_column.append(j)
data = [x_column, y_column, z_column]
columns = ['x', 'y', 'z']
return pd.DataFrame(np.array(data).T, columns=columns)
def interpolateAll(df, n):
'''
    Returns a dataframe storing the data generated for each day, per pollutant
'''
interpolated_pollutant_df = pd.DataFrame()
for n_day in range(1,31):
day_df = df.loc[df['Datetime'] == n_day]
interpolated_day_df = interpolate(day_df, x_steps, y_steps, n)
interpolated_day_df['Day'] = n_day
interpolated_pollutant_df = interpolated_pollutant_df.append(interpolated_day_df)
return interpolated_pollutant_df
def locate_point(grid_x, grid_y, point_x, point_y):
'''
    Locates the cell a point belongs to and returns its indices
'''
x_step = grid_x[1]-grid_x[0]
y_step = grid_y[1]-grid_y[0]
cell_x = ((point_x - grid_x[0])//x_step) + 1
cell_y = ((point_y - grid_y[0])//y_step) + 1
return cell_x, cell_y
def get_cell_num(cell_x, cell_y, n):
'''
    Returns the number of a cell given its X and Y indices
'''
return (((cell_y - 1) * (n-1)) + cell_x)
def addCellToAll(df, n):
'''
    Function that iterates over all the entries and adds the cell each one falls in
'''
df['Cell'] = ''
for i, row in df.iterrows():
point_x = row[0] # Longitude
point_y = row[1] # Latitude
cell_x, cell_y = locate_point(x_steps, y_steps, point_x, point_y)
cell_num = get_cell_num(cell_x, cell_y, n+1)
df.at[i,'Cell'] = int(cell_num)
return df
# -
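# Quick illustrative check of the cell helpers (the coordinates below are an arbitrary point inside the grid)
cell_x, cell_y = locate_point(x_steps, y_steps, -3.7, 40.4)
print(get_cell_num(cell_x, cell_y, N_DIVISIONS + 1))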
print(len(combined_df))
combined_df.sample(5)
# +
# Split by pollutant type
df_NO = combined_df.loc[combined_df['AirPollutant'] == 'NO']
df_SO2 = combined_df.loc[combined_df['AirPollutant'] == 'SO2']
df_NO2 = combined_df.loc[combined_df['AirPollutant'] == 'NO2']
df_NOX = combined_df.loc[combined_df['AirPollutant'] == 'NOX as NO2']
df_CO = combined_df.loc[combined_df['AirPollutant'] == 'CO']
df_O3 = combined_df.loc[combined_df['AirPollutant'] == 'O3']
df_PM25 = combined_df.loc[combined_df['AirPollutant'] == 'PM2.5']
df_PM10 = combined_df.loc[combined_df['AirPollutant'] == 'PM10']
df_C6H6 = combined_df.loc[combined_df['AirPollutant'] == 'C6H6']
# Identify and remove outliers from each new dataframe
df_NO = remove_outliers(df_NO)
df_SO2 = remove_outliers(df_SO2)
df_NO2 = remove_outliers(df_NO2)
df_NOX = remove_outliers(df_NOX)
df_CO = remove_outliers(df_CO)
df_O3 = remove_outliers(df_O3)
df_PM25 = remove_outliers(df_PM25)
df_PM10 = remove_outliers(df_PM10)
df_C6H6 = remove_outliers(df_C6H6)
# Group each pollutant by day and station and compute the mean
df_NO_by_day = group_by_day_station(df_NO)
df_SO2_by_day = group_by_day_station(df_SO2)
df_NO2_by_day = group_by_day_station(df_NO2)
df_NOX_by_day = group_by_day_station(df_NOX)
df_CO_by_day = group_by_day_station(df_CO)
df_O3_by_day = group_by_day_station(df_O3)
df_PM25_by_day = group_by_day_station(df_PM25)
df_PM10_by_day = group_by_day_station(df_PM10)
df_C6H6_by_day = group_by_day_station(df_C6H6)
# Use the current data to obtain interpolated values of the above every 100 km
interpollated_df_NO = interpolateAll(df_NO_by_day, N_DIVISIONS)
interpollated_df_SO2 = interpolateAll(df_SO2_by_day, N_DIVISIONS)
interpollated_df_NO2 = interpolateAll(df_NO2_by_day, N_DIVISIONS)
interpollated_df_NOX = interpolateAll(df_NOX_by_day, N_DIVISIONS)
interpollated_df_CO = interpolateAll(df_CO_by_day, N_DIVISIONS)
interpollated_df_O3 = interpolateAll(df_O3_by_day, N_DIVISIONS)
interpollated_df_PM25 = interpolateAll(df_PM25_by_day, N_DIVISIONS)
interpollated_df_PM10 = interpolateAll(df_PM10_by_day, N_DIVISIONS)
interpollated_df_C6H6 = interpolateAll(df_C6H6_by_day, N_DIVISIONS)
# Rename the 'Concentration' column according to the pollutant before merging all the dataframes into a single one
interpollated_df_NO.columns = ['Longitude', 'Latitude', 'NO', 'Day']
interpollated_df_SO2.columns = ['Longitude', 'Latitude', 'SO2', 'Day']
interpollated_df_NO2.columns = ['Longitude', 'Latitude', 'NO2', 'Day']
interpollated_df_NOX.columns = ['Longitude', 'Latitude', 'NOX', 'Day']
interpollated_df_CO.columns = ['Longitude', 'Latitude', 'CO', 'Day']
interpollated_df_O3.columns = ['Longitude', 'Latitude', 'O3', 'Day']
interpollated_df_PM25.columns = ['Longitude', 'Latitude', 'PM2.5', 'Day']
interpollated_df_PM10.columns = ['Longitude', 'Latitude', 'PM10', 'Day']
interpollated_df_C6H6.columns = ['Longitude', 'Latitude', 'C6H6', 'Day']
# Finally, join all the pollutants to have them in a single dataframe
from functools import reduce
final_df = reduce(lambda x,y: pd.merge(x,y, on=['Longitude' , 'Latitude', 'Day'], how='outer'), [interpollated_df_NO, interpollated_df_SO2, interpollated_df_NO2, interpollated_df_NOX, interpollated_df_CO, interpollated_df_O3, interpollated_df_PM25, interpollated_df_PM10, interpollated_df_C6H6])
# Remove the entries whose measurements are null for all pollutants
final_df = final_df.dropna(thresh=9)
# Fill the remaining null values with the mean of the previous and next rows
final_df = final_df.where(final_df.notnull(), other=(final_df.fillna(method='ffill')+final_df.fillna(method='bfill'))/2)
# The nulls that could not be computed this way are filled with the mean of the whole column
for i in final_df.columns[final_df.isnull().any(axis=0)]:
final_df[i].fillna(final_df[i].mean(), inplace=True)
# Reset the dataframe index
final_df = final_df.reset_index(drop=True)
# Assign a cell to each dataframe entry
final_df = addCellToAll(final_df, N_DIVISIONS)
# Reorder the columns for better readability
final_df = final_df[['Day', 'Cell', 'NO', 'SO2', 'NO2', 'NOX', 'CO', 'O3', 'PM2.5', 'PM10', 'C6H6']]
# -
# Check whether there are any null entries
final_df.isnull().sum(axis = 0)
print(len(final_df))
final_df.sample(15)
# ---
# The final dataset contains the following fields:
#
# * **Day**: Day (number) the measurement corresponds to
# * **Cell**: Cell the measurement corresponds to
# * **NO**: Mean of the nitrogen monoxide measurements for that day at that point
# * **SO2**: Mean of the sulphur dioxide measurements for that day at that point
# * **NO2**: Mean of the nitrogen dioxide measurements for that day at that point
# * **NOX**: Mean of the measurements of other compounds of oxygen and nitrogen for that day at that point
# * **CO**: Mean of the carbon monoxide measurements for that day at that point
# * **O3**: Mean of the ozone measurements for that day at that point
# * **PM2.5**: Mean of the particulate matter 2.5 measurements for that day at that point
# * **PM10**: Mean of the particulate matter 10 measurements for that day at that point
# * **C6H6**: Mean of the benzene measurements for that day at that point
#
# ---
# <br>
# Save the final dataframe as a JSON file and store it in S3
final_df.to_json(r'/tmp/final_airQuality_dataset.json')
s3.upload_file('/tmp/final_airQuality_dataset.json', BUCKET_NAME, 'air-data/final_airQuality_dataset.json')
| Notebooks & scripts/AirQualityNotebook.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.7.1
# language: julia
# name: julia-1.7
# ---
# # MATH50003 Numerical Analysis (2021–2022) Practice Computer-based Exam
#
#
# For each problem, replace the `# TODO` to complete the question.
# The unit tests are provided to help you test your answers.
# You have 1 hour to complete the exam, as well as 1 hour for downloading/uploading.
#
# Problems are marked A/B/C to indicate difficulty ("A" being most difficult).
# Partial credit will be awarded for reasonable attempts even if the tests
# are not passed.
#
# You may use existing code from anywhere
# but you are **REQUIRED** to cite the source if it is not part of the module material,
# ideally by including a weblink in a comment. You **MUST NOT** ask for help online or
# communicate with others within or outside the module.
# Failure to follow these rules will be considered misconduct.
#
#
#
# You should use the following packages:
using LinearAlgebra, SetRounding, Test
# **WARNING** It may be necessary to restart the kernel if issues arise. Remember to reload the packages
# when you do so.
#
# ## 1. Numbers
#
# **Problem 1.1 (C)** Complete the following function `divideby3(x)` that
# returns a tuple `a,b` such that `a` is the largest `Float64` less
# than or equal to `x/3` and `b` is the smallest `Float64` greater than or equal to `x/3`.
function divideby3(x)
# TODO: assign a,b so that a ≤ x ≤ b where b is either equal to a or the next float
a = setrounding(Float64, RoundDown) do
x/3
end
b = setrounding(Float64, RoundUp) do
x/3
end
a,b
end;
x = 0.1 # arbitary x
a,b = divideby3(x)
@test a ≤ big(x)/3 ≤ b
@test b == a || b == nextfloat(a)
# ## 2. Differentiation
#
# **Problem 2.1 (C)** Use the following off-center finite-difference approximation
# $$
# f'(x) ≈ {f(x+2h) - f(x-h) \over 3h}
# $$
# with an appropriately chosen $h$ to approximate
# $$
# f(x) = \cos(x^2)
# $$
# at $x = 0.1$ to 5 digits accuracy.
function fd(x)
# TODO: implement a finite-difference rule
# to approximate f'(x)
# for f(x) = cos(x^2)
# with step-size h chosen to get sufficient accuracy
f = x -> cos(x^2)
h = 2^-16
(f(x + 2h) - f(x - h))/3h
end;
@test abs(fd(0.1) + 2*0.1*sin(0.1^2)) ≤ 1E-5
# **Problem 2.2 (A)** Consider a 2D version of a dual number:
# $$
# a + b ϵ_x + c ϵ_y
# $$
# such that
# $$
# ϵ_x^2 = ϵ_y^2 = ϵ_x ϵ_y = 0.
# $$
# Complete the following implementation supporting `+` and `*` (and
# assuming `a,b,c` are `Float64`). Hint: you may need to work out on paper
# how to multiply `(s.a + s.b ϵ_x + s.c ϵ_y)*(t.a + t.b ϵ_x + t.c ϵ_y)` using the
# relationship above.
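# For reference, expanding that product and dropping every term that contains two ϵ's (since $ϵ_x^2 = ϵ_y^2 = ϵ_x ϵ_y = 0$) leaves
# $$s_a t_a + (s_a t_b + s_b t_a) ϵ_x + (s_a t_c + s_c t_a) ϵ_y.$$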
# +
import Base: *, +, ^
struct Dual2D
a::Float64
b::Float64
c::Float64
end
function +(s::Dual2D, t::Dual2D)
## TODO: Implement +, returning a Dual2D
Dual2D(s.a + t.a, s.b + t.b, s.c + t.c)
end
function *(c::Number, s::Dual2D)
## TODO: Implement c * Dual2D(...), returning a Dual2D
Dual2D(c*s.a, c*s.b, c*s.c)
end
function *(s::Dual2D, t::Dual2D)
## TODO: Implement Dual2D(...) * Dual2D(...), returning a Dual2D
s.a*t + Dual2D(0, s.b*t.a, s.c*t.a)
end
# +
f = function (x, y) # (x+2y^2)^3 using only * and +
z = (x + 2y * y)
z * z * z
end
x,y = 1., 2.
@test f(Dual2D(x,1.,0.), Dual2D(y,0.,1.)) == Dual2D(f(x,y), 3(x+2y^2)^2, 12y*(x+2y^2)^2)
# This has computed the gradient as f(x,y) + f_x*ϵ_x + f_y*ϵ_y
# == (x+2y^2)^3 + 3(x+2y^2)^2*ϵ_x + 12y(x+2y^2)^2*ϵ_y
# -
# ## 3. Structured Matrices
#
# **Problem 3.1 (C)** Add an implementation of `inv(::PermutationMatrix)`
# to return the inverse permutation as a `PermutationMatrix`. Hint: use
# `invperm`.
# +
import Base: getindex, size, *, inv
struct PermutationMatrix <: AbstractMatrix{Int}
p::Vector{Int} # represents the permutation whose action is v[p]
function PermutationMatrix(p::Vector)
sort(p) == 1:length(p) || error("input is not a valid permutation")
new(p)
end
end
size(P::PermutationMatrix) = (length(P.p),length(P.p))
getindex(P::PermutationMatrix, k::Int, j::Int) = P.p[k] == j ? 1 : 0
*(P::PermutationMatrix, x::AbstractVector) = x[P.p]
function inv(P::PermutationMatrix)
# TODO: return a PermutationMatrix representing the inverse permutation
p⁻¹ = invperm(P.p)
PermutationMatrix(p⁻¹)
end;
# -
P = PermutationMatrix([3,4,2,1])
@test inv(P) isa PermutationMatrix
@test P*inv(P) == I
# ## 4. Decompositions
#
# **Problem 4.1 (C)** For $𝐱 ∈ ℝ^n$ implement the reflection defined by
# $$
# \begin{align*}
# 𝐲 &:= 𝐱 + \|𝐱\| 𝐞_n \\
# 𝐰 &:= 𝐲/\|𝐲\| \\
# Q_𝐱 &:= I - 2𝐰𝐰^⊤
# \end{align*}
# $$
# in `lowerhouseholderreflection(x)`, which should return a `Matrix{Float64}`.
# You may assume that `x` is a `Vector{Float64}`.
function lowerhouseholderreflection(x)
## TODO: implement the householder reflector defined above
y = copy(x)
y[end] += norm(x)
w = y/norm(y)
I - 2w*w'
end;
x = [1.0,2,3,4]
Q = lowerhouseholderreflection(x)
@test Q*x ≈ [zeros(3); -norm(x)]
@test Q'Q ≈ I
@test Q ≈ Q'
# **Problem 4.2 (A)** Complete the function `ql(A)` that
# returns a QL decomposition, that is, `Q` is an orthogonal
# matrix and `L` is lower triangular. You may assume that `A`
# is a square `Matrix{Float64}`. Hint: use Problem 4.1 to lower triangularise.
# +
#function truelowerhouseholderreflection(x)
# y = copy(x)
# y[end] += (x[1] ≥ 0 ? 1 : -1)*norm(x)
# w = y/norm(y)
# I - 2w*w'
#end
function ql(A)
m,n = size(A)
m == n || error("not square")
## TODO Create Q and L such that Q'Q == I and L is lower triangular
L = copy(A)
Q = Matrix(1.0I, m, m)
for j = n:-1:1
Qⱼ = lowerhouseholderreflection(L[1:j, j])
L[1:j, :] = Qⱼ * L[1:j, :]
Q[:, 1:j] = Q[:, 1:j] * Qⱼ
end
Q, L
end;
# -
A = [1.0 2 3; 1 4 9; 1 1 1]
Q,L = ql(A)
@test Q'Q ≈ I
@test Q*L ≈ A
@test L ≈ tril(L) # it is acceptable to have small non-zero entries in L
# ## 5. Singular Value Decomposition
#
# **Problem 5.1 (C)** Find the best rank-4 approximation (in the $2$-norm) to
# $$
# f(x,y) = \cos(x - y)/(x+y+1)
# $$
# evaluated at an evenly spaced 100 × 100 grid on the square $[0,1]^2$.
# +
function bestrank4()
# TODO: return best rank-4 approximation
k = 4
n = 100
f = (x, y) -> cos(x - y)/(x + y + 1)
grid = range(0, 1; length=100)
A = f.(grid, grid')
U, σ, V = svd(A)
U[:, 1:k]*Diagonal(σ[1:k])*V[:, 1:k]'
end;
Fr = bestrank4();
# -
x = 9/99
y = 10/99
@test rank(Fr) == 4
@test abs(Fr[10,11] - cos(x - y)/(x + y + 1)) ≤ 2E-6
# ## 6. Differential Equations
#
# **Problem 6.1 (A)** Complete the function `airy(n)` that returns a length-$n$ `Vector{Float64}`
# $$
# \begin{bmatrix}
# u_1 \\
# ⋮ \\
# u_n
# \end{bmatrix}
# $$
# such that $u_k$ approximates the solution to the equation
# $$
# \begin{align*}
# u(0) &= 1 \\
# u(1) &= 0 \\
# u'' - x u &= 0
# \end{align*}
# $$
# at the point $x_k = (k-1)/(n-1)$ using the second order finite-difference approximation:
# $$
# u''(x_k) ≈ {u_{k-1} - 2u_k + u_{k+1} \over h^2}
# $$
# for $k = 2, …, n-1$, in $O(n)$ operations.
function airy(n)
    # TODO: return a Vector{Float64} approximating the solution to the ODE
    x = range(0, 1; length=n)
    h = step(x)
    # interior equations (k = 2,…,n-1): u_{k-1} - (2 + h^2*x_k)*u_k + u_{k+1} = 0
    diag = -(2 .+ h^2 .* x[2:end-1])
    Δ = SymTridiagonal(diag, ones(n-3))
    # move the boundary values u_1 = 1 and u_n = 0 to the right-hand side and solve in O(n)
    b = [-1.0; zeros(n-3)]
    [1.0; Δ \ b; 0.0]
end
u = airy(1000)
@test u[1] == 1
@test u[end] == 0
# this compares against the exact formula
@test abs(u[500] - 0.4757167332829094) ≤ 2E-8
| practice_midterm/practice.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:memri] *
# language: python
# name: conda-env-memri-py
# ---
# %load_ext autoreload
# %autoreload 2
# default_exp importers.email
# export
import imaplib, email, math
from pyintegrators.data.schema import Account, EmailMessage, MessageChannel
from pyintegrators.pod.client import PodClient
from pyintegrators.importers.util import *
from pyintegrators.data.basic import *
from email import policy
from email.utils import getaddresses
from pyintegrators.imports import *
from nbdev.showdoc import show_doc
# # Email importer
# This importers fetches your emails and accounts over IMAP, it uses the python built-in imap client and some convenience functions for easier usage, batching and importing to the pod. This importer requires you to login with your email address and an app password. It is tested on gmail, but should work for other IMAP-servers.
# > Note: **The recommended usage for Gmail is to enable two-factor authentication. In this case, make sure you allow [SMTP-connections](https://www.gmass.co/blog/gmail-smtp/) and set an application password (explained in the same link)**
# ## ImapClient
#
# The `EmailImporter` communicates with email providers over imap. We created a convenience class around pythons imaplib , called the `ImapClient` that lets you list your mailboxes, retriev your mails and get their content.
# +
# export
DEFAULT_GMAIL_HOST = 'imap.gmail.com'
DEFAULT_GMAIL_INBOX = '"[Gmail]/All Mail"' # Note the double quotes here
DEFAULT_PORT = 993
class IMAPClient():
def __init__(self, username, app_pw, host=DEFAULT_GMAIL_HOST, port=DEFAULT_PORT, inbox=DEFAULT_GMAIL_INBOX):
assert username is not None and app_pw is not None
self.client = imaplib.IMAP4_SSL(host, port=port)
self.client.login(username, app_pw)
self.client.select(inbox) # connect to inbox.
def list_mailboxes(self):
"""Lists all available mailboxes"""
return self.client.list()
def get_all_mail_uids(self):
"""retrieves all mail uids from the selected mailbox"""
result, data = self.client.uid('search', None, "ALL") # search and return uids instead
return data[0].split()
def get_mails(self, uids):
return [self.get_mail(uid) for uid in uids]
def get_mail(self, uid):
"""Fetches a mail given a uid, returns (raw_mail, thread_id)"""
if self.client.host == DEFAULT_GMAIL_HOST:
# Use Google's threading method, in which every thread has an ID
result, (data, _) = self.client.uid('fetch', uid, '(RFC822 X-GM-THRID)')
thread_id = data[0].decode("utf-8").split(" ")[2]
raw_email = data[1]
return (raw_email, thread_id)
else:
# Threading not yet implemented for IMAP threading
result, (data, _) = self.client.uid('fetch', uid, '(RFC822)')
raw_email = data[1]
return (raw_email, None)
def part_to_str(part):
# hide
bytes_ = part.get_payload(decode=True)
charset = part.get_content_charset('iso-8859-1')
chars = bytes_.decode(charset, 'replace')
return chars
def _get_all_parts(part):
# hide
payload = part.get_payload()
if isinstance(payload, list):
return [x for p in payload for x in _get_all_parts(p)]
else:
return [part]
# -
show_doc(IMAPClient)
show_doc(IMAPClient.list_mailboxes)
show_doc(IMAPClient.get_all_mail_uids)
show_doc(IMAPClient.get_mail)
# +
# export
# TODO: should probably become a general utility function
def get_unique_accounts(all_mails):
# hide
all_accounts = {}
for email_item in all_mails:
for edge in email_item.get_all_edges():
account = edge.traverse(email_item)
if not account.externalId in all_accounts:
all_accounts[account.externalId] = account
for email_item in all_mails:
for edge in email_item.get_all_edges():
edge.target = all_accounts[edge.target.externalId]
return list(all_accounts.values())
# TODO: should probably become a general utility function
def get_g_attr(item, name, data_type, default_value=None):
# hide
first_or_default = next((att for att in item.genericAttribute if att.name == name), None)
if first_or_default == None:
return default_value
else:
if data_type == 'int':
return first_or_default.intValue
elif data_type == 'bool':
return first_or_default.boolValue
elif data_type == 'float':
return first_or_default.floatValue
elif data_type == 'string':
return first_or_default.stringValue
elif data_type == 'datetime':
return first_or_default.stringValue
else:
raise Exception(f"datatype {data_type} is not supported")
# -
# ## EmailImporter
# +
# export
from pyintegrators.data.schema import *
from pyintegrators.imports import *
from pyintegrators.indexers.indexer import test_registration
from pyintegrators.importers.importer import ImporterBase
class EmailImporter(ImporterBase):
"""Imports emails over imap."""
def __init__(self, *args, **kwargs):
self.private = ["imap_client"]
super().__init__(*args, **kwargs)
self.imap_client = None
def get_data(self, client, indexer_run):
print('this function is a workaround (this Importer is an Indexer temporarily)')
def set_imap_client(self, importer_run):
imap_host = get_g_attr(importer_run, 'host', 'string', DEFAULT_GMAIL_HOST)
port = get_g_attr(importer_run, 'port', 'int', DEFAULT_PORT)
assert imap_host is not None and port is not None
print(f'Using, HOST: {imap_host}, PORT: {port}')
self.imap_client = IMAPClient(username=importer_run.username,
app_pw=importer_run.password,
host=imap_host,
port=993)
@staticmethod
def get_timestamp_from_message(message):
date = message["date"]
parsed_time = email.utils.parsedate(date)
dt = email.utils.parsedate_to_datetime(date)
timestamp = int(dt.timestamp() * 1000)
return timestamp
@staticmethod
def get_accounts(message, field):
addresses = getaddresses(message.get_all(field, []))
return [Account(externalId=address) for name, address in addresses]
@staticmethod
def get_content(message):
"""Extracts content from a python email message"""
maintype = message.get_content_maintype()
if maintype == 'multipart':
parts = _get_all_parts(message)
res = None
html_parts = [part_to_str(part) for part in parts if part.get_content_type() == "text/html"]
if len(html_parts) > 0:
if len(html_parts) > 1:
error_msg = "\n AND \n".join(html_parts)
print(f"WARNING: FOUND MULTIPLE HTML PARTS IN ONE MESSAGE {error_msg}")
return html_parts[0]
else:
return parts[0].get_payload()
elif maintype == 'text':
return message.get_payload()
@staticmethod
def get_attachments(message): return list(message.iter_attachments())
def create_item_from_mail(self, mail, thread_id=None):
"""Creates a schema-item from an existing mail"""
message = email.message_from_bytes(mail, policy=policy.SMTP)
message_id, subject = message["message-id"], message["subject"]
timestamp = self.get_timestamp_from_message(message)
content = self.get_content(message)
attachments = self.get_attachments(message)
email_item = EmailMessage(externalId=message_id, subject=subject, dateSent=timestamp, content=content)
for a in self.get_accounts(message, 'from'): email_item.add_edge('sender', a)
for a in self.get_accounts(message, 'to'): email_item.add_edge('receiver', a)
for a in self.get_accounts(message, 'reply-to'): email_item.add_edge('replyTo', a)
        if thread_id is not None:
email_item.add_edge('messageChannel', MessageChannel(externalId=thread_id))
return email_item
def get_mails(self, mail_ids, batch_size=5, importer_run=None, verbose=True, pod_client=None):
"""Gets mails from a list of mail uids. You can pass an importer run and podclient
to update the progress of the process"""
mails = []
n_batches = math.ceil(len(mail_ids) / batch_size)
for i, batch_ids in enumerate(batch(mail_ids, n=batch_size)):
            for mail, thread_id in self.imap_client.get_mails(batch_ids):
item = self.create_item_from_mail(mail, thread_id=thread_id)
if pod_client is not None:
if not pod_client.external_id_exists(item):
pod_client.create(item)
mails.append(item)
else:
mails.append(item)
progress = (i + 1) / n_batches * 1.0
self.update_progress(pod_client, importer_run, progress, total=len(mail_ids))
return mails
def run(self, importer_run, pod_client=None, verbose=True):
"""This is the main function of the Email importer. It runs the importer given information
provided in the importer run. if you pass a pod client it will add the new items to the graph."""
self.set_imap_client(importer_run)
self.update_run_status(pod_client, importer_run, "running")
stop_early_at = get_g_attr(importer_run, 'max_number', 'int', 10)
self.update_progress_message(pod_client, importer_run, "downloading emails", verbose=verbose)
mail_ids = self.imap_client.get_all_mail_uids()
all_mails = self.get_mails(mail_ids[:int(stop_early_at)],
importer_run=importer_run,
pod_client=pod_client)
# TODO: create better way to do this
self.update_progress_message(pod_client, importer_run, "merging duplicate items", verbose=verbose)
all_accounts = get_unique_accounts(all_mails)
self.update_progress_message(pod_client, importer_run, "creating accounts", verbose=verbose)
for item in all_accounts: pod_client.create(item)
self.update_progress_message(pod_client, importer_run, "creating threads", verbose=verbose)
for email_item in all_mails: pod_client.create_edges(email_item.get_all_edges())
print(f"Finished running {self}")
self.update_run_status(pod_client, importer_run, "done")
# -
# The email importer has the following parameters:
#
# - **username** Your email address
# - **password** Your email password. In case you're using gmail, use your application password
# - _generic attributes_
#   - **host** The URL of the host (defaults to imap.gmail.com)
#   - **port** The port of the server (defaults to 993 for gmail)
#   - **max_number** Max number of emails to download (defaults to 10 if unset)
#
# The generic attributes are attached to the importer run as `GenericAttribute` edges and read back with `get_g_attr`, as sketched below.
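# A minimal sketch (not part of the original notebook) of how these settings travel as `GenericAttribute` edges and are read back by `get_g_attr`. The credentials are placeholders, and `DEFAULT_GMAIL_HOST` is assumed to be defined by the imports above, as in `set_imap_client`:
# +
run_example = ImporterRun.from_data(progress=0, username='me@example.com', password='an-app-password')
run_example.add_edge('genericAttribute', GenericAttribute(name='host', stringValue=DEFAULT_GMAIL_HOST))
run_example.add_edge('genericAttribute', GenericAttribute(name='port', intValue=993))
print(get_g_attr(run_example, 'host', 'string'))         # the gmail default host
print(get_g_attr(run_example, 'port', 'int'))            # 993
print(get_g_attr(run_example, 'max_number', 'int', 10))  # not set, so falls back to the default of 10
# -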
show_doc(EmailImporter.get_content)
show_doc(EmailImporter.create_item_from_mail)
show_doc(EmailImporter.run)
# ## Usage
# ### Download all mails from your account
# hide
def get_importer_run(imap_user, imap_pw):
    importer_run = ImporterRun.from_data(progress=0, username=imap_user, password=imap_pw)
importer_run.add_edge('genericAttribute', GenericAttribute(name='host', stringValue=DEFAULT_GMAIL_HOST))
importer_run.add_edge('genericAttribute', GenericAttribute(name='port', intValue=993))
importer_run.add_edge('genericAttribute', GenericAttribute(name='max_number', intValue=10))
return importer_run
pod_client = PodClient()
# +
# slow
# This cell is meant to be able to test the importer locally
def get_gmail_creds():
return read_file(HOME_DIR / '.memri' / 'credentials_gmail.txt').split("\n")[:2]
imap_user, imap_pw = get_gmail_creds()
importer = EmailImporter.from_data()
importer_run = get_importer_run(imap_user, imap_pw)
importer_run.add_edge('importer', importer)
pod_client.create(importer_run)
importer.run(importer_run=importer_run, pod_client=pod_client)
assert importer_run.progress == 1.0
assert importer_run.runStatus == "done"
pod_client.delete_all()
# +
# hide
# TODO: Test incremental updates
# -
# ### Parse emails
# +
test = b"""\
Message-id: 1234\r
From: user1 <<EMAIL>>\r
To: user1 <<EMAIL>>\r
Reply-to: user1 <<EMAIL>>\r
Subject: the subject\r
Date: Mon, 04 May 2020 00:37:44 -0700\r
\r
This is content"""
email_importer = EmailImporter()
mail_item = email_importer.create_item_from_mail(test, 'message_channel_id')
assert mail_item.externalId == '1234'
assert mail_item.sender[0].externalId == '<EMAIL>'
assert mail_item.receiver[0].externalId == '<EMAIL>'
assert mail_item.replyTo[0].externalId == '<EMAIL>'
assert mail_item.subject == 'the subject'
assert mail_item.content == 'This is content'
assert mail_item.dateSent == email_importer.get_timestamp_from_message(email.message_from_bytes(test))
assert mail_item.messageChannel[0].externalId == 'message_channel_id'
# -
# ### Attachments
# +
# Test attachment parsing (basic support)
email_importer = EmailImporter()
message = email.message.EmailMessage()
message.set_content('aa')
message.add_attachment(b'bb', maintype='image', subtype='jpeg', filename='sample.jpg')
message.add_attachment(b'cc', maintype='image', subtype='jpeg', filename='sample2.jpg')
content = email_importer.get_content(message)
attachments = email_importer.get_attachments(message)
assert content == 'aa\n'
assert attachments[0].get_content() == b'bb'
assert attachments[1].get_content() == b'cc'
# +
# hide
### Calling the importer from the pod
# +
# hide
#importer
# +
# hide
# slow
# This cell is meant to be able to call the importer locally (simulating the front-end)
# pod_client = PodClient(url='http://0.0.0.0:3030')
# pod_client.create(importer_run)
# pod_client.create(importer)
# pod_client.create(host_item)
# pod_client.create(port_item)
# pod_client.create(max_number_item)
# pod_client.create_edges(importer_run.get_all_edges())
# json = {
# 'databaseKey':pod_client.database_key,
# 'payload':{
# 'uid':importer_run.uid,
# 'servicePayload': {
# 'databaseKey': pod_client.database_key,
# 'ownerKey': pod_client.owner_key
# }
# }
# }
# print(importer_run.uid)
# print(requests.post(f'http://0.0.0.0:3030/v2/{pod_client.owner_key}/run_importer',
# json=json).content)
# -
# hide
from nbdev.export import *
notebook2script()
| nbs/importers.EmailImporter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
class MapNode:
def __init__(self, key, value):
self.key = key
self.value = value
self.next = None
# +
class Map:
def __init__(self):
self.bucketsize = 5
self.buckets = [None for i in range(self.bucketsize)]
self.count = 0
def size(self):
return self.count
def search(self, key):
hc = hash(key)
index = self.getbucketindex(hc)
head = self.buckets[index]
while head is not None:
if head.key == key:
return head.value
head = head.next
return None
def getbucketindex(self, hc):
return (abs(hc) % (self.bucketsize))
def remove(self, key):
hc = hash(key)
index = self.getbucketindex(hc)
head = self.buckets[index]
previous = None
while head is not None:
if head.key == key:
if previous is None:
self.buckets[index] = head.next
else:
previous.next = head.next
self.count -= 1
return head.value
previous = head
head = head.next
return None
def rehash(self):
temp = self.buckets
self.buckets = [None for i in range(2*self.bucketsize)]
self.bucketsize = 2*self.bucketsize
self.count = 0
for head in temp:
while head is not None:
self.insert(head.key, head.value)
head = head.next
def loadFactor(self):
return self.count/self.bucketsize
def insert(self, key, value):
hc = hash(key)
index = self.getbucketindex(hc)
head = self.buckets[index]
while head is not None:
if head.key == key:
head.value = value
return
head = head.next
head = self.buckets[index]
newnode = MapNode(key, value)
newnode.next = head
self.buckets[index] = newnode
self.count += 1
loadFactor = self.count/self.bucketsize
if loadFactor >= 0.7:
self.rehash()
m = Map()
# insert four key/value pairs; the fourth insert pushes the load factor to 4/5 = 0.8 and triggers a rehash
for i in range(4):
    m.insert(str(i), i+1)
print(m.loadFactor())
# none of these 'abc...' keys were ever inserted, so every lookup prints None
for i in range(1, 10):
    print(str(i) + ':', m.search('abc' + str(i)))
# -
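# A short extra check, not in the original notebook: exercise `remove` and `size` on the map `m` built above.
# +
print(m.size())       # 4 keys were inserted above
print(m.remove('2'))  # returns the stored value 3 and unlinks the node
print(m.remove('2'))  # the key is gone now, so None
print(m.size())       # back down to 3
# -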
a = list('12')
b = list(a)
print(a)
| hashmap.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score, classification_report
import math
import numpy as np
from sklearn.preprocessing import StandardScaler
#from yellowbrick.regressor import ResidualsPlot
# -
# ## Load the data
# +
df = pd.read_csv("data/apartamentos_completo.csv")
df = df.drop(['Unnamed: 0'],axis=1)
df.reset_index(inplace=True,drop=True)
print(df.shape)
df.head()
# -
df.dtypes
# ### Drop listings without latitude, longitude, or usable area
df.dropna(subset=['lat', 'long','area_util','bairro'],inplace=True)
df.shape
# ### Number of null values per column
df.isna().sum().plot.bar()
df.isna().sum()
# ### Remove sale-price outliers
print(df.shape)
df.valor_venda.describe()
# ### 6 billion seems like a lot for a single property
# Let's keep only listings priced below 5 million, as the filter below does
df = df.loc[df.valor_venda < 5000000]
print(df.shape)
df.valor_venda.describe()
# ### Remove outliers da area util
df.area_util = df.area_util.str.replace('m²','')
df.area_util = pd.to_numeric(df.area_util)
print(df.shape)
df.area_util.describe()
# ### 143 thousand m² seems like a lot for an apartment
# Let's keep only apartments larger than 18 m² and smaller than 1000 m²
df = df.loc[(df.area_util < 1000) & (df.area_util > 18)]
print(df.shape)
df.area_util.describe()
df.idade_anuncio = df.idade_anuncio.str.replace("\r\n Publicado desde ontem\r\n","1")
df.idade_anuncio = df.idade_anuncio.str.replace("\r\n Publicado hoje\r\n","0")
df.idade_anuncio.value_counts()
# ### Check the neighborhood distribution
bairros_goiania = ['Setor Bueno, Goiânia', 'Setor Marista, Goiânia', 'Jardim Goiás, Goiânia', 'Setor Oeste, Goiânia',
                   'Parque Amazônia, Goiânia', 'Jardim América, Goiânia']
# drop the Goiânia neighborhoods while the ", " separator is still present, then strip the separator
df = df[~df['bairro'].isin(bairros_goiania)]
df.bairro = df.bairro.str.replace(", ", "")
df.bairro.value_counts()
df.bairro.value_counts()[:25].plot.bar()
# ### Remove neighborhoods with fewer listings (keep only the top 25)
df.bairro.value_counts()[25:]
bairros_reject = df.bairro.value_counts()[25:].index.values
bairros_reject
df = df[~df['bairro'].isin(bairros_reject)]
print(df.shape)
df.head()
df.bairro.value_counts()
df.quartos = df.quartos.fillna(df.quartos.median())
#df_regression = df[['bairro', 'area_util', 'quartos','valor_venda','lat','long',]]
#df_regression = df[['area_util', 'quartos','valor_venda','lat','long','idade_anuncio']]
df_regression = df[['area_util', 'quartos','valor_venda','lat','long']]
print(df_regression.shape)
df_regression.head()
df_regression.isna().sum()
df_regression.head()
# +
#one_hot = pd.get_dummies(df_regression['bairro'], prefix="bairro")
#df_regression = df_regression.join(one_hot)
#df_regression = df_regression.drop(['bairro'], axis=1)
#df_regression.shape
#df_regression.head()
# -
df_regression = df_regression.apply(pd.to_numeric)
#, errors='coerce'
df_regression.dtypes
# +
#zscore
#mean = df_regression.mean(axis=0)
#std = df_regression.std(axis=0)
# zscore normalization
#df_regression = ( df_regression - mean ) / std
# -
X = df_regression.drop(['valor_venda'],axis=1)
y = df_regression['valor_venda']
X.shape, y.shape
# +
#X_columns = X.columns.values
#X = pd.DataFrame(StandardScaler().fit_transform(X),columns=X_columns)
# -
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
reg = RandomForestRegressor(n_estimators=100)
reg
reg.fit(X_train, y_train)
train_predict = reg.predict(X_train)
test_predict = reg.predict(X_test)
# +
print("Mean squared error: %.2f"
% mean_squared_error(y_test, test_predict))
rmse = math.sqrt(mean_squared_error(y_test, test_predict))
print("Root Mean squared error: %.2f" % rmse)
print('R2 score: %.2f' % r2_score(y_test, test_predict))
# +
plt.rcParams['figure.figsize'] = [10, 5]
importances = reg.feature_importances_
std = np.std([tree.feature_importances_ for tree in reg.estimators_],
axis=0)
indices = np.argsort(importances)[::-1]
#Print the feature ranking
print("Feature ranking:")
for f in range(X_train.shape[1]):
print("%d. feature %s (%f)" % (f + 1, X_train.columns[indices[f]], importances[indices[f]]))
# Plot the feature importances of the forest
plt.figure()
plt.title("Feature importances")
plt.bar(range(X_train.shape[1]), importances[indices],
color="r", yerr=std[indices], align="center")
plt.xticks(range(X_train.shape[1]), X_train.columns[indices],rotation=90)
plt.xlim([-1, X_train.shape[1]])
plt.show()
# +
#reg2 = RandomForestRegressor(n_estimators=100)
#visualizer = ResidualsPlot(reg2)
#visualizer.fit(X_train, y_train) # Fit the training data to the model
#visualizer.score(X_test, y_test) # Evaluate the model on the test data
#visualizer.poof() # Draw/show/poof the data
# -
# ### Scatter plots of valor_venda vs. area_util
# +
ax1 = df.plot.scatter(x='area_util',
y='valor_venda',
c='DarkBlue')
# -
df_zoom = df.loc[df.area_util < 2001]
ax2 = df_zoom.plot.scatter(x='area_util',
y='valor_venda',
c='DarkBlue')
df_zoom = df.loc[df.area_util < 751]
ax2 = df_zoom.plot.scatter(x='area_util',
y='valor_venda',
c='DarkBlue')
df_zoom = df.loc[df.area_util < 301]
ax2 = df_zoom.plot.scatter(x='area_util',
y='valor_venda',
c='DarkBlue')
df_zoom = df.loc[(df.area_util < 300) & (df.valor_venda < 4000000)]
print(df_zoom.shape)
ax2 = df_zoom.plot.scatter(x='area_util',
y='valor_venda',
c='DarkBlue')
df_zoom = df.loc[(df.area_util < 100) & (df.valor_venda < 1000000)]
print(df_zoom.shape)
ax2 = df_zoom.plot.scatter(x='area_util',
y='valor_venda',
c='DarkBlue')
# +
lat = -15.752353
long = -47.8830672
area = 87
quartos = 3.0
apto_411_n = [[area, quartos, lat, long]]  # same column order as X: area_util, quartos, lat, long
previsao = reg.predict(apto_411_n)[0]
correcao = previsao * 0.21  # ad-hoc 21% downward adjustment applied to the raw prediction
print("predicted price: ", previsao - correcao)
# +
lat = -15.738213539123535
long = -47.897647857666015
area = 89
quartos = 3.0
apto_316_n = [[area,quartos,lat,long]]
previsao = reg.predict(apto_316_n)[0]
correcao = previsao * 0.21
print("previsao de preço: ", previsao - correcao)
# -
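# As an optional variant (not in the original analysis), the same prediction can be made from a one-row DataFrame that reuses `X.columns`, so the feature order cannot drift from what the model was trained on:
# +
# one-row frame with the exact training columns: area_util, quartos, lat, long
apto_316_df = pd.DataFrame([[89, 3.0, -15.738213539123535, -47.897647857666015]],
                           columns=X.columns)
previsao_df = reg.predict(apto_316_df)[0]
print("predicted price (DataFrame input):", previsao_df - previsao_df * 0.21)
# -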
| regressao apartamentos.ipynb |