# Customer Churn Prediction with XGBoost
_**Using Gradient Boosted Trees to Predict Mobile Customer Departure**_
---
## Contents
1. [Background](#Background)
1. [Setup](#Setup)
1. [Data](#Data)
1. [Train](#Train)
1. [Host](#Host)
1. [Evaluate](#Evaluate)
1. [Relative cost of errors](#Relative-cost-of-errors)
1. [Extensions](#Extensions)
---
## Background
_This notebook has been adapted from an [AWS blog post](https://aws.amazon.com/blogs/ai/predicting-customer-churn-with-amazon-machine-learning/)_
Losing customers is costly for any business. Identifying unhappy customers early on gives you a chance to offer them incentives to stay. This notebook describes using machine learning (ML) for the automated identification of unhappy customers, also known as customer churn prediction. ML models rarely give perfect predictions though, so this notebook is also about how to incorporate the relative costs of prediction mistakes when determining the financial outcome of using ML.
We use an example of churn that is familiar to all of us–leaving a mobile phone operator. Seems like I can always find fault with my provider du jour! And if my provider knows that I’m thinking of leaving, it can offer timely incentives–I can always use a phone upgrade or perhaps have a new feature activated–and I might just stick around. Incentives are often much more cost effective than losing and reacquiring a customer.
---
## Setup
_This notebook was created and tested on an ml.m4.xlarge notebook instance._
Let's start by specifying:
- The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.
- The IAM role ARN used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the `get_execution_role()` call with the appropriate full IAM role ARN string(s).
```
import sagemaker
sess = sagemaker.Session()
bucket = sess.default_bucket()
prefix = "sagemaker/DEMO-xgboost-churn"
# Define IAM role
import boto3
import re
from sagemaker import get_execution_role
role = get_execution_role()
```
Next, we'll import the Python libraries we'll need for the remainder of the exercise.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import io
import os
import sys
import time
import json
from IPython.display import display
from time import strftime, gmtime
from sagemaker.inputs import TrainingInput
from sagemaker.serializers import CSVSerializer
```
---
## Data
Mobile operators have historical records on which customers ultimately ended up churning and which continued using the service. We can use this historical information to construct an ML model of one mobile operator’s churn using a process called training. After training the model, we can pass the profile information of an arbitrary customer (the same profile information that we used to train the model) to the model, and have the model predict whether this customer is going to churn. Of course, we expect the model to make mistakes–after all, predicting the future is tricky business! But I’ll also show how to deal with prediction errors.
The dataset we use is publicly available and was mentioned in the book [Discovering Knowledge in Data](https://www.amazon.com/dp/0470908742/) by Daniel T. Larose. It is attributed by the author to the University of California Irvine Repository of Machine Learning Datasets. Let's download and read that dataset in now:
```
!wget https://s3.amazonaws.com/sagemaker-workshop/2018_04_southerndatascienceconference/DKD2e_data_sets.zip
!unzip -o DKD2e_data_sets.zip
churn = pd.read_csv('./Data sets/churn.txt')
pd.set_option('display.max_columns', 500)
churn
```
By modern standards, it’s a relatively small dataset, with only 5,000 records, where each record uses 21 attributes to describe the profile of a customer of an unknown US mobile operator. The attributes are:
- `State`: the US state in which the customer resides, indicated by a two-letter abbreviation; for example, OH or NJ
- `Account Length`: the number of days that this account has been active
- `Area Code`: the three-digit area code of the corresponding customer’s phone number
- `Phone`: the remaining seven-digit phone number
- `Int’l Plan`: whether the customer has an international calling plan: yes/no
- `VMail Plan`: whether the customer has a voice mail feature: yes/no
- `VMail Message`: presumably the average number of voice mail messages per month
- `Day Mins`: the total number of calling minutes used during the day
- `Day Calls`: the total number of calls placed during the day
- `Day Charge`: the billed cost of daytime calls
- `Eve Mins`, `Eve Calls`, `Eve Charge`: the billed cost for calls placed during the evening
- `Night Mins`, `Night Calls`, `Night Charge`: the billed cost for calls placed during nighttime
- `Intl Mins`, `Intl Calls`, `Intl Charge`: the billed cost for international calls
- `CustServ Calls`: the number of calls placed to Customer Service
- `Churn?`: whether the customer left the service: true/false
The last attribute, `Churn?`, is known as the target attribute–the attribute that we want the ML model to predict. Because the target attribute is binary, our model will be performing binary prediction, also known as binary classification.
Let's begin exploring the data:
```
# Frequency tables for each categorical feature
for column in churn.select_dtypes(include=["object"]).columns:
display(pd.crosstab(index=churn[column], columns="% observations", normalize="columns"))
# Histograms for each numeric feature
display(churn.describe())
%matplotlib inline
hist = churn.hist(bins=30, sharey=True, figsize=(10, 10))
```
We can see immediately that:
- `State` appears to be quite evenly distributed
- `Phone` takes on too many unique values to be of any practical use. It's possible parsing out the prefix could have some value, but without more context on how these are allocated, we should avoid using it.
- Most of the numeric features are surprisingly nicely distributed, with many showing bell-like gaussianity; `VMail Message` is a notable exception (and `Area Code` shows up as a feature we should convert to non-numeric).
```
churn = churn.drop("Phone", axis=1)
churn["Area Code"] = churn["Area Code"].astype(object)
```
Next let's look at the relationship between each of the features and our target variable.
```
for column in churn.select_dtypes(include=["object"]).columns:
if column != "Churn?":
display(pd.crosstab(index=churn[column], columns=churn["Churn?"], normalize="columns"))
for column in churn.select_dtypes(exclude=["object"]).columns:
print(column)
hist = churn[[column, "Churn?"]].hist(by="Churn?", bins=30)
plt.show()
display(churn.corr())
pd.plotting.scatter_matrix(churn, figsize=(12, 12))
plt.show()
```
We see several features that essentially have 100% correlation with one another. Including these feature pairs in some machine learning algorithms can create catastrophic problems, while in others it will only introduce minor redundancy and bias. Let's remove one feature from each of the highly correlated pairs: `Day Charge` from the pair with `Day Mins`, `Eve Charge` from the pair with `Eve Mins`, `Night Charge` from the pair with `Night Mins`, and `Intl Charge` from the pair with `Intl Mins`:
```
churn = churn.drop(["Day Charge", "Eve Charge", "Night Charge", "Intl Charge"], axis=1)
```
Now that we've cleaned up our dataset, let's determine which algorithm to use. As mentioned above, there appear to be some variables where both high and low (but not intermediate) values are predictive of churn. In order to accommodate this in an algorithm like linear regression, we'd need to generate polynomial (or bucketed) terms. Instead, let's attempt to model this problem using gradient boosted trees. Amazon SageMaker provides an XGBoost container that we can use to train in a managed, distributed setting, and then host as a real-time prediction endpoint. XGBoost uses gradient boosted trees which naturally account for non-linear relationships between features and the target variable, as well as accommodating complex interactions between features.
Amazon SageMaker XGBoost can train on data in either a CSV or LibSVM format. For this example, we'll stick with CSV. It should:
- Have the target variable in the first column
- Not have a header row
But first, let's convert our categorical features into numeric features.
```
model_data = pd.get_dummies(churn)
model_data = pd.concat(
[model_data["Churn?_True."], model_data.drop(["Churn?_False.", "Churn?_True."], axis=1)], axis=1
)
```
And now let's split the data into training, validation, and test sets. This will help prevent us from overfitting the model, and allow us to test the model's accuracy on data it hasn't already seen.
```
train_data, validation_data, test_data = np.split(
model_data.sample(frac=1, random_state=1729),
[int(0.7 * len(model_data)), int(0.9 * len(model_data))],
)
train_data.to_csv("train.csv", header=False, index=False)
validation_data.to_csv("validation.csv", header=False, index=False)
```
Now we'll upload these files to S3.
```
boto3.Session().resource("s3").Bucket(bucket).Object(
os.path.join(prefix, "train/train.csv")
).upload_file("train.csv")
boto3.Session().resource("s3").Bucket(bucket).Object(
os.path.join(prefix, "validation/validation.csv")
).upload_file("validation.csv")
```
---
## Train
Moving onto training, first we'll need to specify the locations of the XGBoost algorithm containers.
```
container = sagemaker.image_uris.retrieve("xgboost", boto3.Session().region_name, "1")
display(container)
```
Then, because we're training with the CSV file format, we'll create `TrainingInput`s that our training function can use as a pointer to the files in S3.
```
s3_input_train = TrainingInput(
s3_data="s3://{}/{}/train".format(bucket, prefix), content_type="csv"
)
s3_input_validation = TrainingInput(
s3_data="s3://{}/{}/validation/".format(bucket, prefix), content_type="csv"
)
```
Now, we can specify a few parameters like what type of training instances we'd like to use and how many, as well as our XGBoost hyperparameters. A few key hyperparameters are:
- `max_depth` controls how deep each tree within the algorithm can be built. Deeper trees can lead to better fit, but are more computationally expensive and can lead to overfitting. There is typically some trade-off in model performance that needs to be explored between a large number of shallow trees and a smaller number of deeper trees.
- `subsample` controls sampling of the training data. This technique can help reduce overfitting, but setting it too low can also starve the model of data.
- `num_round` controls the number of boosting rounds. Each subsequent round trains a new model on the residuals of the previous iterations. Again, more rounds should produce a better fit on the training data, but can be computationally expensive or lead to overfitting.
- `eta` controls how aggressive each round of boosting is. Smaller values lead to more conservative boosting.
- `gamma` controls how aggressively trees are grown. Larger values lead to more conservative models.
More detail on XGBoost's hyperparameters can be found on their GitHub [page](https://github.com/dmlc/xgboost/blob/master/doc/parameter.md).
```
sess = sagemaker.Session()
xgb = sagemaker.estimator.Estimator(
container,
role,
instance_count=1,
instance_type="ml.m5.xlarge",
use_spot_instances=True,
max_run= 1000,
max_wait=3200,
output_path="s3://{}/{}/output".format(bucket, prefix),
sagemaker_session=sess,
)
xgb.set_hyperparameters(
max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
silent=0,
objective="binary:logistic",
num_round=100,
)
xgb.fit({"train": s3_input_train, "validation": s3_input_validation})
```
---
## Host
Now that we've trained the algorithm, let's create a model and deploy it to a hosted endpoint.
```
xgb_predictor = xgb.deploy(
initial_instance_count=1, instance_type="ml.m5.xlarge", serializer=CSVSerializer()
)
```
---
## Evaluate
Now that we have a hosted endpoint running, we can make real-time predictions from our model very easily, simply by making an HTTP POST request. But first, we'll need to set up serializers and deserializers for passing our `test_data` NumPy arrays to the model behind the endpoint.
Now, we'll use a simple function to:
1. Loop over our test dataset
1. Split it into mini-batches of rows
1. Convert those mini-batches to CSV string payloads
1. Retrieve mini-batch predictions by invoking the XGBoost endpoint
1. Collect predictions and convert from the CSV output our model provides into a NumPy array
```
def predict(data, rows=500):
split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
predictions = ""
for array in split_array:
predictions = ",".join([predictions, xgb_predictor.predict(array).decode("utf-8")])
return np.fromstring(predictions[1:], sep=",")
predictions = predict(test_data.to_numpy()[:, 1:])
```
There are many ways to compare the performance of a machine learning model, but let's start by simply comparing actual to predicted values. In this case, we're simply predicting whether the customer churned (`1`) or not (`0`), which produces a simple confusion matrix.
```
pd.crosstab(
index=test_data.iloc[:, 0],
columns=np.round(predictions),
rownames=["actual"],
colnames=["predictions"],
)
```
_Note, due to randomized elements of the algorithm, your results may differ slightly._
Of the 48 churners, we've correctly predicted 39 of them (true positives). And, we incorrectly predicted 4 customers would churn who then ended up not doing so (false positives). There are also 9 customers who ended up churning that we predicted would not (false negatives).
An important point here is that because of the `np.round()` function above we are using a simple threshold (or cutoff) of 0.5. Our predictions from `xgboost` come out as continuous values between 0 and 1 and we force them into the binary classes that we began with. However, because a customer that churns is expected to cost the company more than proactively trying to retain a customer who we think might churn, we should consider adjusting this cutoff. That will almost certainly increase the number of false positives, but it can also be expected to increase the number of true positives and reduce the number of false negatives.
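As a quick aside, here is a small sketch (ours, not part of the original notebook) confirming that rounding the scores is equivalent to applying an explicit 0.5 cutoff:
```
# Illustrative check: np.round() and an explicit 0.5 cutoff produce the same labels
binary_via_round = np.round(predictions)
binary_via_cutoff = np.where(predictions > 0.5, 1, 0)
print((binary_via_round == binary_via_cutoff).all())
```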
To get a rough intuition here, let's look at the continuous values of our predictions.
```
plt.hist(predictions)
plt.show()
```
The continuous valued predictions coming from our model tend to skew toward 0 or 1, but there is sufficient mass between 0.1 and 0.9 that adjusting the cutoff should indeed shift a number of customers' predictions. For example...
```
pd.crosstab(index=test_data.iloc[:, 0], columns=np.where(predictions > 0.3, 1, 0))
```
We can see that changing the cutoff from 0.5 to 0.3 results in 1 more true positive, 3 more false positives, and 1 fewer false negative. The numbers are small overall here, but that's 6-10% of customers whose predictions shift because of the change to the cutoff. Was this the right decision? We may end up retaining 3 extra customers, but we also unnecessarily incentivized 5 more customers who would have stayed. Determining optimal cutoffs is a key step in properly applying machine learning in a real-world setting. Let's discuss this more broadly and then apply a specific, hypothetical solution for our current problem.
### Relative cost of errors
Any practical binary classification problem is likely to produce a similarly sensitive cutoff. That by itself isn’t a problem. After all, if the scores for two classes are really easy to separate, the problem probably isn’t very hard to begin with and might even be solvable with simple rules instead of ML.
More important, if I put an ML model into production, there are costs associated with the model erroneously assigning false positives and false negatives. I also need to look at similar costs associated with correct predictions of true positives and true negatives. Because the choice of the cutoff affects all four of these statistics, I need to consider the relative costs to the business for each of these four outcomes for each prediction.
#### Assigning costs
What are the costs for our problem of mobile operator churn? The costs, of course, depend on the specific actions that the business takes. Let's make some assumptions here.
First, assign the true negatives the cost of \$0. Our model essentially correctly identified a happy customer in this case, and we don’t need to do anything.
False negatives are the most problematic, because they incorrectly predict that a churning customer will stay. We lose the customer and will have to pay all the costs of acquiring a replacement customer, including foregone revenue, advertising costs, administrative costs, point of sale costs, and likely a phone hardware subsidy. A quick search on the Internet reveals that such costs typically run in the hundreds of dollars so, for the purposes of this example, let's assume \$500. This is the cost of false negatives.
Finally, for customers that our model identifies as churning, let's assume a retention incentive in the amount of \$100. If my provider offered me such a concession, I’d certainly think twice before leaving. This is the cost of both true positive and false positive outcomes. In the case of false positives (the customer is happy, but the model mistakenly predicted churn), we will “waste” the \$100 concession. We probably could have spent that \$100 more effectively, but it's possible we increased the loyalty of an already loyal customer, so that’s not so bad.
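To make these assumptions concrete, here is a small labeled sketch (the row and column labels are ours) of the cost matrix that the simulation below multiplies element-wise against the confusion matrix, with rows as actual outcomes and columns as predictions:
```
# Illustrative, labeled view of the cost assumptions (rows = actual, columns = predicted)
costs_per_outcome = pd.DataFrame(
    [[0, 100],     # actual non-churner: true negative $0, false positive $100
     [500, 100]],  # actual churner: false negative $500, true positive $100
    index=["actual: stayed", "actual: churned"],
    columns=["predicted: stay", "predicted: churn"],
)
display(costs_per_outcome)
```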
#### Finding the optimal cutoff
It’s clear that false negatives are substantially more costly than false positives. Instead of optimizing for error based on the number of customers, we should be minimizing a cost function that looks like this:
```txt
$500 * FN(C) + $0 * TN(C) + $100 * FP(C) + $100 * TP(C)
```
FN(C) means that the false negative percentage is a function of the cutoff, C, and similar for TN, FP, and TP. We need to find the cutoff, C, where the result of the expression is smallest.
A straightforward way to do this is to simply run a simulation over a large number of possible cutoffs. We test 99 cutoff values (0.01 through 0.99) in the for loop below.
```
cutoffs = np.arange(0.01, 1, 0.01)
costs = []
for c in cutoffs:
costs.append(
np.sum(
np.sum(
np.array([[0, 100], [500, 100]])
* pd.crosstab(index=test_data.iloc[:, 0], columns=np.where(predictions > c, 1, 0))
)
)
)
costs = np.array(costs)
plt.plot(cutoffs, costs)
plt.show()
print(
"Cost is minimized near a cutoff of:",
cutoffs[np.argmin(costs)],
"for a cost of:",
np.min(costs),
)
```
The above chart shows how picking a threshold too low results in costs skyrocketing as all customers are given a retention incentive. Meanwhile, setting the threshold too high results in too many lost customers, which ultimately grows to be nearly as costly. The overall cost can be minimized at \$8400 by setting the cutoff to 0.46, which is substantially better than the \$20k+ I would expect to lose by not taking any action.
---
## Extensions
This notebook showcased how to build a model that predicts whether a customer is likely to churn, and then how to optimally set a threshold that accounts for the cost of true positives, false positives, and false negatives. There are several ways to extend it, including:
- Some customers who receive retention incentives will still churn. Including a probability of churning despite receiving an incentive in our cost function would provide a better ROI on our retention programs.
- Customers who switch to a lower-priced plan or who deactivate a paid feature represent different kinds of churn that could be modeled separately.
- Modeling the evolution of customer behavior. If usage is dropping and the number of calls placed to Customer Service is increasing, you are more likely to experience churn than if the trend is the opposite. A customer profile should incorporate behavior trends.
- Actual training data and monetary cost assignments could be more complex.
- Multiple models for each type of churn could be needed.
Regardless of the additional complexity, the principles described in this notebook are likely to apply.
### (Optional) Clean-up
If you're ready to be done with this notebook, please run the cell below. This will remove the hosted endpoint you created and avoid any charges from a stray instance being left on.
```
xgb_predictor.delete_endpoint()
```
This notebook is part of the $\omega radlib$ documentation: https://docs.wradlib.org.
Copyright (c) $\omega radlib$ developers.
Distributed under the MIT License. See LICENSE.txt for more info.
# Plot data via xarray
```
import numpy as np
import matplotlib.pyplot as pl
import wradlib
import warnings
warnings.filterwarnings('ignore')
try:
get_ipython().magic("matplotlib inline")
except:
pl.ion()
```
## Read a polar data set from the German Weather Service
```
filename = wradlib.util.get_wradlib_data_file('dx/raa00-dx_10908-0806021735-fbg---bin.gz')
print(filename)
img, meta = wradlib.io.read_dx(filename)
```
Inspect the data set a little
```
print("Shape of polar array: %r\n" % (img.shape,))
print("Some meta data of the DX file:")
print("\tdatetime: %r" % (meta["datetime"],))
print("\tRadar ID: %s" % (meta["radarid"],))
```
## Transform to xarray DataArray
```
r = np.arange(img.shape[1], dtype=np.float)
r += (r[1] - r[0]) / 2.
az = meta['azim']
az += (az[1] - az[0]) / 2.
da = wradlib.georef.create_xarray_dataarray(img, r=r, phi=az, theta=meta['elev'])
da = wradlib.georef.georeference_dataset(da)
da
```
## The simplest way to plot this dataset
Use the `wradlib` xarray DataArray Accessor
```
pm = da.wradlib.plot_ppi()
txt = pl.title('Simple PPI')
pm = da.wradlib.contourf()
txt = pl.title('Simple Filled Contour PPI')
pm = da.wradlib.contour()
txt = pl.title('Simple Contour PPI')
da = wradlib.georef.create_xarray_dataarray(img, r=r, phi=az, theta=meta['elev'])
da = wradlib.georef.georeference_dataset(da)
pm = da.wradlib.contour(proj='cg')
txt = pl.title('Simple CG Contour PPI', y=1.05)
```
## Create DataArray with proper azimuth/range dimensions
Using ranges in meters and correct site-location in (lon, lat, alt)
```
r = np.arange(img.shape[1], dtype=np.float) * 1000.
r += (r[1] - r[0]) / 2.
az = meta['azim']
az += (az[1] - az[0]) / 2.
da = wradlib.georef.create_xarray_dataarray(img, phi=az, r=r, theta=meta['elev'],
site=(10, 45, 0))
da = wradlib.georef.georeference_dataset(da)
pm = da.wradlib.plot_ppi()
txt = pl.title('Simple PPI')
da = wradlib.georef.create_xarray_dataarray(img, phi=az, r=r, theta=meta['elev'],
site=(10, 45, 0), rf=1e3)
da = wradlib.georef.georeference_dataset(da)
pm = da.wradlib.plot_ppi()
txt = pl.title('Simple PPI with adjusted range axis')
da = wradlib.georef.create_xarray_dataarray(img, phi=az, r=r, theta=meta['elev'],
proj=wradlib.georef.get_default_projection(),
site=(10, 45, 0))
da = wradlib.georef.georeference_dataset(da)
pm = da.wradlib.plot_ppi()
txt = pl.title('Simple projected PPI (WGS84)')
```
## Plotting just one sector
For this purpose, we slice azimuth/range...
```
da = wradlib.georef.create_xarray_dataarray(img, phi=az, r=r, theta=meta['elev'], rf=1e3)
da = wradlib.georef.georeference_dataset(da)
da_sel = da.sel(azimuth=slice(200,250),
range=slice(40, 80))
pm = da_sel.wradlib.plot_ppi()
txt = pl.title('Sector PPI')
```
## Adding a crosshair to the PPI
```
# We introduce a site offset...
site = (10., 45., 0)
r = np.arange(img.shape[1], dtype=np.float)
r += (r[1] - r[0]) / 2.
r *= 1000
az = np.arange(img.shape[0], dtype=np.float)
az += (az[1] - az[0]) / 2.
da = wradlib.georef.create_xarray_dataarray(img, phi=az, r=r, theta=meta['elev'],
site=(10, 45, 0))
da = wradlib.georef.georeference_dataset(da)
da.wradlib.plot_ppi()
# ... plot a crosshair over our data...
wradlib.vis.plot_ppi_crosshair(site=site, ranges=[50e3, 100e3, 128e3],
angles=[0, 90, 180, 270],
line=dict(color='white'),
circle={'edgecolor': 'white'})
pl.title('Offset and Custom Crosshair')
pl.axis("tight")
pl.axes().set_aspect('equal')
```
## Placing the polar data in a projected Cartesian reference system
Using the `proj` keyword we tell the function to:
- interpret the site coordinates as longitude/latitude
- reproject the coordinates to the given projection (here: dwd-radolan composite coordinate system)
```
site=(10., 45., 0)
r = np.arange(img.shape[1], dtype=np.float)
r += (r[1] - r[0]) / 2.
r *= 1000
az = np.arange(img.shape[0], dtype=np.float)
az += (az[1] - az[0]) / 2.
proj_rad = wradlib.georef.create_osr("dwd-radolan")
da = wradlib.georef.create_xarray_dataarray(img, phi=az, r=r, theta=meta['elev'],
site=site)
da = wradlib.georef.georeference_dataset(da, proj=proj_rad)
pm = da.wradlib.plot_ppi()
ax = pl.gca()
# Now the crosshair ranges must be given in meters
wradlib.vis.plot_ppi_crosshair(site=site,
ax=ax,
ranges=[40e3, 80e3, 128e3],
line=dict(color='white'),
circle={'edgecolor':'white'},
proj=proj_rad
)
pl.title('Georeferenced/Projected PPI')
pl.axis("tight")
pl.axes().set_aspect('equal')
```
## Some side effects of georeferencing
Transplanting the radar virtually moves it away from the central meridian of the projection (which is 10 degrees east). Due north now does not point straight upwards on the map.
The crosshair shows this: because the lines should actually become curved, they are implemented as piecewise linear curves with 10 vertices each. The same is true for the range circles, but with more vertices, of course.
```
site=(45., 7., 0.)
r = np.arange(img.shape[1], dtype=np.float) * 1000.
r += (r[1] - r[0]) / 2.
az = np.arange(img.shape[0], dtype=np.float)
az += (az[1] - az[0]) / 2.
da = wradlib.georef.create_xarray_dataarray(img, phi=az, r=r, theta=meta['elev'], site=site)
da = wradlib.georef.georeference_dataset(da, proj=proj_rad)
pm = da.wradlib.plot_ppi()
ax = wradlib.vis.plot_ppi_crosshair(site=site,
ranges=[64e3, 128e3],
line=dict(color='red'),
circle={'edgecolor': 'red'},
proj=proj_rad
)
txt = pl.title('Projection Side Effects')
```
## Simple Plot on Mercator-Map using cartopy
```
import cartopy.crs as ccrs
ccrs
site=(7, 45, 0.)
map_proj = ccrs.Mercator(central_longitude=site[1])
r = np.arange(img.shape[1], dtype=np.float) * 1000.
r += (r[1] - r[0]) / 2.
az = np.arange(img.shape[0], dtype=np.float)
az += (az[1] - az[0]) / 2.
da = wradlib.georef.create_xarray_dataarray(img, phi=az, r=r, theta=meta['elev'],
site=site)
da = wradlib.georef.georeference_dataset(da)
fig = pl.figure(figsize=(10, 10))
pm = da.wradlib.plot_ppi(proj=map_proj, fig=fig)
ax = pl.gca()
ax.gridlines(draw_labels=True)
```
## More decorations and annotations
You can annotate these plots by using standard matplotlib methods.
```
r = np.arange(img.shape[1], dtype=np.float)
r += (r[1] - r[0]) / 2.
az = np.arange(img.shape[0], dtype=np.float)
az += (az[1] - az[0]) / 2.
da = wradlib.georef.create_xarray_dataarray(img, phi=az, r=r, theta=meta['elev'])
da = wradlib.georef.georeference_dataset(da)
pm = da.wradlib.plot_ppi()
ax = pl.gca()
ylabel = ax.set_xlabel('easting [km]')
ylabel = ax.set_ylabel('northing [km]')
title = ax.set_title('PPI manipulations/colorbar')
# you can now also zoom - either programmatically or interactively
xlim = ax.set_xlim(-80, -20)
ylim = ax.set_ylim(-80, 0)
# as the function returns the axes- and 'mappable'-objects colorbar needs, adding a colorbar is easy
cb = pl.colorbar(pm, ax=ax)
```
```
from google.colab import files
files.upload()
import os
os.system('unzip rio.zip')
```
#### Importing the meteorological data file for Rio de Janeiro.
```
import pandas as pd
import numpy as np
weather = pd.read_csv('rio.csv')
```
#### Creating the `weather` dataframe from the imported rio.csv file
```
weather.head()
```
### Renaming the columns:
```
weather.rename(columns={'City':'Station Name','gust':'Wind Gust','prov':'State', 'dmax':'Max Dew Point', 'dmin':'Min Dew Point', 'dewp':'Dew Point', 'yr':'Year', 'mo':'Month', 'da':'Day', 'hr':'Hour', 'date':'Date' }, inplace=True)
weather.head()
weather.shape
```
#### With the code below, we show the number of missing values in each variable.
```
print(weather.isnull().sum())
```
#### Now we will drop the columns with more than 30% missing values (i.e., keep only columns that are at least 70% complete)
```
weathermod = weather.dropna(thresh=0.7*len(weather), axis=1)
weathermod.shape
```
#### Our dataframe now has the following null entries:
```
print(weathermod.isnull().sum())
```
#### Now dropping the rows that have missing values in any column other than `Wind Speed`, `Wind Gust`, and `Min Humidity`.
```
weathermod1 = weathermod.dropna(subset=list(filter(lambda x: (x != 'Wind Speed' and x != 'Wind Gust' and x != 'Min Humidity'), weathermod.columns)), axis=0)
print(weathermod1.isnull().sum())
weathermod1.shape
weathermod1.dtypes
```
#### The variables `Min Humidity`, `Wind Speed`, and `Wind Gust` contain an amount of missing values that is amenable to imputation. We will impute the data using the median of each distribution, given their skewed (asymmetric) shape.
```
from matplotlib import pyplot as plt
import numpy as np
from scipy import stats
import random
weather2 = weathermod1
weather2['Min Humidity'].fillna((weather2['Min Humidity'].median()), inplace=True)
print(weather2.isnull().sum())
```
#### Histogram of the `Min Humidity` variable before imputation:
```
fig, ax = plt.subplots()
ax.set_xlim([weathermod1['Min Humidity'].min(),weathermod1['Min Humidity'].max()])
ax.set_ylim(0,10000)
weathermod1['Min Humidity'].plot.hist(bins=100);
```
#### Histogram after imputation:
```
fig, ax = plt.subplots()
ax.set_xlim([weather2['Min Humidity'].min(),weather2['Min Humidity'].max()])
ax.set_ylim(0,10000)
weather2['Min Humidity'].plot.hist(bins=100);
```
#### Analogously, now the visualization and imputation for the `Wind Speed` variable:
##### Histogram before imputation:
```
fig, ax = plt.subplots()
ax.set_xlim([weathermod1['Wind Speed'].min(),weathermod1['Wind Speed'].max()])
ax.set_ylim(0,10000)
weathermod1['Wind Speed'].plot.hist(bins=100);
```
##### Imputing the data:
```
weather2['Wind Speed'].fillna((weather2['Wind Speed'].median()), inplace=True)
```
##### Histogram after processing:
```
fig, ax = plt.subplots()
ax.set_xlim([weather2['Wind Speed'].min(),weather2['Wind Speed'].max()])
ax.set_ylim(0,20000)
weather2['Wind Speed'].plot.hist(bins=100);
```
#### Now filling the missing rows of the `Wind Gust` column (using the median, as with the other variables).
##### Histogram before imputation:
```
fig, ax = plt.subplots()
ax.set_xlim([weathermod1['Wind Gust'].min(), weathermod1['Wind Gust'].max()])
ax.set_ylim(0, 15000)
weathermod1['Wind Gust'].plot.hist(bins=100);
```
##### Now, analogously, imputing the data in the `Wind Gust` column
```
weather2['Wind Gust'].fillna((weather2['Wind Gust'].median()), inplace=True)
```
##### Histogram after imputation:
```
fig, ax = plt.subplots()
ax.set_xlim([weather2['Wind Gust'].min(),weather2['Wind Gust'].max()])
ax.set_ylim(0, 12000)
weather2['Wind Gust'].plot.hist(bins=100);
```
#### As we can now see, the dataset is completely filled in.
```
print(weather2.isnull().sum())
```
#### Now examining the correlations and descriptive statistics of the quantitative variables in the dataset:
```
weather3 = weather2
weather3 = weather3.drop(['Station Name', 'State', 'city', 'inme', 'Date', 'Datetime', 'Lat', 'Lon', 'Year', 'Month', 'Day','Hour','Weather station id'], axis=1)
weather3.corr()
weather3.describe()
```
### We will now fit linear models to predict temperature:
#### To help ensure independence of the observations, we will select a simple random sample of 1,000 observations without replacement, producing a dataset with greater independence between the observations and the errors.
```
df = weather3.sample(n=1000, frac=None, replace=False, weights=None, random_state=2, axis=0)
df.describe()
import matplotlib.pyplot as plt
import seaborn as sns
import math
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error
from scipy.stats import linregress, t
```
#### Now fitting a simple linear model using scikit-learn, regressing Temperature (in Celsius) on Air Pressure (in hPa).
```
simple_model = LinearRegression(fit_intercept=True)
y = df['Temperature']
x = df[['Air Pressure']]
simple_model.fit(x, y)
predicts = simple_model.predict(x)
simple_model.get_params(deep=True)
mse = mean_squared_error(y, predicts)  # Mean squared error
r2 = r2_score(y, predicts)
print("Modelo: Temperatura = {:.3f} + {:.3f}*Pressão do Ar".format(simple_model.intercept_, simple_model.coef_[0]))
print("MSE: %.3f" %mse)
print("R2: % .4f" %r2 )
xfit = x
yfit = simple_model.predict(x)
plt.xlabel('Air Pressure')
plt.ylabel('Temperature')
plt.title('Regression plot: Temperature vs. Air Pressure')
plt.scatter(x, y)
plt.xlim(995,1030)
plt.plot(xfit, yfit);
```
#### We then have a model with an explanatory power of 79.08% and a small mean squared error. Let's also assess the assumption of normally distributed errors.
```
# Plot the residuals after fitting a linear model
sns.set(style="whitegrid")
plt.xlim(22.9, 23.7)
sns.residplot(predicts, y, color="g")
res = (y - predicts)
import scipy as sp
fig, ax = plt.subplots()
_, (__, ___, r) = sp.stats.probplot(res, plot=ax, fit=True)
```
##### The residuals do not appear to scatter randomly around 0; they show a trend, which argues against the normality hypothesis, something also seen in the quantile-quantile plot.
##### Assessing normality with the Shapiro-Wilk test:
```
print("Teste de Shapiro-Wilk para os resíduos: ", stats.shapiro(res))
```
##### At the *α* = 0.05 level, we reject the hypothesis that the residuals are normally distributed.
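For completeness, a small illustrative check (not part of the original analysis) that compares the Shapiro-Wilk p-value against the chosen significance level:
```
# Illustrative: extract the p-value and compare it with alpha = 0.05
stat, p_value = stats.shapiro(res)
alpha = 0.05
print("Reject normality of residuals" if p_value < alpha else "Fail to reject normality of residuals")
```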
#### Now using scikit-learn for a multiple linear regression model:
```
# MULTIPLE REGRESSION
model = LinearRegression()
y = df['Temperature']
x1 = df['Air Pressure']
x2 = df['Elevation']
x3 = df['Humidity']
x4 = df['Wind Speed']
xm = np.column_stack((x1,x2,x3,x4)) # Stack the predictor variables
model.fit(xm, y)
predict = model.predict(xm)
model.get_params(deep=True)
print("Model: Temperature = {:.3f} + {:.3f}*Air Pressure + {:.3f}*Elevation {:.3f}*Humidity + {:.3f}*Wind Speed ".format(model.intercept_, model.coef_[0], model.coef_[1], model.coef_[2], model.coef_[3]))
mse = mean_squared_error(y, predict)
print("MSE: %.3f" %math.sqrt(mse))
print("R2: % .4f" % model.score(xm,y))
# Plot the residuals after fitting a linear model
x1 = predict
y1 = y - predict
plt.xlabel('Fitted value')
plt.ylabel('Residual')
plt.title('Fitted values vs. residuals: multiple regression')
plt.xlim(19,30)
plt.scatter(x1, y1)
```
##### We obtain a model with similar explanatory power, and, as seen below, the assumption of residual normality is also rejected.
```
import scipy as sp
fig, ax = plt.subplots()
_, (__, ___, r) = sp.stats.probplot((y-predict), plot=ax, fit=True)
print("Teste de Shapiro-Wilk para os resíduos: ", stats.shapiro(y1))
```
#### Now using the Random Forest method to build a regression model with the same predictor variables as before:
```
from sklearn.ensemble import RandomForestRegressor
xDF = pd.DataFrame(xm)
regr = RandomForestRegressor()
regr.fit(xm, y)
predict1 = regr.predict(xm)
mse = mean_squared_error(y, predict1)
r2 = r2_score(y, predict1)
print("R2: {:.3f}".format(r2))
print("MSE: {:.3f}".format(mse))
print('\n')
print('Variable importances:\n Air Pressure: {:.3f}\n Elevation: {:.3f}\n Humidity: {:.3f}\n Wind Speed: {:.3f}'.format(regr.feature_importances_[0], regr.feature_importances_[1], regr.feature_importances_[2], regr.feature_importances_[3]))
```
##### We obtain a model with very high explanatory power and a low mean squared error, as well as residuals with a similar dispersion.
```
sns.set(style="whitegrid")
sns.residplot(predict1, y, color="g")
```
<a id='s3'></a>
## TREE COVER LOSS WIDGET
The UMD/Hansen Tree cover loss widget should be a bar chart, with time as the x-axis and loss (ha) as the y-axis; on hover it should show the year, ha loss, and % loss relative to 2000 tree cover extent *for the data table of interest*.
**Notes**
* Loss data tables have loss units in both area (ha) and emissions (t co2/ha). The emissions units will only be used in the loss widget to add contextual info to the dynamic sentence.
* It is probably best to always request the full time period of data from the table, and then subset it client-side, as we will always need to know the last year of loss to construct the dynamic sentence.
* In settings, the users should be able to change the data sources, when they do you will need to query different data tables both for calculating loss and extent (to calculate relative loss):
- All region (default) view: gadm28 table
- all other data tables currently created should be selectable too
*more info - not needed for front-end devs*
- for reference in testing: adm0 = BRA, adm1 = 12, adm2 = 1434 is Mato Grosso, Cáceres
**When plantations are selected, the user should see a stacked bar chart from 2013 onwards.**
More details of this to follow below.
```
#Import Global Metadata etc
%run '0.Importable_Globals.ipynb'
# Variables
threshold=30
adm0 = 'BRA'
adm1 = None
adm2 = None
extent_year = 2000 #extent data (2000 or 2010)
start=2000
end=2016
location = "All Region"
tags = ["forest_change", "land_cover", "conservation", "people", "land_use"]
selectable_polynames = ['gadm28',
'bra_biomes',
'mining',
'wdpa',
'primary_forest',
'ifl_2013']
# get a human readable {id: name} json for either admin 1 or 2 level as needed:
areaId_to_name = None
if adm2:
tmp = get_admin2_json(iso=adm0, adm1=adm1)
areaId_to_name ={}
for row in tmp:
areaId_to_name[row.get('adm2')] = row.get('name')
if adm1 and not adm2:
tmp = get_admin1_json(iso=adm0)
areaId_to_name={}
for row in tmp:
areaId_to_name[row.get('adm1')] = row.get('name')
def loss_queries(p_name, adm0, adm1=None, adm2=None, threshold=30):
if adm2:
print(f'Request for adm2 area')
sql = (f"SELECT polyname, year_data.year as year, "
f"SUM(year_data.area_loss) as area, "
f"SUM(year_data.emissions) as emissions "
f"FROM data "
f"WHERE polyname = '{p_name}' "
f"AND iso = '{adm0}' "
f"AND adm1 = {adm1} "
f"AND adm2 = {adm2} "
f"AND thresh= {threshold} "
f"GROUP BY polyname, iso, nested(year_data.year)")
return sql
elif adm1:
print('Request for adm1 area')
sql = (f"SELECT polyname, year_data.year as year, "
f"SUM(year_data.area_loss) as area, "
f"SUM(year_data.emissions) as emissions "
f"FROM data "
f"WHERE polyname = '{p_name}' "
f"AND iso = '{adm0}' "
f"AND adm1 = {adm1} "
f"AND thresh= {threshold} "
f"GROUP BY polyname, iso, nested(year_data.year)")
return sql
elif adm0:
print('Request for adm0 area')
sql = (f"SELECT polyname, year_data.year as year, "
f"SUM(year_data.area_loss) as area, "
f"SUM(year_data.emissions) as emissions "
f"FROM data "
f"WHERE polyname = '{p_name}' "
f"AND iso = '{adm0}' "
f"AND thresh= {threshold} "
f"GROUP BY polyname, iso, nested(year_data.year)")
return sql
def extent_queries(p_name, year, adm0, adm1=None, adm2 = None, threshold=30):
if adm2:
print('Request for adm2 area')
sql = (f"SELECT SUM({year}) as value, "
f"SUM(area_gadm28) as total_area "
f"FROM data "
f"WHERE iso = '{adm0}' "
f"AND adm1 = {adm1} "
f"AND adm2 = {adm2} "
"AND thresh = {threshold} "
f"AND polyname = '{p_name}'")
return sql
elif adm1:
print('Request for adm1 area')
sql = (f"SELECT SUM({year}) as value, "
f"SUM(area_gadm28) as total_area "
f"FROM data "
f"WHERE iso = '{adm0}' "
f"AND adm1 = {adm1} "
f"AND thresh = {threshold} "
f"AND polyname = '{p_name}'")
return sql
elif adm0:
print('Request for adm0 area')
sql = (f"SELECT SUM({year}) as value, "
f"SUM(area_gadm28) as total_area "
f"FROM data "
f"WHERE iso = '{adm0}' "
f"AND thresh = {threshold} "
f"AND polyname = '{p_name}'")
return sql
```
Loss on hover should show % loss relative to the tree cover extent of the selected location in the chosen extent year (2000 or 2010). I.e. if someone is interested in the 'All region' default view, then the loss should come from the `gadm28 loss` table, and the extent (to calculate the relative loss) should come from the `gadm28 extent` table.
Therefore we must also calculate the loss %; for this we will need the tree cover extent data too.
```
# First, get the extent of tree cover over your area of interest (to work out relative loss)
sql = extent_queries(p_name=polynames[location], year=extent_year_dict[extent_year],
adm0=adm0, adm1=adm1, adm2=adm2, threshold=threshold)
print(sql) # loss query
properties = {"sql": sql}
r = requests.get(url, params = properties)
print(r.url)
print(f'Status: {r.status_code}')
pprint(r.json())
y2010_relative_extent = r.json().get('data')[0].get('total_area')
#ds = '499682b1-3174-493f-ba1a-368b4636708e'
url = f"https://production-api.globalforestwatch.org/v1/query/{ds}"
print(adm0)
# Next, get the loss data grouped by year
sql = loss_queries(p_name=polynames[location], adm0=adm0,
adm1=adm1, adm2=adm2, threshold=threshold)
print(sql) # loss query
properties = {"sql": sql}
r = requests.get(url, params = properties)
print(r.url)
print(f'Status: {r.status_code}')
pprint(r.json())
# # Extract the year, and loss in hectares, and emissions units, and calculate the relative loss in %
d = {}
for row in r.json().get('data'):
tmp_yr = float(row.get('year'))
if tmp_yr > 2000:
try:
tmp_area = float(row.get('area'))
except:
tmp_area = None
try:
tmp_area_pcnt = (tmp_area / y2010_relative_extent) * 100
except:
tmp_area_pcnt = None
try:
tmp_emiss = float(row.get('emissions'))
except:
tmp_emiss = None
d[int(tmp_yr)] = {'area_ha': tmp_area,
'area_%': tmp_area_pcnt,
'emissions': tmp_emiss,
}
pprint(d)
```
Use the `start` and `end` variables to filter the time-range on the front-end.
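A minimal sketch of how that filter could be applied to the `d` dictionary built above (assuming `start` and `end` are inclusive years; this helper is not part of the original widget code):
```
# Sketch: keep only the years inside the requested [start, end] window
d_filtered = {yr: vals for yr, vals in d.items() if start <= yr <= end}
```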
```
if adm0 and not adm1 and not adm2:
dynamic_title = (f'{iso_to_countries[adm0]} tree cover loss for {location.lower()}')
if adm0 and adm1 and not adm2:
dynamic_title = (f"{areaId_to_name[adm1]} tree cover loss for {location.lower()}")
if adm0 and adm1 and adm2:
dynamic_title = (f"{areaId_to_name[adm2]} tree cover loss for {location.lower()}")
loss = []
for val in d.values():
if val.get('area_ha'):
loss.append(val.get('area_ha'))
else:
loss.append(0)
years = d.keys()
width = 0.35
fig, ax = plt.subplots()
rects1 = ax.bar(years, loss, width, color='#FE5A8D')
# add some text for labels, title and axes ticks
ax.set_ylabel('Loss extent (ha)')
ax.set_title(dynamic_title)
plt.show()
```
#### Dynamic sentence
For the loss widget we also need a dynamic sentence.
```
for year in d:
print(year, d.get(year).get('emissions'), d.get(year).get('area_ha'))
# First find total emissions, and loss, and also the last year of emissions and loss
total_emissions = 0
total_loss = 0
for year in d:
total_loss += d.get(year).get('area_ha')
total_emissions += d.get(year).get('emissions')
print([total_emissions, total_loss])
# Dynamic sentence construction
if adm0 and not adm1 and not adm2:
print(f"Between {start} and {end}, {iso_to_countries[adm0]} ({location.lower()}) ", end="")
if adm0 and adm1 and not adm2:
print(f"Between {start} and {end}, {location} of {areaId_to_name[adm1].lower()} ", end="")
if adm0 and adm1 and adm2:
print(f"Between {start} and {end}, {location} of {areaId_to_name[adm2].lower()} ", end="")
print(f"lost {total_loss:,.0f} ha of tree cover: ", end="")
print(f"This loss is equal to {total_loss / y2010_relative_extent * 100:3.2f}% of the total ", end="")
print(f"{location.lower()} tree cover extent in {extent_year}, ", end="")
print(f"and equivalent to {total_emissions:,.0f} tonnes of CO\u2082 emissions. ", end="")
```
<a id='s4'></a>
## Loss extra: stacked bars plantation widget
If a user has selected a country where Plantation data is available, then they should see an extra plot - it will be a stacked bar that distinguishes loss inside and outside of plantations.
**Again, this is only possible for certain countries; the Plantation option should not appear in the menu unless the user is in a country with Plantation data.**
Also, a key thing to note here: the raw data has values back to 2000; however, for plantation data we must not show anything earlier than 2013 in the figure (a simple guard for this is sketched below).
Note that the area below for BRA has the largest extent of plantations:
* adm1: 16,
* adm2: 3135,
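Below is a minimal sketch of the pre-2013 guard mentioned above (the constant name is hypothetical; the real widget may clamp the year elsewhere):
```
# Sketch: never plot plantation data before 2013, whatever start year is requested
PLANTATION_FIRST_YEAR = 2013
plot_start = max(start, PLANTATION_FIRST_YEAR)
```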
```
# Variables
threshold=30
adm0 = 'BRA'
adm1 = 16
adm2 = None
start=2013
end=2016
tags = ["forest_change", "land_cover", "conservation", "people", "land_use"]
selectable_polynames = ['gadm28', 'plantations']
# get a human readable {id: name} json for either admin 1 or 2 level as needed:
areaId_to_name = None
if adm2:
tmp = get_admin2_json(iso=adm0, adm1=adm1)
areaId_to_name ={}
for row in tmp:
areaId_to_name[row.get('adm2')] = row.get('name')
if adm1 and not adm2:
tmp = get_admin1_json(iso=adm0)
areaId_to_name={}
for row in tmp:
areaId_to_name[row.get('adm1')] = row.get('name')
# We need two sets of loss data to calculate this widget (plantation and gadm28)
sql = loss_queries(p_name=polynames['Plantations'], adm0=adm0,
adm1=adm1, adm2=adm2, threshold=threshold)
print(sql) # loss query
properties = {"sql": sql}
r = requests.get(url, params = properties)
print(r.url)
print(f'Status: {r.status_code}')
plantation_loss_raw = r.json()
sql = loss_queries(p_name=polynames['All Region'], adm0=adm0,
adm1=adm1, adm2=adm2, threshold=threshold)
print(sql) # loss query
properties = {"sql": sql}
r = requests.get(url, params = properties)
print(r.url)
print(f'Status: {r.status_code}')
all_loss_raw = r.json()
def find_values_for_year(year, raw_json, keys):
"""Look in a returned raw json object for a row of
a specific year, and return some elements of data.
"""
tmp = {}
for row in raw_json.get('data'):
if year == row.get('year'):
for key in keys:
tmp[key]=row.get(key)
return tmp
plantation_loss_d = {}
for year in range(start, end+1):
tmp_gadm28_loss = find_values_for_year(year, all_loss_raw, ['area','emissions'])
tmp_plantation_loss = find_values_for_year(year, plantation_loss_raw, ['area','emissions'])
plantation_loss_d[year] = {'plantation_ha':tmp_plantation_loss.get('area'),
'plantation_co2':tmp_plantation_loss.get('emissions'),
'outside_ha':tmp_gadm28_loss.get('area') - tmp_plantation_loss.get('area'),
'outside_co2':tmp_gadm28_loss.get('emissions') -tmp_plantation_loss.get('emissions'),
}
plantation_loss_d
#print(outside_loss)
#print(plantation_loss)
if adm0 and not adm1 and not adm2:
dynamic_title = (f'{iso_to_countries[adm0]} tree cover loss and tree plantations')
if adm0 and adm1 and not adm2:
dynamic_title = (f"{areaId_to_name[adm1]} tree cover loss and tree plantations")
if adm0 and adm1 and adm2:
dynamic_title = (f"{areaId_to_name[adm2]} tree cover loss and tree plantations")
plantation_loss = []
for val in plantation_loss_d.values():
if val.get('plantation_ha'):
plantation_loss.append(val.get('plantation_ha'))
else:
plantation_loss.append(0)
plantation_loss = np.array(plantation_loss)
outside_loss = []
for val in plantation_loss_d.values():
if val.get('outside_ha'):
outside_loss.append(val.get('outside_ha'))
else:
outside_loss.append(0)
outside_loss = np.array(outside_loss)
years = plantation_loss_d.keys()
width = 0.35
fig, ax = plt.subplots()
rects1 = ax.bar(years, plantation_loss, width, color='#FE5A8D')
rects2 = ax.bar(years, outside_loss, width, bottom=plantation_loss, color='#ffc9e7')
# add some text for labels, title and axes ticks
ax.set_ylabel('Loss extent (ha)')
ax.set_title(dynamic_title)
plt.show()
```
<a id='s5'></a>
#### Dynamic sentence for plantation loss widget:
The majority of tree cover loss from xxx-year to xxx-year within X-location occurred inside/outside of tree plantations.
Over that time, the total loss was equivalent to an estimated XX tonnes of CO2 emissions.
```
total_plantation_loss = np.sum(plantation_loss)
total_outside_loss = np.sum(outside_loss)
total_co2_over_start_end = 0
for row in plantation_loss_d:
total_co2_over_start_end += plantation_loss_d[row].get('plantation_co2')
total_co2_over_start_end += plantation_loss_d[row].get('outside_co2')
print('total plantation loss ',total_plantation_loss)
print('total outside loss ',total_outside_loss)
print('total co2 over start end', total_co2_over_start_end)
# Dynamic sentence construction
print(f"The majority of tree cover loss from {start} to {end} ", end="")
if adm2:
print(f"in {areaId_to_name[adm2]} ", end="")
elif adm1:
print(f"in {areaId_to_name[adm1]} ", end="")
elif adm0:
print(f"in {iso_to_countries[adm0]} ", end="")
if total_plantation_loss > total_outside_loss:
print(f"occured within plantations. ", end="")
else:
print(f"occured outside of plantations. ")
print(f"The total loss is roughly equivalent to {total_co2_over_start_end:,.0f} ", end="")
print(f"tonnes of CO\u2082 emissions. ", end="")
```
# Finetuning PyTorch vision models to work with CIFAR-10 dataset
### Author: Huy Phan
### Github: https://github.com/huyvnphan/PyTorch-CIFAR10
## 1. Import required libraries
```
import copy
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
from tqdm import tqdm as pbar
from torch.utils.tensorboard import SummaryWriter
from cifar10_models import *
```
## 2. Prepare datasets
```
def make_dataloaders(params):
"""
    Make PyTorch dataloader objects that can be used for training and validation
Input:
- params dict with key 'path' (string): path of the dataset folder
- params dict with key 'batch_size' (int): mini-batch size
- params dict with key 'num_workers' (int): number of workers for dataloader
Output:
- trainloader and testloader (pytorch dataloader object)
"""
transform_train = transforms.Compose([transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.4914, 0.4822, 0.4465], [0.2023, 0.1994, 0.2010])])
    transform_validation = transforms.Compose([transforms.ToTensor(),
                                               transforms.Normalize([0.4914, 0.4822, 0.4465], [0.2023, 0.1994, 0.2010])])
trainset = torchvision.datasets.CIFAR10(root=params['path'], train=True, transform=transform_train)
testset = torchvision.datasets.CIFAR10(root=params['path'], train=False, transform=transform_validation)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=params['batch_size'], shuffle=True, num_workers=4)
testloader = torch.utils.data.DataLoader(testset, batch_size=params['batch_size'], shuffle=False, num_workers=4)
return trainloader, testloader
```
## 3. Train model
```
def train_model(model, params):
writer = SummaryWriter('runs/' + params['description'])
model = model.to(params['device'])
optimizer = optim.AdamW(model.parameters())
total_updates = params['num_epochs']*len(params['train_loader'])
criterion = nn.CrossEntropyLoss()
best_accuracy = test_model(model, params)
best_model = copy.deepcopy(model.state_dict())
for epoch in pbar(range(params['num_epochs'])):
# Each epoch has a training and validation phase
for phase in ['train', 'validation']:
# Loss accumulator for each epoch
logs = {'Loss': 0.0,
'Accuracy': 0.0}
# Set the model to the correct phase
model.train() if phase == 'train' else model.eval()
# Iterate over data
for image, label in params[phase+'_loader']:
image = image.to(params['device'])
label = label.to(params['device'])
# Zero gradient
optimizer.zero_grad()
with torch.set_grad_enabled(phase == 'train'):
# Forward pass
prediction = model(image)
loss = criterion(prediction, label)
accuracy = torch.sum(torch.max(prediction, 1)[1] == label.data).item()
# Update log
logs['Loss'] += image.shape[0]*loss.detach().item()
logs['Accuracy'] += accuracy
# Backward pass
if phase == 'train':
loss.backward()
optimizer.step()
# Normalize and write the data to TensorBoard
logs['Loss'] /= len(params[phase+'_loader'].dataset)
logs['Accuracy'] /= len(params[phase+'_loader'].dataset)
writer.add_scalars('Loss', {phase: logs['Loss']}, epoch)
writer.add_scalars('Accuracy', {phase: logs['Accuracy']}, epoch)
# Save the best weights
if phase == 'validation' and logs['Accuracy'] > best_accuracy:
best_accuracy = logs['Accuracy']
best_model = copy.deepcopy(model.state_dict())
# Write best weights to disk
if epoch % params['check_point'] == 0 or epoch == params['num_epochs']-1:
torch.save(best_model, params['description'] + '.pt')
final_accuracy = test_model(model, params)
writer.add_text('Final_Accuracy', str(final_accuracy), 0)
writer.close()
```
## 4. Test model
```
def test_model(model, params):
model = model.to(params['device']).eval()
phase = 'validation'
logs = {'Accuracy': 0.0}
# Iterate over data
for image, label in pbar(params[phase+'_loader']):
image = image.to(params['device'])
label = label.to(params['device'])
with torch.no_grad():
prediction = model(image)
accuracy = torch.sum(torch.max(prediction, 1)[1] == label.data).item()
logs['Accuracy'] += accuracy
logs['Accuracy'] /= len(params[phase+'_loader'].dataset)
return logs['Accuracy']
```
## 5. Create PyTorch models
```
model = resnet18()
```
## 6. Put everything together
```
# Train on cuda if available
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print("Using", device)
data_params = {'path': '/raid/data/pytorch_dataset/cifar10', 'batch_size': 256}
train_loader, validation_loader = make_dataloaders(data_params)
train_params = {'description': 'Test',
'num_epochs': 300,
'check_point': 50, 'device': device,
'train_loader': train_loader, 'validation_loader': validation_loader}
train_model(model, train_params)
test_model(model, train_params)
```
```
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split, GridSearchCV
from matplotlib import pyplot as plt
import seaborn as sns
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.linear_model import Ridge, Lasso, ElasticNet, LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
df = pd.read_csv("dataset2.csv")
df = df.drop('Unnamed: 0', axis=1)
df.head()
```
## Classification
```
df['exercise'] = df.price!=0
df = df.drop(['price', 'interest_rates'], axis=1)
df.head()
df.groupby('exercise').count().stock
table = pd.crosstab(df.strike, df.exercise)
table.div(table.sum(1).astype(float), axis=0).plot(kind='bar', stacked=True)
plt.title('Stacked Bar Chart depending on the strike')
plt.xlabel('K')
plt.ylabel('Proportion')
plt.savefig('e_vs_ne_strike')
table = pd.crosstab(df.initial_vol, df.exercise)
table.div(table.sum(1).astype(float), axis=0).plot(kind='bar', stacked=True)
plt.title('Stacked Bar Chart depending on the initial vol')
plt.xlabel('initial_vol')
plt.ylabel('Proportion')
plt.savefig('e_vs_ne_initial_vole')
table = pd.crosstab(df.maturity, df.exercise)
table.div(table.sum(1).astype(float), axis=0).plot(kind='bar', stacked=True)
plt.title('Stacked Bar Chart depending on the maturity')
plt.xlabel('T')
plt.ylabel('Proportion')
plt.savefig('e_vs_ne_strike')
# plot function confusion matrix
from sklearn.metrics import confusion_matrix
def plot_confusion_matrix (confusion_matrix, title):
ax = plt.subplot()
sns.heatmap(confusion_matrix,annot=True,fmt = "d",square = True,ax = ax, linewidths = 1, linecolor = "w", cmap = "Pastel2")
    ax.set_xlabel('Predicted labels')
    ax.set_ylabel('True labels')
ax.xaxis.set_ticklabels(['Exercised','Not Exercised'])
ax.yaxis.set_ticklabels(['Exercised','Not Exercised'], va="center")
plt.title(title)
plt.show()
# plot function roc curve
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
def plot_roc_curve(X_test, y_test, y_pred, model, title):
    roc_auc = roc_auc_score(y_test, y_pred)
    fpr, tpr, thresholds = roc_curve(y_test, model.predict_proba(X_test)[:,1])
    plt.figure()
    plt.plot(fpr, tpr, label='%s (area = %0.2f)' % (title, roc_auc))
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
titlee = 'Exercise ROC curve ' + title
plt.title(titlee)
plt.legend(loc="lower right")
plt.savefig('Log_ROC')
plt.show()
X = df.drop('exercise', axis = 1)
y = df['exercise']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20)
```
## Logistic Regression
```
import statsmodels.api as sm
logit_model=sm.Logit(y,X)
result=logit_model.fit()
print(result.summary2())
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
y_pred = logreg.predict(X_test)
confusion_matrix_logistic = confusion_matrix(y_test, y_pred, labels=[True, False])
plot_confusion_matrix (confusion_matrix_logistic, 'Logistic Regression')
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
print('Accuracy logistic regression classifier: {:.2f}%'.format(logreg.score(X_test, y_test)*100))
plot_roc_curve(X_test, y_test, y_pred, logreg, 'Logistic Regression')
```
## KNN
```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20)
from sklearn.neighbors import KNeighborsClassifier
Neighbor_List=[3,5,10,20]
parameters = {'n_neighbors':Neighbor_List}
KNNC = KNeighborsClassifier()
classifier_knn = GridSearchCV(KNNC, parameters, cv=5, verbose=0, scoring ='accuracy')
classifier_knn.fit(X_train, y_train)
y_pred = classifier_knn.predict(X_test)
print('Accuracy KNN: {:.2f}%'.format(classifier_knn.score(X_test, y_test)*100))
confusion_matrix_knn = confusion_matrix(y_test, y_pred, labels=[True, False])
plot_confusion_matrix (confusion_matrix_knn, 'KNN')
plot_roc_curve(X_test, y_test, y_pred, classifier_knn, 'KNN')
```
## XGBoost
```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20)
from xgboost import XGBClassifier
params = {
'n_estimators': [100, 200, 300, 400],
'max_depth': [2, 3, 6],
'learning_rate': [0.005, 0.01, 0.02],
'subsample': [0.4, 0.6, 0.8]
}
classifier_xgboost = GridSearchCV(XGBClassifier(random_state=10), params, scoring ='accuracy')
classifier_xgboost.fit(X_train, y_train)
y_pred = classifier_xgboost.predict(X_test)
from sklearn.metrics import accuracy_score
predictions = [round(value) for value in y_pred]
accuracy = accuracy_score(y_test, predictions)
print("Accuracy XGBoost: %.2f%%" % (accuracy * 100.0))
confusion_matrix_knn = confusion_matrix(y_test, y_pred, labels=[True, False])
plot_confusion_matrix (confusion_matrix_knn, 'XGBoost')
plot_roc_curve(X_test, y_test, y_pred, classifier_xgboost, 'XGBoost')
# feature importance
clf = XGBClassifier(learning_rate=0.005, max_depth=2, n_estimators=100, subsample=0.6, random_state=10)
clf.fit(X_train, y_train)
features = X.columns
importances = clf.feature_importances_
indices = np.argsort(importances)[::-1]
# Print the feature ranking
print("Feature ranking:")
for f in range(X_train.shape[1]):
print("%d. feature %d (%f)" % (f + 1, indices[f], importances[indices[f]]))
# Plot the feature importances of the forest
plt.figure()
plt.title("Feature importances")
plt.bar(range(X_train.shape[1]), importances[indices],
        color="b", align="center")
plt.xticks(range(len(indices)), [features[i] for i in indices])
plt.xticks(rotation=90)
plt.xlim([-1, X_train.shape[1]])
plt.show()
```
# Multiple Coordinated Views
```
import altair as alt
import pandas as pd
import numpy as np
flu = pd.read_csv('flunet2010_11countries.csv', header=[0,1])
cols = flu.columns.tolist()
normed = pd.melt(flu, id_vars=[cols[0]], value_vars=cols[1:], var_name=['continent','country'])
normed = normed.rename(columns={normed.columns[0]: 'week'})
normed.head()
```
## Visualization 1
#### Create Linked Plots Showing Flu Cases per Country and Total Flu Cases per Week
In this example a clickable country legend (an Altair multi-selection on color) and an interval brush over the week axis are linked to two plots, so picking countries and brushing a week range filters the line chart.
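The basic linking pattern used throughout this notebook (a hedged sketch with placeholder encodings, not the exact charts defined below) is: define a selection, attach it to the chart that drives it, and filter the other charts with it:
```
sel = alt.selection_multi(encodings=['color'])      # what the user can click
driver = alt.Chart(normed).mark_circle().encode(
    y='country:N', color='country:N'
).add_selection(sel)                                # chart that captures the clicks
target = alt.Chart(normed).mark_line().encode(
    x='week:N', y='value:Q', color='country:N'
).transform_filter(sel)                             # chart filtered by the selection
driver | target
```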
```
click = alt.selection_multi(encodings=['color'])
brush = alt.selection_interval(encodings=['x'])
line = alt.Chart(normed).mark_line(point=alt.MarkConfig(shape='circle', size=60)).encode(
x='week:N',
y=alt.Y('value:Q', title=None),
color=alt.Color('country:N', legend=None),
tooltip=['week','value']
).transform_filter(
brush
).transform_filter(
click
).properties(
width=800,
title='Flu Cases by Country and Totals',
selection=click
)
hist = alt.Chart(normed).mark_bar(size=10).encode(
x=alt.X('week:N'),
y=alt.Y('sum(value):Q', title=None),
color=alt.value('lightgray'),
tooltip=['week','sum(value)']
).properties(
width=800,
height=120,
).add_selection(
brush
)
legend = alt.Chart(normed).mark_circle(size=150).encode(
y=alt.Y('country', title=None),
color=alt.condition(click, alt.Color('country:N', legend=None), alt.value('lightgray'))
).properties(
selection=click
)
legend | (line & hist)
```
#### Selections:
* Click to select individual countries.
* Hold shift and click to select multiple countries.
* Brush barchart to narrow top view.
## Visualization 2
#### Create an Overview+Detail Plot Showing Flu Cases per Country
In this example a clickable continent legend controls which countries are shown in a stacked bar chart of flu cases per week and country. The overview+detail visualization is enabled by an interval brush on a smaller overview chart, which filters the weeks shown on the x axis of the detail plot.
```
click = alt.selection_multi(encodings=['y'])
brush = alt.selection_interval(encodings=['x'])
bar = alt.Chart(normed).mark_bar().encode(
alt.Color('country:N', legend=None),
alt.X('week:N'),
alt.Y('sum(value)', title=None),
tooltip=['country']
).transform_filter(
click
).transform_filter(
brush
).properties(
width=800,
)
bar_overview = alt.Chart(normed).mark_bar(size=10).encode(
alt.X('week:N'),
alt.Y('sum(value)', title=None),
alt.Color('country:N', legend=None)
).properties(
height=100,
width=800,
selection=brush
)
legend = alt.Chart(normed).mark_circle(size=60).encode(
alt.Y('continent:N', title=None),
color=alt.condition(click, alt.value('black'), alt.value('lightgray'))
).properties(
selection=click
)
legend | (bar & bar_overview)
```
## Visualization 3
#### Create Linked Plots Showing Flu Cases per Country per Week and Total Flu Cases per Country
For this visualization we create two linked plots: one that shows flu cases per country per week and a second one that shows the total of all flu cases per country.
I created an extra view where you can brush over the line plot and see totals for your selection.
```
barclick = alt.selection_multi(encodings=['color'])
line = alt.Chart(normed).mark_line().encode(
x='week:N',
    y=alt.Y('value:Q', title=None),
color=alt.Color('country:N', legend=None),
).transform_filter(
barclick
).properties(
width=800,
title='Flu Cases by Country and Totals',
selection=barclick
)
bar = alt.Chart(normed).mark_bar().encode(
x='sum(value):Q',
y=alt.Y('country:N', sort=alt.SortField(field="value", op="sum", order='descending')),
color=alt.condition(barclick, alt.Color('country:N', legend=None), alt.value('lightgray'))
).properties(
width=800,
selection=barclick
)
line & bar
plot_brush = alt.selection_interval(encodings=['x'])
line = alt.Chart(normed).mark_line().encode(
x='week:N',
    y=alt.Y('value:Q', title=None),
color=alt.Color('country:N', legend=None),
).properties(
width=800,
title='Flu Cases by Country and Totals',
selection=plot_brush
)
bar = alt.Chart(normed).mark_bar().encode(
x=alt.X('sum(value):Q', title=None),
y=alt.Y('country:N', sort=alt.SortField(field="value", op="sum", order='descending')),
color=alt.Color('country:N', legend=None)
).transform_filter(
plot_brush
).properties(
width=800,
)
line & bar
```
## Visualization 4
#### Create a Choropleth Map Showing Flu Cases per Country
Selection in geoplots isn't well supported. I tried several strategies below, and had to manually enter some 'ids' for the countries in question. The last plot has no flu data and was just an experiment.
```
from vega_datasets import data
countries = alt.topo_feature(data.world_110m.url, 'countries')
country = normed.country.unique()
country
# GeoJSON ids (looked up for each country)
ids = [4, 32, 36, 124, 156, 170, 818, 276, 372, 710, 840]
dictionary = dict(zip(country, ids))
dictionary
normed_geo = normed
normed_geo['id'] = normed_geo['country'].map(dictionary)
normed_geo.head()
week = normed_geo[normed_geo['week'] == 1]
click = alt.selection_single(encodings=['x'])
values = alt.Chart(countries).mark_geoshape(
stroke='black'
).encode(
alt.Color(alt.repeat('row'), type='quantitative')
).transform_lookup(
lookup='id',
from_=alt.LookupData(week, 'id', ['value'])
).properties(
width=600,
height=300
).project(
type='equirectangular'
).repeat(
row=['value']
).resolve_scale(
color='independent'
)
# Not functional
hist = alt.Chart(normed).mark_bar(size=10).encode(
x=alt.X('week:N'),
y=alt.Y('sum(value):Q', title=None),
color=alt.condition(click, alt.value('black'), alt.value('lightgray')),
).properties(
width=600,
height=120,
selection=click
)
values & hist
week52 = normed_geo[normed_geo['week'] == 52]
click = alt.selection_single(encodings=['x'])
# background = alt.Chart(countries).mark_geoshape(
# stroke='black'
# ).transform_filter(
# alt.datum.id != 10
# ).properties(
# width=800,
# height=700
# )
filter_set = alt.Chart(
{"values": normed_geo, "name": "test"}
).transform_filter(
click
)
values = alt.Chart(countries).mark_geoshape(
stroke='black'
).encode(
alt.Color(alt.repeat('row'), type='quantitative')
).transform_lookup(
lookup='id',
from_=alt.LookupData(normed_geo, 'id', ['value'])
).properties(
width=800,
height=700
).repeat(
row=['value']
).resolve_scale(
color='independent'
)
# Not functional
hist = alt.Chart(normed_geo).mark_bar(size=10).encode(
x=alt.X('week:N'),
y=alt.Y('sum(value):Q', title=None),
color=alt.condition(click, alt.value('black'), alt.value('lightgray')),
).properties(
width=800,
height=120,
selection=click
)
values & hist
import geopandas as gpd
import json
world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
world = world[world.name!="Antarctica"]
# Data transformations to get Quarter Columns
i = 0
for x in range(4):
start = x*13
end = start+13
quarter = normed.query(str(start) + '<= week <=' + str(end))
totals = quarter.groupby('country').sum()['value']
countries2 = normed.country.unique()
col = dict(zip(countries2, totals))
col['United States'] = col.pop('USA')
world['Q'+str(x+1)] = world['name'].map(col)
# world = world.fillna(0)
world.head()
data1 = alt.InlineData(
values = world.to_json(), #geopandas to geojson string
# root object type is "FeatureCollection" but we need its features
format = alt.DataFormat(property='features',type='json')
)
base = alt.Chart(data1).mark_geoshape(
stroke='white',
fill='#666666',
).properties(
width=600,
height=400
)
quarters = ['Q1', 'Q2', 'Q3', 'Q4']
charts = [
base + base.encode(
color=alt.Color('properties.'+str(quarter)+':Q', title=None),
tooltip=['properties.name:N', 'properties.'+str(quarter)+':Q']
).properties(title=quarter)
for quarter in quarters
]
alt.vconcat(
alt.hconcat(*charts[:2]),
alt.hconcat(*charts[2:])
)
world['per_cap'] = world['Q1'] / world['pop_est']
cap_data = alt.InlineData(
values = world.to_json(), #geopandas to geojson string
# root object type is "FeatureCollection" but we need its features
format = alt.DataFormat(property='features',type='json')
)
base = alt.Chart(cap_data).mark_geoshape(
stroke='black'
).properties(
width=800,
height=700
)
pop = base.encode(
color=alt.Color('properties.per_cap:Q', title=None),
)
pop
stocks = data.stocks()
print(stocks.head())
brush = alt.selection_interval(encodings=['x'])
upper = alt.Chart(stocks).mark_line(point=alt.MarkConfig(shape='circle', size=20)).encode(
x=alt.X('date:T'),
y=alt.Y('price:Q'),
color=alt.Color('symbol:N')
).transform_filter(
brush
).properties(
width=800,
height=400
)
lower = alt.Chart(stocks).mark_bar().encode(
x='date:T',
y='sum(price)',
color=alt.Color('symbol:N')
).properties(
width=800,
height=100,
selection=brush
)
upper & lower
```
# TOC
__Chapter 4 - Convolutional neural networks__
1. [Import](#Import)
1. [Introduction to CNNs](#Introduction-to-CNNs)
1. [MNIST - Take 2](#MNIST)
1. [Convolution](#Convolution)
1. [Pooling](#Pooling)
1. [Dropout](#Dropout)
1. [Model](#Model)
1. [CIFAR10](#CIFAR10)
1. [Load data](#Load-data)
1. [Simple model](#Simple-model)
1. [Better model](#Better-model)
1. [Alternative approach](#Alternative-approach)
# Import
<a id = 'Import'></a>
```
# standard library and settings
import os
import sys
import importlib
import itertools
import warnings
warnings.simplefilter("ignore")
modulePath = os.path.abspath(os.path.join("../../CustomModules"))
sys.path.append(modulePath) if modulePath not in sys.path else None
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:95% !important; }</style>"))
# data extensions and settings
import numpy as np
np.set_printoptions(threshold=np.inf, suppress=True)
import pandas as pd
pd.set_option("display.max_rows", 500)
pd.set_option("display.max_columns", 500)
pd.options.display.float_format = "{:,.6f}".format
import tensorflow as tf
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
# visualization extensions and settings
import seaborn as sns
import matplotlib.pyplot as plt
# custom extensions and settings
sys.path.append("/home/mlmachine") if "/home/mlmachine" not in sys.path else None
sys.path.append("/home/prettierplot") if "/home/prettierplot" not in sys.path else None
import mlmachine as mlm
from prettierplot.plotter import PrettierPlot
import prettierplot.style as style
# magic functions
%matplotlib inline
```
# Introduction to CNNs
Contrasting with fully connected neural networks, units in CNNs are connected to a (typically small) number of nearby units in the previous layer. Further, all units are connected to the previous layer in the same way, with the exact same weights and structure. This facilitates an operation known as convolution, which can be thought of as the application of a 'window' of weights that slides along the surface of the image. This helps to address the fact that an object can appear in many different locations in a picture, and the perspective of an object will certainly differ from image to image; this property is known as 'invariance'. The convolutional approach to learning weights addresses this by performing the same exact computation on different parts of the image.
<a id = 'Introduction-to-CNNs'></a>
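To make the idea of a sliding window of shared weights concrete, here is a small NumPy sketch (toy numbers, not taken from the text) of a single 3x3 filter convolved over a 5x5 image with stride 1 and no padding:
```python
import numpy as np

image = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 'image'
filt = np.ones((3, 3)) / 9.0                       # one 3x3 filter (the shared weights)

out = np.zeros((3, 3))                             # (5 - 3 + 1) = 3 valid positions per axis
for i in range(3):
    for j in range(3):
        window = image[i:i + 3, j:j + 3]           # the window slides across the image
        out[i, j] = np.sum(window * filt)          # same weights applied at every location
print(out)
```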
# MNIST - Take 2
Modeling using the MNIST dataset, this time with a small CNN.
<a id = 'MNIST'></a>
## Convolution
The convolution operation is the fundamental means by which layers are connected in CNNs. TensorFlow has a built-in operation, conv2d():
```python
tf.nn.conv2d(x, W, strides = [1,1,1,1], padding = 'SAME')
```
Here, 'x' is the data - which is either the input image or a downstream feature map obtained further along in the network following previous convolutional layers. A feature map is the output of each layer. The output of each layer can also be thought of as a 'processed' image, the result of applying a filter and perhaps some other operations.
The filter is parameterized by W, which is comprised of the learned weights of our network. This convolutional filter is the small 'sliding window' that slides across the face of the image.
The output of this operation depends on the shapes of x and W. In this case, the output is four-dimensional. The image data x has shape [None, 28, 28, 1], meaning we have an unknown number of images, each 28 x 28 pixels with one color channel (grayscale). The weight tensor W has shape [5, 5, 1, 32], where the initial 5 x 5 x 1 represents the size of the 'window' to be convolved - in this case a 5 by 5 region of a single-channel image - and the 32 represents the number of feature maps. In other words, we have multiple sets of weights for the convolutional layer: the idea of a convolutional layer is to compute the same feature along the image, and because we would like to compute many such features, we use multiple sets of convolutional filters.
The 'strides' argument controls the spatial movement of the filter window W across the image (or feature map) x. The value [1,1,1,1] means that the filter is applied to the input in 1-pixel intervals, which can be thought of as a full convolution. Increasing the stride will result in a smaller feature map.
Lastly, the padding argument is set to 'SAME', which means that the borders of x are padded such that the size of the result of the operation is the same as the size of x. This allows the window to give similar attention to the pixels on the border of the image and the pixels in the middle of the image.
<a id = 'Convolution'></a>
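As a quick shape check (an illustrative sketch using the same TensorFlow 1.x API as the rest of this chapter; the placeholder and variable names here are just for demonstration), 'SAME' padding with stride 1 preserves the spatial size, while the channel dimension becomes the number of feature maps:
```python
import tensorflow as tf

x_demo = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])          # batch of 28x28 grayscale images
W_demo = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))  # 5x5 window, 1 input channel, 32 feature maps
out = tf.nn.conv2d(x_demo, W_demo, strides=[1, 1, 1, 1], padding='SAME')
print(out.get_shape())  # (?, 28, 28, 32)
```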
## Pooling
Pooling means reducing the size of the data with some local aggregation function, typically within each feature map. The technical aspect of this operation is that pooling reduces the size of the data processed downstream. This drastically reduces the number of parameters in the model, particularly if we use fully connected layers after the convolutional layers. The theoretical aspect of pooling is that we would like our features not to care too much about small changes in position in an image. This allows the model to overcome spatial variability between images.
```python
tf.nn.max_pool(x, ksize = [1,2,2,1], strides = [1,2,2,1], padding = 'SAME')
```
The ksize argument controls the size of the pooling and strides controls how much the pooling grid slides across x, just as it does in the convolution layer. Setting strides to a 2x2 grid means the output of the pooling will be exactly one-half of the height and width of the original - one-quarter of the original size overall.
<a id = 'Pooling'></a>
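To make the size arithmetic concrete (an illustrative sketch, again using the TF1 API from this chapter), a 2x2 max pool with stride 2 halves the height and width of a feature map:
```python
import tensorflow as tf

fm = tf.placeholder(tf.float32, shape=[None, 28, 28, 32])
pooled = tf.nn.max_pool(fm, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
print(pooled.get_shape())  # (?, 14, 14, 32) - one quarter of the original area
```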
## Dropout
Dropout is a regularization trick used to force the network to distribute the learned representation across all neurons. Dropout 'turns off' a random, preset fraction of units in a layer by setting their values to zero during training. The dropped neurons are random and different for each computation, which forces the network to learn a representation that works despite the dropout. This process can be thought of as training an 'ensemble' of multiple networks that each have a different understanding of the training data, which tends to improve generalization. Dropout is not used in the test phase.
```python
tf.nn.dropout(layer, keep_prob = 0.1)
```
<a id = 'Dropout'></a>
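A small runnable sketch of this call (illustrative only; note that tf.nn.dropout also rescales the surviving units by 1/keep_prob so that the expected sum is unchanged):
```python
import numpy as np
import tensorflow as tf

layer = tf.placeholder(tf.float32, shape=[None, 8])
dropped = tf.nn.dropout(layer, keep_prob=0.5)   # at test time, use keep_prob=1.0

with tf.Session() as sess:
    vals = np.ones((1, 8), dtype=np.float32)
    # roughly half the entries are zeroed; the survivors are scaled to 2.0
    print(sess.run(dropped, feed_dict={layer: vals}))
```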
## Model
<a id = 'Model'></a>
```
# helper functions
def weight_variable(shape):
"""
Info:
Description:
Specifies weights for either a fully connected layer or convolutional layer.
Randomized initially using a truncated normal distribution with a SD of 0.1.
This is a pretty typical randomization method.
"""
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
"""
Info:
Description:
Defines bias elements in either a fully connected layer or convolutional layer.
Initialized with the constant value of 0.1
"""
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
def conv2d(x, W):
"""
Info:
Description:
Specifies the convolution that will typically be used.
This represents a full convolution (no skipping) with padding
that creates an output that is the same size as the input.
"""
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding="SAME")
def max_pool_2x2(x):
"""
Info:
Description:
Sets the max pool to half the size across both the height and width.
In total, the output is one quarter of the size of the input feature map.
"""
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")
def conv_layer(input, shape):
"""
Info:
Description:
The convolutional layer, linear convolution as defined in conv2d, with a
bias followed by the ReLU activation function.
"""
W = weight_variable(shape)
b = bias_variable([shape[3]])
return tf.nn.relu(conv2d(input, W) + b)
def full_layer(input, size):
"""
Info:
Description:
A standard full layer with a bias. To be used for the final output.
"""
in_size = int(input.get_shape()[1])
W = weight_variable([in_size, size])
b = bias_variable([size])
return tf.matmul(input, W) + b
```
> Remarks - Random initialization, as opposed to constant initialization, helps break the symmetry between learned features, which allows the model to learn a diverse and rich representation. Using a bounded (truncated) distribution helps control the magnitude of the gradients, allowing the network to converge more efficiently.
```
# setup model
# 28 x 28 pixel input
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
x_image = tf.reshape(x, [-1, 28, 28, 1])
# 5 x 5 x 32 feature map. Creates 28 x 28 x 32 feature map followed by 2x2 max pooling
conv1 = conv_layer(x_image, shape=[5, 5, 1, 32])
conv1_pool = max_pool_2x2(conv1)
# 5 x 5 x 32 x 64 (5 by 5 tiles, 32 deep, 64 sets)
# creates 14 x 14 x 64 feature map followed by 2x2 max pooling
conv2 = conv_layer(conv1_pool, shape=[5, 5, 32, 64])
conv2_pool = max_pool_2x2(conv2)
# 7 x 7 x 64 fully connected layer
conv2_flat = tf.reshape(conv2_pool, [-1, 7 * 7 * 64])
full_1 = tf.nn.relu(full_layer(conv2_flat, 1024))
# dropouts
keep_prob = tf.placeholder(tf.float32)
full1_drop = tf.nn.dropout(full_1, keep_prob=keep_prob)
y_conv = full_layer(full1_drop, 10)
```
> Remarks - First, placeholders are defined for the input images and correct labels. Next, the input image is reshaped into the 2D image format of size 28 x 28 x 1. In the basic logistic regression implemented earlier, all pixels were treated independently; with a CNN, however, the power comes from utilizing the spatial relationships between nearby pixels.
> Next come two consecutive convolutional layers with pooling, each using 5 x 5 convolutions (with 32 and 64 feature maps, respectively). These are followed by a single fully connected layer with 1,024 units. Prior to the image arriving at this fully connected layer, we flatten the image back to a single vector form, since the fully connected layer derives no benefit from the spatial relationships between pixels.
> After the second convolution/pooling layer, the size of the image is 7 x 7 x 64: the original 28 x 28 pixel image is reduced to 14 x 14 by the first pooling operation, and then to 7 x 7 by the second pooling operation. The '64' in 7 x 7 x 64 is the number of feature maps created in the second convolutional layer.
> One interesting thing to note is that the number of parameters between the 7 x 7 x 64 layer and the fully connected 1 x 1 x 1,024 layer is about 3.2 million. Without max pooling we would instead have a 28 x 28 x 64 feature map, which would yield roughly 51 million parameters.
> Lastly, the output is a fully connected layer with 10 units, one unit for each handwritten digit.
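The parameter counts quoted above can be verified with a quick back-of-the-envelope calculation (weights only, ignoring biases):
```python
with_pooling = 7 * 7 * 64 * 1024        # ~3.2 million parameters into the 1,024-unit layer
without_pooling = 28 * 28 * 64 * 1024   # ~51 million parameters without any max pooling
print(with_pooling, without_pooling)    # 3211264 51380224
```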
```
# execute model
from tensorflow.examples.tutorials.mnist import input_data
DATA_DIR = "/main/tmp/data"
STEPS = 5000
mnist = input_data.read_data_sets(DATA_DIR, one_hot=True)
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits=y_conv, labels=y_)
)
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(STEPS):
batch = mnist.train.next_batch(50)
if i % 250 == 0:
train_accuracy = sess.run(
accuracy, feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0}
)
print("step {}, training accuracy {}".format(i, train_accuracy))
sess.run(train_step, feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
X = mnist.test.images.reshape(10, 1000, 784)
Y = mnist.test.labels.reshape(10, 1000, 10)
test_accuracy = np.mean(
[
sess.run(accuracy, feed_dict={x: X[i], y_: Y[i], keep_prob: 1.0})
for i in range(10)
]
)
print("test accuracy: {}".format(test_accuracy))
```
# CIFAR10
The CIFAR10 dataset contains a set of 60,000 color images of size 32 x 32 pixels, each belonging to one of ten categories:
- airplane
- automobile
- bird
- cat
- deer
- dog
- frog
- horse
- ship
- truck
<a id = 'CIFAR10'></a>
## Load data
Download the data from [here](https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz).
Unzip the folder from the command line in "main/tmp":
    tar xvzf cifar-10-python.tar.gz
<a id = 'Load-data'></a>
```
import _pickle as cPickle
# create custom data loader
path = "s3://tdp-ml-datasets/cifar-10-batches-py"
class CifarLoader:
def __init__(self, source_files):
self._source = source_files
self._i = 0
self.images = None
self.labels = None
def load(self):
data = [unpickle(f) for f in self._source]
images = np.vstack([d["data"] for d in data])
n = len(images)
self.images = (
images.reshape(n, 3, 32, 32).transpose(0, 2, 3, 1).astype(float) / 255
)
self.labels = one_hot(np.hstack([d["labels"] for d in data]), 10)
return self
def next_batch(self, batch_size):
x, y = (
self.images[self._i : self._i + batch_size],
self.labels[self._i : self._i + batch_size],
)
self._i = (self._i + batch_size) % len(self.images)
return x, y
# unpickle function
def unpickle(file):
"""
Info:
Description:
Returns dictionary with fields 'data' and 'labels'.
"""
with open(os.path.join(path, file), "rb") as fo:
dict = cPickle.load(fo, encoding="latin1")
return dict
# custom one hot encoder
def one_hot(vec, vals=10):
"""
Info:
Description:
            Recodes the labels from integers in range 0 to 9 into
            vectors of length 10, containing 0's except for a 1 at the
            position of the label.
"""
n = len(vec)
out = np.zeros((n, vals))
out[range(n), vec] = 1
return out
# custom dataset handler
class CifarDataManager:
def __init__(self):
self.train = CifarLoader(
["data_batch_{}".format(i) for i in range(1, 6)]
).load()
self.test = CifarLoader(["test_batch"]).load()
# display sample images
def display_cifar(images, size):
n = len(images)
plt.figure(figsize=(15, 15))
plt.gca().set_axis_off()
im = np.vstack(
[
np.hstack([images[np.random.choice(n)] for i in range(size)])
for i in range(size)
]
)
plt.imshow(im)
plt.show()
d = CifarDataManager()
print("Number of train images: {}".format(len(d.train.images)))
print("Number of train images: {}".format(len(d.train.labels)))
print("Number of test images: {}".format(len(d.test.images)))
print("Number of test images: {}".format(len(d.test.labels)))
images = d.train.images
display_cifar(images, 10)
```
## Simple model
<a id = 'Simple-model'></a>
```
# simple CNN
STEPS = 3000
BATCH_SIZE = 100
cifar = CifarDataManager()
x = tf.placeholder(tf.float32, shape=[None, 32, 32, 3])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
keep_prob = tf.placeholder(tf.float32)
conv1 = conv_layer(x, shape=[5, 5, 3, 32])
conv1_pool = max_pool_2x2(conv1)
conv2 = conv_layer(conv1_pool, shape=[5, 5, 32, 64])
conv2_pool = max_pool_2x2(conv2)
conv2_flat = tf.reshape(conv2_pool, [-1, 8 * 8 * 64])
full_1 = tf.nn.relu(full_layer(conv2_flat, 1024))
full1_drop = tf.nn.dropout(full_1, keep_prob=keep_prob)
y_conv = full_layer(full1_drop, 10)
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits=y_conv, labels=y_)
)
train_step = tf.train.AdamOptimizer(1e-3).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
def test(sess):
X = cifar.test.images.reshape(10, 1000, 32, 32, 3)
Y = cifar.test.labels.reshape(10, 1000, 10)
acc = np.mean(
[
sess.run(accuracy, feed_dict={x: X[i], y_: Y[i], keep_prob: 1.0})
for i in range(10)
]
)
print("Accuracy: {:.4}".format(acc * 100))
# run session
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(STEPS):
batch = cifar.train.next_batch(BATCH_SIZE)
if i % 250 == 0:
train_accuracy = sess.run(
accuracy, feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0}
)
print("step {}, training accuracy {}".format(i, train_accuracy))
sess.run(train_step, feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
test(sess)
```
> Remarks - A key difference between this model and the MNIST model:
```python
x = tf.placeholder(tf.float32, shape = [None, 32, 32, 3])
```
The '3' in the final position of the list defining the placeholder's shape corresponds to the 3 color channels available in the color CIFAR image.
## Better model
We can also add a third convolutional layer with 128 feature maps, along with dropout. We also reduce the number of units in the fully connected layer from 1,024 to 512.
This model will take longer to train but will increase the accuracy to around 75%.
<a id = 'Better-model'></a>
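As a quick size check before the code (illustrative arithmetic only): three 2x2 max pools take the 32 x 32 CIFAR10 input down to 16 x 16, then 8 x 8, then 4 x 4, which is why the flattened layer below has 4 * 4 * 128 values.
```python
size = 32
for _ in range(3):   # three 2x2 max pools, each halving height and width
    size //= 2
print(size, size * size * 128)  # 4 2048
```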
```
# better CNN model that introduces additional layer and dropout
x = tf.placeholder(tf.float32, shape=[None, 32, 32, 3])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
keep_prob = tf.placeholder(tf.float32)
conv1 = conv_layer(x, shape=[5, 5, 3, 32])
conv1_pool = max_pool_2x2(conv1)
conv2 = conv_layer(conv1_pool, shape=[5, 5, 32, 64])
conv2_pool = max_pool_2x2(conv2)
conv3 = conv_layer(conv2_pool, shape=[5, 5, 64, 128])
conv3_pool = max_pool_2x2(conv3)
conv3_flat = tf.reshape(conv3_pool, [-1, 4 * 4 * 128])
conv3_drop = tf.nn.dropout(conv3_flat, keep_prob=keep_prob)
full_1 = tf.nn.relu(full_layer(conv3_drop, 512))
full1_drop = tf.nn.dropout(full_1, keep_prob=keep_prob)
y_conv = full_layer(full1_drop, 10)
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits=y_conv, labels=y_)
)
train_step = tf.train.AdamOptimizer(1e-3).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
def test(sess):
X = cifar.test.images.reshape(10, 1000, 32, 32, 3)
Y = cifar.test.labels.reshape(10, 1000, 10)
acc = np.mean(
[
sess.run(accuracy, feed_dict={x: X[i], y_: Y[i], keep_prob: 1.0})
for i in range(10)
]
)
print("Accuracy: {:.4}".format(acc * 100))
# run session
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(STEPS):
batch = cifar.train.next_batch(BATCH_SIZE)
if i % 250 == 0:
train_accuracy = sess.run(
accuracy, feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0}
)
print("step {}, training accuracy {}".format(i, train_accuracy))
sess.run(train_step, feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
test(sess)
```
> Remarks - Further improvement
- Model size
    - Deeper network with many more adjustable parameters
- Additional types of layers and methods
    - Additional types of layers, such as local response normalization, can be incorporated into the existing structure.
- Optimization tricks
- (More later)
- Domain knowledge
- Pre-processing data utilizing domain knowledge
- Data augmentation
    - Adding training data based on the existing dataset. Rotating an image in any number of ways effectively introduces a new training sample.
- Reusing successful methods and architectures
- Find a time-proven method and adapt to the needs of the current problem.
## Alternative approach
This model includes three blocks of convolutional layers, followed by the same fully connected and output layers as before. Each block contains three consecutive convolutional layers, followed by a single pooling and dropout layer.
The constants C1, C2 and C3 control the number of feature maps in each layer of each of the convolutional blocks, and the constant F1 controls the number of units in the fully connected layer.
<a id = 'Alternative-approach'></a>
```
# CNN model that introduces additional CNN layers
C1, C2, C3 = 30, 50, 80
F1 = 500
x = tf.placeholder(tf.float32, shape=[None, 32, 32, 3])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
keep_prob = tf.placeholder(tf.float32)
conv1_1 = conv_layer(x, shape=[3, 3, 3, C1])
conv1_2 = conv_layer(conv1_1, shape=[3, 3, C1, C1])
conv1_3 = conv_layer(conv1_2, shape=[3, 3, C1, C1])
conv1_pool = max_pool_2x2(conv1_3)
conv1_drop = tf.nn.dropout(conv1_pool, keep_prob=keep_prob)
conv2_1 = conv_layer(conv1_drop, shape=[3, 3, C1, C2])
conv2_2 = conv_layer(conv2_1, shape=[3, 3, C2, C2])
conv2_3 = conv_layer(conv2_2, shape=[3, 3, C2, C2])
conv2_pool = max_pool_2x2(conv2_3)
conv2_drop = tf.nn.dropout(conv2_pool, keep_prob=keep_prob)
conv3_1 = conv_layer(conv2_drop, shape=[3, 3, C2, C3])
conv3_2 = conv_layer(conv3_1, shape=[3, 3, C3, C3])
conv3_3 = conv_layer(conv3_2, shape=[3, 3, C3, C3])
conv3_pool = tf.nn.max_pool(
conv3_3, ksize=[1, 8, 8, 1], strides=[1, 8, 8, 1], padding="SAME"
)
conv3_flat = tf.reshape(conv3_pool, [-1, C3])
conv3_drop = tf.nn.dropout(conv3_flat, keep_prob=keep_prob)
full1 = tf.nn.relu(full_layer(conv3_drop, F1))
full1_drop = tf.nn.dropout(full1, keep_prob=keep_prob)
y_conv = full_layer(full1_drop, 10)
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits=y_conv, labels=y_)
)
train_step = tf.train.AdamOptimizer(1e-3).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
def test(sess):
X = cifar.test.images.reshape(10, 1000, 32, 32, 3)
Y = cifar.test.labels.reshape(10, 1000, 10)
acc = np.mean(
[
sess.run(accuracy, feed_dict={x: X[i], y_: Y[i], keep_prob: 1.0})
for i in range(10)
]
)
print("Accuracy: {:.4}".format(acc * 100))
# run session
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(20000):
batch = cifar.train.next_batch(BATCH_SIZE)
if i % 500 == 0:
train_accuracy = sess.run(
accuracy, feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0}
)
print("step {}, training accuracy {}".format(i, train_accuracy))
sess.run(train_step, feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
test(sess)
```
> Remarks - Prior to the dropout step of the third convolutional block, there is an 8x8 max pool layer:
```python
conv3_pool = tf.nn.max_pool(conv3_3, ksize = [1, 8, 8, 1]
,strides = [1, 8, 8, 1], padding = 'SAME')
```
By this point the feature maps are of size 8 x 8 (following the first two poolings, each of which reduced the 32 x 32 images by half along each axis), so this pooling operation globally pools each feature map, keeping only its maximal value. The number of feature maps in the third block was set to 80, so after the max pooling the representation is reduced to only 80 numbers per image. This keeps the size of the model small - the number of parameters in the transition to the fully connected layer is only 80 x 500.
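The savings can be checked with a quick calculation (weights only, ignoring biases):
```python
with_global_pool = 80 * 500              # 40,000 parameters into the 500-unit layer
without_global_pool = 8 * 8 * 80 * 500   # 2,560,000 parameters if the full 8x8x80 map were flattened
print(with_global_pool, without_global_pool)
```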
# T4 - Characteristics
Characteristics can be conceptually difficult to define, but are fairly simple in practice. They are essentially special parameters that are specific to working with compartments and groups of compartments. We will motivate their design with a worked example that builds on the multi-population framework from T3, and then conclude by discussing other aspects of their design that differ from parameters.
The key functionality provided by characteristics is this - the example in T3 had users initialize the compartment sizes by directly entering values for the number of people in the 'Susceptible' and 'Infected' compartments, both of which appeared on the 'Stocks' sheet in the databook.

However, typically country data does not correspond directly to the compartments in the databook. For example, suppose we know
- The total number of people alive
- The number of people who have ever been infected
- The proportion of infections that have now been resolved
We could use this data to work out what the corresponding compartment sizes should be. For example, if we know that there are 1000 people in total, of whom 400 have ever been infected, and that 75% of those infections have been resolved, then the corresponding initial compartment sizes would be
- `sus = 600`
- `inf = 100`
- `rec = 300`
which satisfies `sus+inf+rec=1000`, `inf+rec=400`, and `rec/(inf+rec)=0.75`. The motivation for characteristics is that we want the databook to contain data entry for the total number of people, the number ever infected, and the proportion resolved, because those are the values corresponding to the available data. We would like Atomica to work out the corresponding compartment sizes, rather than having to do the calculation manually.
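The arithmetic above is just a small linear system, which can be checked by hand or with a quick script (a sketch of the calculation only - Atomica performs this internally):
```python
import numpy as np

# alive:    sus + inf + rec = 1000
# ever_inf:       inf + rec = 400
# resolved:             rec = 0.75 * 400 = 300
A = np.array([[1, 1, 1],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)
b = np.array([1000, 400, 0.75 * 400])
sus, inf_, rec = np.linalg.solve(A, b)
print(sus, inf_, rec)  # 600.0 100.0 300.0
```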
To do this, we need to store the information in the framework that we have quantities
- `alive = sus+inf+rec`
- `ever_inf = inf+rec`
- `prop_resolved = rec/ever_inf`
and have these quantities appear in the databook instead of the compartments themselves. We could achieve the required data entry using parameters. However, we can't use the parameters to initialize compartments. This is why there is a separate system, 'characteristics', that allows expressions of groups of compartments to be used for initialization.
We can set up the three characteristics defined above in a fairly straightforward way on the 'Characteristics' sheet. Rather than writing the formulas above with '+' and '/' operations, we instead provide a comma separated list of compartments (or other characteristics) to sum (in the 'components' column) and we provide the denominator separately in the 'denominator' column. So the corresponding characteristics sheet is

We will also remove the 'Databook page' for the compartments on the compartments sheet, since we want to initialize the model using characteristics only.
If we create a databook from the framework as usual, we will have updated data entry tables on the 'Stocks' sheet. We can then go ahead and fill them out with the initialization described above:

The framework and databook are available in the Atomica repository under `atomica/docs/tutorial/assets/t4_framework_1.xlsx` and `atomica/docs/tutorial/assets/t4_databook_1.xlsx`, respectively. We can now load these files in and run a simulation:
```
import atomica as at
P = at.Project(framework='assets/T4/t4_framework_1.xlsx',databook='assets/T4/t4_databook_1.xlsx')
result = P.results[0]
```
We now want to check that the initialization has been performed correctly. In the `result` we can retrieve the variables for the compartment sizes and inspect their values at the first timestep
```
print('sus = %.2f' % (result.get_variable('sus')[0].vals[0]))
print('inf = %.2f' % (result.get_variable('inf')[0].vals[0]))
print('rec = %.2f' % (result.get_variable('rec')[0].vals[0]))
```
So we have successfully used characteristics to have Atomica automatically convert from the aggregated data values to the underlying compartment values.
Under the hood, we are solving a system of simultaneous equations. What happens if there are more unknowns than there are equations? This corresponds to the system being 'underdetermined'. For example, suppose we know that there are 1000 people in total, of whom 400 have ever been infected, but we don't know the proportion of people whose infections have been resolved. How do we then decide whether we have 100 infected and 300 recovered, or 300 infected and 100 recovered? Atomica uses the 'minimum norm' solution, which means that the known totals are distributed equally across the compartments they cover, and a compartment is set to zero if no information about it is available. We will see this with two examples. First, consider the case above where we only know the total population size and the number ever infected. This corresponds to the framework and databook containing


The minimum norm solution would see the 400 people uniformly distributed across `inf` and `rec`, so there will be 200 people in each compartment. If we run the model with these spreadsheets, we obtain
```
import atomica as at
P = at.Project(framework='assets/T4/t4_framework_2.xlsx',databook='assets/T4/t4_databook_2.xlsx')
result = P.results[0]
print('sus = %.2f' % (result.get_variable('sus')[0].vals[0]))
print('inf = %.2f' % (result.get_variable('inf')[0].vals[0]))
print('rec = %.2f' % (result.get_variable('rec')[0].vals[0]))
```
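The expected 600/200/200 split can also be reproduced with a standard minimum-norm least-squares solve (a sketch of the general idea only - this is not Atomica's internal solver):
```python
import numpy as np

A = np.array([[1, 1, 1],    # sus + inf + rec = 1000  (alive)
              [0, 1, 1]],   #       inf + rec = 400   (ever infected)
             dtype=float)
b = np.array([1000, 400])
x, *_ = np.linalg.lstsq(A, b, rcond=None)  # minimum-norm solution of the underdetermined system
print(x)  # [600. 200. 200.]
```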
We also now receive a warning that 'Initialization characteristics are underdetermined', which reflects the fact that we had to rely on the minimum norm solution to infer the value of some of the compartments. To see what happens when a compartment has no information at all, we can remove the 'alive' characteristic entirely, leaving us with:


Now, we expect that the 400 people will be assigned to `inf` and `rec` in equal proportions, but since we have no information at all about `sus`, it will be initialized with a value of zero:
```
import atomica as at
P = at.Project(framework='assets/T4/t4_framework_3.xlsx',databook='assets/T4/t4_databook_3.xlsx')
result = P.results[0]
print('sus = %.2f' % (result.get_variable('sus')[0].vals[0]))
print('inf = %.2f' % (result.get_variable('inf')[0].vals[0]))
print('rec = %.2f' % (result.get_variable('rec')[0].vals[0]))
```
It's possible to freely mix compartments and characteristics for initialization. For example, we could set a databook page for 'susceptible' on the compartments sheet, and have the databook explicitly contain data entry for the 'susceptible' compartment as well as the number ever infected.
What happens if you enter conflicting information? For example, what if the number ever infected is greater than the total number of people? In that case, a negative compartment size would occur, resulting in an error, and the simulation cannot be run unless you find and fix the inconsistency. Atomica will print out diagnostic output to help identify where the negative compartment size originates from. Unfortunately, diagnosing these errors is still challenging, and it remains one of the most difficult parts of framework design in Atomica.
<div class="alert alert-block alert-danger">
Errors relating to negative compartment sizes mean that inconsistent information about the system has been entered
</div>
It's also possible for a system to be overdetermined - for example, if you specify data values for `inf`, `rec`, and `inf+rec`. If those values are inconsistent, a warning will be displayed and 'best fit' values will be used.
<div class="alert alert-block alert-info">
<b>Further reading:</b> Compartment initialization is described in more detail in the <a href='https://docs.atomica.tools/en/master/general/Compartment-Initialization.html'>Atomica code documentation</a>
</div>
Finally, the last main difference between characteristics and parameters is that characteristics take up no storage space, because they get dynamically evaluated in the results. Thus, using characteristics instead of parameters leads to smaller file sizes.
```
import os
import pandas as pd
import pickle as pkl
import glob
from process_jtwc import *
from itertools import product
import pymongo
import requests, zipfile, io
import csv
import pdb
```
# JTWC
From https://www.metoc.navy.mil/jtwc/jtwc.html?best-tracks
# Download files for certain years and regions.
```
#download JTWC data for certain years
years = [ x for x in range(2000, 2001+1)]
regions = ['bio', 'bsh', 'bwp']
def get_tc_by_year_region(year=2018, region='bio'):
url = make_url(year, region)
resp = requests.get(url)
if not resp.status_code // 100 == 2:
return "Error: Unexpected response {}".format(resp)
z = zipfile.ZipFile(io.BytesIO(resp.content))
file_path = make_file_path(year, region)
z.extractall(file_path)
def make_file_path(year, region, base='/storage/hurricane'):
filePath = os.path.join(base, region, str(year))
if not os.path.exists(filePath):
os.makedirs(filePath)
return filePath
def make_url(year=2018, region='bio'):
return f'https://www.metoc.navy.mil/jtwc/products/best-tracks/{year}/{year}s-{region}/{region}{year}.zip'
def download_jtwc_files(years, regions, base='/storage/hurricane'):
for year, region in product(years, regions):
make_file_path(year, region, base)
get_tc_by_year_region(year, region)
download_jtwc_files(years, regions)
```
## Convert files into a dataframe
```
df_lst = []
dr = glob.glob('/storage/hurricane/*/*/*.dat') # location of files taken from website.
ulist = []
for fn in dr:
# print(fn)
raw = pd.read_table(fn, header=None, delimiter=',', usecols=range(11))
df_raw = convert_df(raw)
ulist = ulist + df_raw.ID.unique().tolist()
df_lst = df_lst + df_raw.to_dict(orient='records')
df = pd.DataFrame(df_lst)
df['source'] = 'JTWC'
df = df.applymap(lambda x: x.replace(' ', '') if isinstance(x, str) else x)
df = df.rename({"ID":'_id', "LONG": 'lon', 'press': 'pres', 'SEASON':'year'}, axis=1)
df.columns = [col.lower() for col in df.columns]
df.year = df.year.astype(np.int64)
df.time = df.time.astype(np.int64)
df.lat = df.lat.astype(np.float64)
df.lon = df.lon.astype(np.float64)
df['geoLocation'] = [ {"type": "point", "coordinates": [lng, lat]} for lng, lat in df[['lon', 'lat']].values]
df.head()
df.lat.min()
def is_empty(x):
if isinstance(x, str):
return x == ''
elif isinstance(x, float):
return np.isnan(x)
else:
return False
clean_dict_keys = lambda my_dict: list(filter(lambda k: not is_empty(my_dict[k]), my_dict))
def clean_dict(my_dict):
new_dict = dict()
keys = clean_dict_keys(my_dict)
for key in keys:
new_dict[key] = my_dict[key]
return new_dict
def make_docs(df):
docs = []
keys = ['_id', 'name', 'num', 'source']
key_types = {'_id': str, 'name':str , 'num': int, 'source': str}
cols = [col for col in df.columns if col not in keys]
for _id, df_id in df.groupby(['_id']):
df_id.shape
doc = {}
for key in keys:
tpe = key_types[key]
assert len(df_id[key].unique()) == 1, 'nondistinct id'
doc[key] = df_id[key].astype(tpe).iloc[0]
if key == 'num':
doc[key] = int(doc[key])
traj_data = df_id[cols].to_dict(orient='records')
traj_data = [clean_dict(x) for x in traj_data]
year = int(df_id.year.iloc[0])
doc['year'] = year
doc['startDate'] = df_id.timestamp.min()
doc['endDate'] = df_id.timestamp.max()
doc['traj_data'] = df_id[cols].to_dict(orient='records')
doc['_id'] = doc['_id'] + '_' + doc['source']
if doc['name'] == 'UNNAMED':
del doc['name']
docs.append(doc)
docs = [clean_dict(x) for x in docs]
return docs
docs = make_docs(df)
docs[0].keys()
```
# HURDAT2
from https://www.nhc.noaa.gov/data/#hurdat
```
pacific_filename = '/storage/hurricane/hurdat2/pacific.csv'
atlantic_filename = '/storage/hurricane/hurdat2/atlantic.csv'
stormStartCh = ['EP', 'CP', 'AL']
def make_trop_cyc_list(filename):
startIdx = []
tcs = []
with open(filename) as csvfile:
spamreader = csv.reader(csvfile, delimiter=',')
for idx, row in enumerate(spamreader):
if len(row) == 0:
continue
if row[0][0:2] in stormStartCh:
startIdx.append(idx)
_id = row[0]
name= row[1]
num = row[2]
storm = [_id, name, num]
else:
tc = storm + row
tcs.append(tc[0:11])
return tcs
pacific_tcs = make_trop_cyc_list(pacific_filename)
atlantic_tcs = make_trop_cyc_list(atlantic_filename)
cols = ['_id', 'name', 'num', 'date', 'time', 'l', 'class', 'lat', 'lon', 'wind', 'pres']
def convert_lat_lon(strL, postiveDir='N'):
L = float(strL[:-1].replace(' ', ''))
if not postiveDir in strL:
L *= -1
return L
def convert_time(time):
hour = (int(time)/100) %12
return hour
def make_cyc_df(tcs):
df = pd.DataFrame(tcs, columns=cols)
df['year'] = df.date.apply(lambda x: int(x[0:4]))
df = df[df['year'] >= 2000]
df = df.dropna(axis=0, how='any', subset=['lat', 'lon'])
df = df.applymap(lambda x: x.replace(' ', '') if isinstance(x, str) else x)
df.lon = df.lon.apply(lambda lon: convert_lat_lon(lon, 'E')).astype(np.float64)
df.lat = df.lat.apply(lambda lat: convert_lat_lon(lat, 'N')).astype(np.float64)
df.pres = df.pres.astype(np.int64)
df.pres = df.pres.replace(-999, np.nan)
df.wind = df.wind.astype(np.int64)
df.num = df.num.astype(np.int64)
df.date = df.date.apply(lambda x: x[0:4] + '-' + x[4:6] + '-' + x[6:8])
datetimes = df.date.values + ' ' + df.time.values
df['timestamp'] = pd.to_datetime(datetimes, format='%Y-%m-%d %H%M')
df['source'] = 'HURDAT2'
df['geoLocation'] = [ {"type": "point", "coordinates": [lng, lat]} for lng, lat in df[['lon', 'lat']].values]
df.time = df.time.astype(np.int64)
return df
df_pacific = make_cyc_df(pacific_tcs)
df_atlantic = make_cyc_df(atlantic_tcs)
df_pacific.lat.min()
df_atlantic.lat.min()
df_pacific.head()
hurdat2_docs = make_docs(df_pacific) + make_docs(df_atlantic)
df_lane = df_pacific[(df_pacific['name'] == 'LANE') & (df_pacific['year'] == 2018)]
df_lane.head()
test_doc = make_docs(df_lane)
```
# add docs to mongoDB database
```
def create_collection(dbName, collectionName, init_collection):
dbUrl = 'mongodb://localhost:27017/'
client = pymongo.MongoClient(dbUrl)
db = client[dbName]
coll = db[collectionName]
coll = init_collection(coll)
return coll
def init_tc_collection(coll):
coll.create_index([('name', pymongo.DESCENDING)])
coll.create_index([('startDate', pymongo.DESCENDING)])
coll.create_index([('endDate', pymongo.DESCENDING)])
coll.create_index([('startDate', pymongo.DESCENDING), ('endDate', pymongo.DESCENDING)])
return coll
def init_tc_traj_collection(coll):
return coll
#insert docs
dbName='argo'
collectionName = 'tc'
coll = create_collection(dbName, collectionName, init_tc_collection)
coll.drop()
coll.insert_many(docs)
coll.insert_many(hurdat2_docs)
dbName='argo-express-test'
collectionName = 'tc'
coll = create_collection(dbName, collectionName, init_tc_collection)
coll.drop()
coll.insert_many(test_doc)
```
# Convolutional Layer
In this notebook, we visualize four filtered outputs (a.k.a. activation maps) of a convolutional layer.
In this example, *we* are defining four filters that are applied to an input image by initializing the **weights** of a convolutional layer, but a trained CNN will learn the values of these weights.
<img src='notebook_ims/conv_layer.gif' height=60% width=60% />
### Import the image
```
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
```
### Define and visualize the filters
```
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
# visualize all four filters
fig = plt.figure(figsize=(10, 5))
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
width, height = filters[i].shape
for x in range(width):
for y in range(height):
ax.annotate(str(filters[i][x][y]), xy=(y,x),
horizontalalignment='center',
verticalalignment='center',
color='white' if filters[i][x][y]<0 else 'black')
```
## Define a convolutional layer
The various layers that make up any neural network are documented, [here](http://pytorch.org/docs/stable/nn.html). For a convolutional neural network, we'll start by defining a:
* Convolutional layer
Initialize a single convolutional layer so that it contains all your created filters. Note that you are not training this network; you are initializing the weights in a convolutional layer so that you can visualize what happens after a forward pass through this network!
#### `__init__` and `forward`
To define a neural network in PyTorch, you define the layers of a model in the function `__init__` and define the forward behavior of the network - which applies those initialized layers to an input (`x`) - in the function `forward`. In PyTorch we convert all inputs into the Tensor datatype, which is similar to a list data type in Python.
Below, I define the structure of a class called `Net` that has a convolutional layer that can contain four 4x4 grayscale filters.
```
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a single convolutional layer with four filters
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# returns both layers
return conv_x, activated_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
```
### Visualize the output of each filter
First, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
```
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1, xticks=[], yticks=[])
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
```
Let's look at the output of a convolutional layer, before and after a ReLU activation function is applied.
```
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get the convolutional layer (pre and post activation)
conv_layer, activated_layer = model(gray_img_tensor)
# visualize the output of a conv layer
viz_layer(conv_layer)
```
#### ReLU activation
In this model, we've used an activation function that scales the output of the convolutional layer. We've chosen a ReLU function to do this, which simply turns all negative pixel values into 0 (black). See the equation pictured below for input pixel values, `x`.
<img src='notebook_ims/relu_ex.png' height=50% width=50% />
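For reference, in case the image above does not display: the ReLU function maps each input pixel value `x` to
$$\mathrm{ReLU}(x) = \max(0, x)$$
so negative values become 0 and positive values pass through unchanged.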
```
# after a ReLu is applied
# visualize the output of an activated conv layer
viz_layer(activated_layer)
```
# **Importing Libraries**
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler , Normalizer
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
```
# **Loading and Visualizing Data**
```
corona_pd=pd.read_csv("../input/covid19-symptoms-checker/Cleaned-Data.csv")
corona_pd.sample(5)
#Returns the meta data of the dataset.
corona_pd.info()
#Returns the information like mean,max,min,etc., of the dataset.
corona_pd.describe()
#To remove the columns of the DataFrame in memory.
corona_pd.drop(["Country"],axis=1,inplace=True)
corona_pd.sample(5)
#Returns the sum of null values under each column.
corona_pd.isnull().sum()
#To check whether the row contains duplicate values or not.
corona_pd.duplicated()
#To plot a correlation matrix between features.
f,ax= plt.subplots(figsize=(30,30))
sns.heatmap(corona_pd.corr(),annot=True)
```
You can see from the above correlation matrix that each feature has an effect on the other features.
# **Elbow Method**
Used to find the optimal number of clusters (K).
```
#To scale the values along columns.
scaler= StandardScaler()
corona_pd_scaled=scaler.fit_transform(corona_pd)
#To get the Within Cluster Sum of Squares(WCSS) for each cluster count to find the optimal K value(i.e cluster count).
scores=[]
for i in range(1,20):
corona_means=KMeans(n_clusters=i)
corona_means.fit(corona_pd_scaled)
scores.append(corona_means.inertia_)
#Plotting the values obtained to get the optimal K-value.
plt.plot(scores,"-rx")
```
At point 7, the graph looks like an elbow, so we choose this as our K value.
# **K-MEANS Implementation**
```
#Applying K-means algorithm with the obtained K value.
corona_means=KMeans(n_clusters=7)
corona_means.fit(corona_pd_scaled)
#Returns an array with cluster labels to which it belongs.
labels=corona_means.labels_
#Creating a Dataframe with cluster centres(The example which is taken as center for each cluster)-If you are not familiar ,learn about k-means through the link given at last.
corona_pd_m=pd.DataFrame(corona_means.cluster_centers_,columns=corona_pd.columns)
corona_pd_m
```
It's clear from the above table that the people in cluster 4 are **not affected by corona**, while **the other clusters are affected by corona**. The other clusters can also be characterized further; have a close look and you can find differences between them.
```
#Concatenating the cluster labels.
corona_cluster=pd.concat([corona_pd,pd.DataFrame({"Cluster":labels})],axis=1)
corona_cluster.sample(5)
```
# **Principal Component Analysis (PCA)**
Used to perform dimensionality reduction so we can get a better view of the clusters of examples.
```
#Implementing pca with 3 components i.e 3d plot
corona_pca=PCA(n_components=3)
principal_comp=corona_pca.fit_transform(corona_pd_scaled)
principal_comp=pd.DataFrame(principal_comp,columns=['pca1','pca2','pca3'])
principal_comp.head()
principal_comp=pd.concat([principal_comp,pd.DataFrame({"Cluster":labels})],axis=1)
principal_comp.sample(5)
#Plotting the 2d-plot.
plt.figure(figsize=(10,10))
ax=sns.scatterplot(x='pca1',y='pca2',hue="Cluster",data=principal_comp ,palette=['red','green','blue','orange','black','yellow','violet'])
plt.show()
```
You can see the same cluster separation, mentioned earlier, in the above visualization.
```
#Plotting the 3d-plot
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111, projection='3d')
sc=ax.scatter(xs=principal_comp['pca1'],ys=principal_comp['pca3'],zs=principal_comp['pca2'],c=principal_comp['Cluster'],marker='o',cmap="gist_rainbow")
plt.colorbar(sc)
plt.show()
```
You can get an even better view from the above 3D plot.
For K-Means algorithm - Refer this [link](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&cad=rja&uact=8&ved=2ahUKEwiInYrt2cDqAhUbyzgGHaiTBHAQwqsBMAF6BAgKEAQ&url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DhDmNF9JG3lo&usg=AOvVaw0uqMZBuHXA-UDOHz-ymSfK)
For PCA - Refer this [link](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&cad=rja&uact=8&ved=2ahUKEwjB1NOK28DqAhUjguYKHUNxDxwQwqsBMAB6BAgKEAQ&url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3Drng04VJxUt4&usg=AOvVaw14_fwoAfo-0sWFezc-7qiy)
Thank You!
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
def calculate_z(x, y, smearing=False):
# Gaussian + quadratic relation
z = 80*np.exp(-(x-75)**2/30**2) + 0.3 * y + 0.005*y*y
# Linear relation
# z = 20 + x + 2*y
if smearing:
sig_z = 0.1 + (x + y) / 1000 # Up to 0.3
        z = np.random.normal(z, sig_z * z)  # smear z with a relative standard deviation of sig_z
return z
```
# Inspecting calculate_z by plotting x and y projections
```
# z function
data_non_zero = np.random.rand(100) * 100
data_zero = np.zeros(100)
plt.plot(
data_non_zero,
calculate_z(data_non_zero, data_zero, smearing=True),
linestyle='none',
marker='o',
markersize=3
)
plt.plot(
data_non_zero,
calculate_z(data_zero, data_non_zero, smearing=True),
linestyle='none',
marker='o',
markersize=3
)
ax = plt.gca()
ylim = ax.get_ylim()
ax.set_ylim(0, ylim[1])
```
# Creating a large dataset
```
def create_dataset(size):
data_x = np.random.rand(size) * 100
data_y = np.random.rand(size) * 100
data_z = calculate_z(data_x, data_y, smearing=True)
df = pd.DataFrame({'x': data_x, 'y': data_y, 'z': data_z})
return df
# Creating dataset for plotting
data = create_dataset(100_000)
data
```
### Plotting data
```
fig = plt.figure()
ax = plt.axes(projection='3d')
ax.scatter3D(data['x'], data['y'], data['z'], c='g', s=0.001)
```
## Scaling and splitting data
```
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
# Splitting data
features = np.array(data[['x', 'y']])
target = np.array(data[['z']])
# target = np.ravel(target)
features_train, features_test, target_train, target_test = train_test_split(
features,
target,
random_state=1
)
# Scaling data
scaler_features = StandardScaler()
scaler_features.fit(features_train)
features_train_scaled = scaler_features.transform(features_train)
features_test_scaled = scaler_features.transform(features_test)
scaler_target = StandardScaler()
scaler_target.fit(target_train)
target_train_scaled = scaler_target.transform(target_train)
```
# MLP Regression
```
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score
reg = MLPRegressor(
hidden_layer_sizes=(6, 6),
activation="relu",
random_state=1,
max_iter=2000
).fit(features_train_scaled, np.ravel(target_train_scaled))
pred_test = reg.predict(features_test_scaled)
pred_test = scaler_target.inverse_transform(pred_test)
print(pred_test.shape)
abs_deviation = (pred_test - np.ravel(target_test))
rel_deviation = abs_deviation / np.ravel(target_test)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
ax1.hist(rel_deviation, bins=20)
ax2.hist(abs_deviation, bins=20)
mean_abs = np.mean(abs_deviation)
std_abs = np.std(abs_deviation)
print('Abs: {:.3} +/- {:.3}'.format(mean_abs, std_abs))
mean_rel = np.mean(rel_deviation)
std_rel = np.std(rel_deviation)
print('Rel: {:.3} +/- {:.3}'.format(mean_rel, std_rel))
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(12, 8))
ms = 2
mm = 'o'
ax1.plot(
features_test[:,0],
rel_deviation,
linestyle='none',
marker=mm,
markersize=ms
)
ax1.set_ylabel('relative deviation')
ax1.set_xlabel('x')
ax2.plot(
features_test[:,1],
rel_deviation,
linestyle='none',
marker=mm,
markersize=ms
)
ax2.set_ylabel('relative deviation')
ax2.set_xlabel('y')
ax3.plot(
features_test[:,0],
abs_deviation,
linestyle='none',
marker=mm,
markersize=ms
)
ax3.set_ylabel('absolute deviation')
ax3.set_xlabel('x')
ax4.plot(
features_test[:,1],
abs_deviation,
linestyle='none',
marker=mm,
markersize=ms
)
ax4.set_ylabel('absolute deviation')
ax4.set_xlabel('y')
features_test[:,0]
fig = plt.figure()
ax = plt.axes(projection='3d')
ax.scatter3D(features_test[:,0], features_test[:,1], abs_deviation, c='g', s=0.001)
# R^2 on the scaled test data, since the model was fit on scaled features and target
score = reg.score(features_test_scaled, np.ravel(scaler_target.transform(target_test)))
score
```
## Create Azure Machine Learning datasets for Predictive Maintenance
Azure Machine Learning datasets can be extremely useful for your local or remote experiments. In this notebook, we will do the following things.
1. Configure workspace using credentials for Azure subscription
2. Download the dataset from ADLS Gen2
3. Upload the featured dataset into the default datastore in Azure
4. Register the featured dataset into Azure
## Configure workspace using credentials for Azure subscription
As part of the setup you have already created a Workspace. Below we connect to that Workspace using your Azure subscription details; this is the Workspace whose default datastore will hold the featured dataset and where the dataset will be registered.
```
# Install the required package
!pip install azure-storage-blob==2.1.0
from azureml.core import Workspace
# Importing user defined config
import config
# Import the subscription details as below to access the resources
subscription_id=config.subscription_id
resource_group=config.resource_group
workspace_name=config.workspace_name
try:
workspace = Workspace(subscription_id = subscription_id, resource_group = resource_group, workspace_name = workspace_name)
# write the details of the workspace to a configuration file to the notebook library
workspace.write_config()
print("Workspace configuration succeeded. Skip the workspace creation steps below")
except:
print("Workspace not accessible. Change your parameters or create a new workspace below")
```
## Download the dataset from ADLS Gen2
```
## setting up the credentials for ADLS Gen2
import os
from azure.storage.blob import BlockBlobService
# setting up blob storage configs
STORAGE_ACCOUNT_NAME = config.STORAGE_ACCOUNT_NAME
STORAGE_ACCOUNT_ACCESS_KEY = config.STORAGE_ACCOUNT_ACCESS_KEY
STORAGE_CONTAINER_NAME = "azureml-mfg"
blob_service = BlockBlobService(STORAGE_ACCOUNT_NAME, STORAGE_ACCOUNT_ACCESS_KEY)
output_file_path=os.path.join(os.getcwd(),"data", "mfg_pdm.csv")
output_blob_file= "mfg_pdm.csv"
# Create a project_folder if it doesn't exist
if not os.path.isdir('data'):
os.mkdir('data')
# downloading the csv from the ADLS Gen2 storage container
blob_service.get_blob_to_path(STORAGE_CONTAINER_NAME, output_blob_file,output_file_path)
```
## Upload the featured dataset into the default datastore in Azure
```
from sklearn import datasets
from azureml.core.dataset import Dataset
from scipy import sparse
import os
# Create a project_folder if it doesn't exist
if not os.path.isdir('data'):
os.mkdir('data')
ds = workspace.get_default_datastore()
ds.upload(src_dir='./data', target_path='mfgdata', overwrite=True, show_progress=True)
final_df = Dataset.Tabular.from_delimited_files(path=ds.path('mfgdata/mfg_pdm.csv'))
```
## Register the featured dataset into Azure
```
# train_data_registered = Dataset.get_by_name(amlworkspace,"train_data",version='latest')
#train_data_registered.unregister_all_versions()
train_data_registered = final_df.register(workspace=workspace,
name='pdmmfg',
description='Synapse Mfg data',
tags= {'type': 'Mfg', 'date':'2020'},
create_new_version=False)
```
```
%matplotlib notebook
import matplotlib
import seaborn as sb
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
# Jupyter Specifics
%matplotlib inline
from IPython.display import display, HTML
from ipywidgets.widgets import interact, interactive, IntSlider, FloatSlider, Layout, ToggleButton, ToggleButtons, fixed
display(HTML("<style>.container { width:100% !important; }</style>"))
style = {'description_width': '100px'}
slider_layout = Layout(width='99%')
from time import time
import pickle as pk
from data import *
# new module data_config imported by data.py as well as Cluster.py
import data_config
data_config.report_correct = False
from Cluster import *
print(len(countries_jhu_4_owid),len(countries_jhu_2_owid),len(countries_owid),len(countries_jhu))
print('countries in common: owid format')
print(countries_jhu_2_owid)
print('')
print('owid countries not in common set')
print(set(countries_owid)-set(countries_jhu_2_owid))
print('')
print('countries in common: jhu format')
print(countries_owid_to_jhu)
print('')
print(len(bcountries),'bcountries',bcountries)
cases = [c for c in clusdata_all]
cases
datasets = ['deaths','cases','cases_lin2020','cases_pwlfit','cases_nonlin']
d_countries = [c for c in clusdata_all['deaths']]
c_countries = [c for c in clusdata_all['cases']]
lc_countries = [c for c in clusdata_all['cases_lin2020']]
pc_countries = [c for c in clusdata_all['cases_pwlfit']]
nc_countries = [c for c in clusdata_all['cases_nonlin']]
countries = d_countries
print(len(d_countries))
print(np.sort(d_countries))
# check that all country sets being used are the same and check time series lengths and starting dates
# 72 countries with Oct 9 finish and with mindeaths=100 and mindays=150 and mindeathspm = 0.5
countrysets = [d_countries,c_countries,lc_countries,pc_countries,nc_countries]
print([len(ccs) for ccs in countrysets])
for ccs1 in countrysets:
print([ccs1 == ccs2 for ccs2 in countrysets])
print([len(clusdata_all[d1]['United States']) for d1 in datasets])
# print(len(total_deaths_x['dates']),len(total_cases_x['dates']),len(testing_x['dates']),total_deaths_x['dates'][0],total_cases_x['dates'][0],testing_x['dates'][0])
covid_owid[0].keys()
covid_ts.keys()
covid_owid_ts.keys()
```
# Data save
Execute this section once to produce the file `data_all_raw.pk`.
```
miscnms = ['clusdata_all','cases','datasets','contact_dic','age_group_dic']
deathnms = [x for x in dir() if 'deaths' in x]
casenms = [x for x in dir() if 'cases' in x if not callable(eval(x))]
covidnms = [x for x in dir() if 'covid' in x]
popnms = [x for x in dir() if 'population' in x]
# lccountries is type dict_keys, which can't be pickled
countrynms = [x for x in dir() if 'countr' in x and x != 'lccountries' and not callable(eval(x))]
countrynms = [x for x in dir() if 'countr' in x and (isinstance(eval(x),dict) or isinstance(eval(x),list) or isinstance(eval(x),tuple))]
allnms = deathnms+casenms+covidnms+countrynms+miscnms
allnms = countrynms + covidnms + miscnms + deathnms + casenms + popnms
data_all = {nm:eval(nm) for nm in allnms}
start = time()
pk.dump(data_all,open('./pks/data_all_raw.pk','wb'))
print('elapsed: ',time()-start)
```
# Data Load
Use this code to read in the data, e.g. at the top of another notebook, as an alternative to loading data.py or Cluster.py
```
# read in data
start=time()
print('reading in data...')
with open('./pks/data_all_raw.pk','rb') as fp:
foo = pk.load(fp)
print('elapsed: ',time()-start)
# make each element of the dictionary a global variable named with key:
for x in foo:
stmp = x+"= foo['"+x+"']"
exec(stmp)
```
# Gillespie's algorithm for logistic branching process where all cells are different
*Jonathan Lindström, LNU*
I am interested in modelling evolution in cancer cells. The [logistic branching process](https://projecteuclid.org/euclid.aoap/1115137984) is a stochastic branching process that follows logistic growth. I'm looking at a slightly modified version that works as follows. Consider a population of $N$ cells. Individual cells divide (into two) at a rate
$b = r - (N-1)I_b$
and die at a rate
$d = s + (N-1)I_d$
If the birth rate is ever negative, the negative excess is added to the death rate. The process will tend to hover around some population average $\hat{N}$. This is simply simulated with Gillespie's algorithm:
1. Calculate the birth and death rates
2. Sum birth and death rates to get a total event rate
3. Get a waiting time from the exponential distribution based on the total rate
4. Select division or death with a probability proportional to their rates.
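As a rough illustration, here is a minimal Python/NumPy sketch of one iteration of this loop (this is not the author's C++ simulator; the function and parameter names are mine):
```
import numpy as np

rng = np.random.default_rng()

def gillespie_step(N, r, s, I_b, I_d):
    # step 1: per-cell birth and death rates
    b = r - (N - 1) * I_b
    d = s + (N - 1) * I_d
    if b < 0:            # negative excess is added to the death rate
        d -= b
        b = 0.0
    # step 2: total event rate over all N cells
    total_rate = N * (b + d)
    # step 3: exponential waiting time until the next event
    dt = rng.exponential(1.0 / total_rate)
    # step 4: division or death, with probability proportional to the rates
    N = N + 1 if rng.random() < b / (b + d) else N - 1
    return N, dt
```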
Simulating the process for a time $t$ will take a time proportional to $\hat{N}$, as the number of events within a time interval depends on the number of cells. This is not a problem.
Consider a situation where every cell has a (near) unique base birth rate $r$. It is determined by the cells genome which changes upon each division (birth event), and also changes with time (simulating drug treatment). The variation in time is slow however, so we can ignore detailed effects of it changing in the simulation. The algorithm now becomes:
1. Calculate birth and death rates for each cell ($O(N)$)
2. Sum all birth and death rates to get a total event rate ($O(N)$)
3. Get waiting time
4. Select event
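When every cell carries its own base birth rate $r_i$, step 1 becomes the vector operation that the GPU port targets. A hedged sketch of that step (plus the step-2 reduction) in NumPy, with illustrative names:
```
import numpy as np

def per_cell_rates(r, s, I_b, I_d):
    """Step 1: birth and death rates for every cell; r is a float array of per-cell base rates."""
    N = r.size
    b = r - (N - 1) * I_b                            # per-cell birth rates, O(N)
    d = np.full(N, s, dtype=float) + (N - 1) * I_d   # per-cell death rates, O(N)
    neg = b < 0
    d[neg] -= b[neg]                                 # move the negative excess into the death rate
    b[neg] = 0.0
    return b, d

# step 2 is then a reduction over the same arrays, also O(N):
# total_rate = b.sum() + d.sum()
```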
Note that step one grows with the number of cells. Thus simulating for a time $t$ is now $O(\hat{N}^2)$. I need to run these simulations many times to gather statistics. That is embarrassingly parallel and simple to do. But as it stands the simulations themselves are slower than what would be practical. There are two (maybe three) things I want to do with it:
1. Run multiple processes for gathering statistics on separate cores (embarrassingly parallel without a GPU; what about after the optimizations below?)
2. Speed up the calculation of birth and death rates (millions of cells so big vector operation) by running it on a GPU.
3. Speed up summing all the rates by also running it on the gpu, possibly as a part of calculation algorithm (more difficult).
It seems unlikely that parallelizing the rate calculation onto the CPU would provide any real-time speedup when running more than one simulation, since the other cores could be more effectively used just running more simulations. But since it is a simple repeated operation, maybe a GPU can speed it up.
### Feasibility
The whole sequential-code simulator is less than 200 lines of c++ code, designed to be flexible with the details of how the cells rates are determined. So it is not a lot of code to modify.
I have already (during the course) implemented optimization 2 above (calculating rates on GPU) using CUDA resulting in a roughly 40% speedup. However, it might be that competition for GPU resources would limit the number of simulations that can be run in parallel on one machine resulting in a net speed-loss. This would have to be investigated.
### Plan
Steps one and two below are the main focus. If time permits, also do steps three and four. If time still permits, do step five.
1. Add threading to program to run multiple simulations at once. Each simulation should have their own CUDA stream for transferring data and running kernels.
2. Investigate scaling with $\hat{N}$ and number of parallel simulations. Compare internal parallelization (with CUDA streams) with simply running more iterations of the program in parallel (is the automatic scheduling good enough?)
3. Move reduction to GPU, check effect on single simulation performance.
4. Again investigate scaling
5. (maybe) The sequential algorithm is still fully functional in the program. It is likely that small population sizes are faster running only on the CPU (no data transfer overhead). Find the size limit that optimizes performance.
I'm interested in modelling evolution in cancer cells. Specifically in a branching process model that obeys logistic growth. Such a process can be simulated using Gillespie's algorithm, but since I'm interested in a population where all cells are different, it is very inefficient. Simulating N cells for a time t is O(N^2). A large part is recalculating birth/death rates for all cells in every timestep. Since this is a large vector operation, I want to move it to the GPU for a speedup.
I also need statistics from multiple simulations. For the sequential program this is embarrassingly parallel. Can the GPU be shared effectively enough that this is still true, or will the GPU segment bottleneck the sequential program when running multiple threads?
I have already implemented the GPU vector operation resulting in a decent (~40%) speedup. Now I need to add parallelization to run multiple simulations at once and test if I still get linear speedup as from the sequential program. The performance characteristics with number of parallel simulations and population size could then be measured against the pure sequential program.
There is also a costly reduction step that might also benefit from running on the GPU, this could be explored if time permits.
```
from wildwood.datasets import describe_datasets
df = describe_datasets(include="small-classification")
df
from wildwood.datasets import load_bank
truc = load_bank()
df = truc.df_raw
df.dtypes
for col_idx, (col_name, col_dtype) in enumerate(df.dtypes.items()):
print(col_idx, col_name, col_dtype, col_dtype.name, col_dtype.name == "category")
import numpy as np
import matplotlib.pyplot as plt
n = np.arange(1, 100)
y = np.maximum(2, np.floor(np.log(n)))
plt.plot(n, y)
np.ceil()
for col_type in ["DataFrame", "ndarray"]:
for col_dtype_kind in ["bOSU", "uif"]:
for is_categorical in [True, False, None]:
print(col_type, col_dtype_kind, is_categorical)
# col.dtype.kind "category" or in "OSU" it must be categorical
# If "buif" it depends on is_categorical
# is_categorical : True, False
# colname :
for col in ["category", "OSU"]
df["job"].name
df["age"].dtype.kind
```
## df["job"].cat.codes
```
df["job"].head()
df["job"].cat.codes.head()
type(df["job"].cat.codes.to_numpy())
df["y"].dtype.name
import pandas as pd
isinstance(df["job"].cat.codes, pd.Series)
isinstance(df, pd.DataFrame)
type(df["job"].cat.values)
df
for col_name, col in df.items():
print(col_name, col.head().values)
col
import numpy as np
import pandas as pd
col = df["job"]
categories = col.cat.categories.values
x = np.array(col.values)
x
pd.Categorical(x, categories=categories)
col2 = col.cat.set_categories(['management', 'technician', 'entrepreneur'])
col.cat.categories
set(col2.cat.categories) in set(col.cat.categories)
set(col2.cat.categories)
set(col.cat.categories)
set(col2.cat.categories).issubset(set(col.cat.categories))
{1, 2, 3}.issubset({1, 2, 3})
from sklearn.preprocessing import OrdinalEncoder
encoder = OrdinalEncoder(handle_unknown="use_encoded_value")
X1 = pd.DataFrame({"a": ["a", "b", None, "c", "b", "b", None]}, dtype="category")
X2 = pd.DataFrame({"a": ["a", "d", None, "a", "b", "b", None]}, dtype="category")
X1["a"].cat.categories
X2["a"].cat.categories
col = X1["a"]
col.hasnans
col.dropna()
col.cat.categories[[1, 0, 1, 0, 1, 0, -2]]
col.cat.categories[[1, 1, 0, 0, 0, 1, -1]]
col.cat.categories.values[[1, 1, 0, 0, 0, 1, -1]]
truc = np.array(["truc", "machin", "chose"])
truc[np.array([0, 0, 1, 1, 0, 1, 2])]
np.array([(-np.inf, -1.0), (-1.0, 2.0), (2.0, 5.0), (5.0, np.inf)])
bin_thresholds = np.concatenate(([-np.inf], np.array([-1.0, 1.0, 2.0, 3.0]), [np.inf]))
[(a, b) for a, b in zip(bin_thresholds[:-1], bin_thresholds[1:])]
truc = pd.IntervalIndex.from_tuples([(a, b) for a, b in zip(bin_thresholds[:-1], bin_thresholds[1:])]).values
truc
truc
ahah = truc[[0, 1, 2, 1, 0, 0, 0, 2, 2, 1, 1]]
ahah[7] = np.nan
ahah
t = 11
if t is not None and t > 10:
print(t)
df["age"].take([1, 3])
from numbers import Real
isinstance(None, (int, float))
truc.values
pd.Series([pd.Interval(-1.0, 2), pd.Interval(-1.0, 2)])
truc = pd.Series([pd.Interval(-1.0, 2), pd.Interval(-1.0, 2)])
s = {"b", "c", "e", "d", "f", "b"}.difference({"b", "a", "c", "d"})
"Unknowns are {0}".format(s)
tuc = pd.DataFrame({"a": [1, 3, 2, 17]})
tuc.dtypes["a"].kind
col = tuc["a"]
col[[0, 1, 2, 0, 0, 0, 1, 2, 2, 2, 1, 0]]
n_samples = 1000
max_values = np.array([1024, 32, 64], dtype=np.uint64)
dtype = np.uint16
n_features = max_values.size
X_in = np.asfortranarray(
np.random.randint(max_values + 1, size=(n_samples, n_features)), dtype=dtype
)
df_in = pd.DataFrame(data=X_in).astype("category")
df_in
df_in.info()
dataset = array_to_dataset(X_in, max_values=max_values)
dataset_out = Encoder().fit_transform(df)
X_out = dataset_to_array(dataset)
np.testing.assert_array_equal(X_in, X_out)
col.dtype
col[3] = np.nan
col
df.dtypes
categories = df["housing"].cat.categories
categories.to_numpy()
col
pd.Series([1, 2, 3, 1, 0, 0, 1, 1, 0, 0, 1])
df.dtypes
set.difference
truc.int.left
pd.Interval(-1, 3).left
np.Series1 in pd.Series([pd.Interval(-1.0, 2), pd.Interval(-1.0, 2)])
df = pd.DataFrame(
{
"A": [None, "a", "b", None, "a", "a", "d", "c"],
"B": [3, None, 0, -1, 42, 7, 1, None],
"C": ["b", "a", "b", "c", "a", "a", "d", "c"],
"D": [-4, 1, 2, 1, -3, None, 2, 3.0],
# "E": [-4, 1, 2, 1, -3, 1, 2, 3],
},
dtype={"A": "category", "B": "category", "C": "category"}
)
np.nan
df.dtypes
hasattr(df, "")
```
# encoder.fit(X1)
```
encoder.categories_
X1
encoder.transform(X1)
col2.isna()
categories
col2.hasnans
pd.hasnans(col2)
truc = col2.cat.codes.values.copy()
truc[col2.isna()] = 12
col.cat.codes.max()
col2.cat.codes[col2.isna()] = 12
col2
col2.cat.codes[col2.isna()]
col2.codes
col.cat.set_categories
df.iloc[:, 1].copy()
categories.values
pd.Categorical(col, categories=["A", "B"])
col.astype("category")
pd.Series(col).astype(pd.CategoricalDtype(categories=["A", "B"]))
col.cat.categories.get_loc(["admin."])
col.cat.categories.get_loc(["admin.", "entrepreneur", "admin.", "admin.", "student"])
col = df["job"]
```
category -> index
```
col.cat.categories.values
```
index -> category
```
df.dtypes
df.select_dtypes("category")
df["task"].dtype.kind
hasattr(df["task"], "cat")
df.ndim
df.shape
dataset = load_bank()
job = dataset.df_raw["job"]
job.dtype
df = dataset.df_raw
df["age"].dtype.kind
import numbers
isinstance(1.0, numbers.Integral)
df
df["balance"].dtype
df.info()
for col in df:
print(col, df[col].dtype, df[col].dtype.kind)
# hasattr(job, "cat")
df.shape
job.dtype.categories
job.cat.codes.values.max()
dict(enumerate(job.cat.categories))
job.cat.codes.dtype
job[:100]
all(job.cat.categories[job.cat.codes] == job)
df["task"].astype("category")
from wildwood.dataset import load_boston
dataset = load_boston()
dataset.drop = None
dataset.one_hot_encode = True
X_train, X_test, y_train, y_test = dataset.extract(random_state=42)
dataset.columns_
X_train.shape
import numpy as np
dataset.df_raw.apply(lambda col: np.unique(col).size)
np.finfo("float32").eps
np.finfo("float32").epsneg
from wildwood.dataset import loaders_small_classification
for loader in loaders_small_classification:
dataset = loader()
dataset.one_hot_encode = True
X_train, X_test, y_train, y_test = dataset.extract(random_state=42)
print("-" * 32)
print(dataset.name)
print(dataset.categorical_features_)
dataset.one_hot_encode = False
X_train, X_test, y_train, y_test = dataset.extract(random_state=42)
print(dataset.categorical_features_)
```
# Test for categorical on a small toy data
```
import numpy as np
import pandas as pd
X2 = np.repeat(np.arange(5), 20).reshape((-1, 1))
X1 = np.repeat(np.arange(2), 50).reshape((-1, 1))
y = np.repeat([1, 0, 0, 1, 0], 20)
X = np.concatenate([X1, X2], axis=1)
(
pd.DataFrame({"X1": X1.ravel(), "X2": X2.ravel(), "y": y})
.groupby("X2")
.sum()
.reset_index()
)
attributes = ["feature", "bin_threshold", "bin_partition", "y_pred", "is_split_categorical"]
from bokeh.plotting import output_notebook, show
from wildwood import ForestClassifier
from wildwood.plot import plot_tree
output_notebook()
clf = ForestClassifier(
n_estimators=1,
random_state=42,
categorical_features=[True, True],
dirichlet=0.,
aggregation=False
)
clf.fit(X, y)
fig = plot_tree(clf, height=200, attributes=attributes)
show(fig)
clf.predict_proba(np.array([0, 1]).reshape(1, 2))
clf.path_leaf(np.array([0, 1]).reshape(1, 2))[0]
clf = ForestClassifier(
n_estimators=1,
random_state=42,
categorical_features=None,
dirichlet=0.,
aggregation=False
)
clf.fit(X, y)
fig = plot_tree(clf, height=300, attributes=attributes)
show(fig)
from sklearn.preprocessing import OneHotEncoder
X_onehot = OneHotEncoder(sparse=False).fit_transform(X)
clf = ForestClassifier(
max_features=None,
n_estimators=1,
random_state=42,
dirichlet=0.,
aggregation=False,
categorical_features=None)
clf.fit(X_onehot, y)
fig = plot_tree(clf, height=300, attributes=attributes)
show(fig)
X_onehot.shape
X_onehot = OneHotEncoder(sparse=False).fit_transform(X)
clf = ForestClassifier(n_estimators=1, random_state=42, categorical_features=[True] * 5)
clf.fit(X_onehot, y)
fig = plot_tree(clf, height=300)
show(fig)
df.columns
df[["node_id", "left_child", "right_child", "is_leaf", "is_split_categorical", "split_partition", "y_pred"]]
clf = ForestClassifier(n_estimators=1, random_state=42, categorical_features=None)
clf.fit(X, y)
df = clf.get_nodes(0)
df[["node_id", "left_child", "right_child", "is_leaf", "is_split_categorical", "split_partition", "y_pred"]]
from sklearn.preprocessing import OneHotEncoder
one_hot_encoder = OneHotEncoder(sparse=False)
df = clf.get_nodes(0)
df[["node_id", "left_child", "right_child", "is_leaf", "is_split_categorical", "split_partition", "y_pred"]]
aaa
str(np.array([1, 4, 2], dtype=np.uint8))
clf.predict_proba(X)
clf = ForestClassifier(n_estimators=1, random_state=42, categorical_features=[True])
clf.fit(X, y)
df.describe(include="all")
dataset.one_hot_encode = False
X_train, X_test, y_train, y_test = dataset.extract(random_state=42)
dataset.categorical_columns_
dataset.categorical_features_
import numpy as np
np.unique(X_train[:, 9])
X_train[1:100, 12]
categorical_features[18]
np.unique(X_train[:, 18]).size
X_train[:, 2].shape
import numpy as np
n_features = X_train.shape[1]
categorical_features = np.zeros(n_features, dtype=np.bool)
categorical_features[-dataset.n_features_categorical_:] = True
categorical_features
from wildwood import ForestClassifier
clf = ForestClassifier(categorical_features=categorical_features)
clf._check_categories(X_train)
dataset.one_hot_encode = False
dataset.extract(random_state=42)
dataset.transformer.transformer_list
ordinal_encoder = dataset.transformer.transformer_list[1][1].transformers[0][1]
ordinal_encoder.categories
ordinal_encoder.categories_
from wildwood._binning import Binner
binner = Binner()
binner.fit_transform(df)
df = dataset.df_raw
df.dtypes
df.head()
df.to_csv("dataset-small-classification.csv.gz", index=False)
df = describe_datasets(include="small-regression")
df
df.to_csv("dataset-small-regression.csv.gz", index=False)
import pandas as pd
pd.read_csv("dataset-small-regression.csv.gz")
pwd
ls -rtl
from wildwood.dataset import datasets_description
df_datasets = datasets_description(include="small-classification")
df_datasets
df_datasets = datasets_description(include="small-regression")
df_datasets
df_datasets.loc[df_datasets["task"] == "regression"]
.iloc(df_datasets["task"] == "regression")
dataset = load_churn()
dataset.df_raw.head()
import numpy as np
import pandas as pd
from wildwood._binning import Binner
df = pd.DataFrame({
"A": ["a", "b", "a", "a"],
"B": [0.1, 0.2, 0.1, 0.3]
})
df.dtypes
df["A"] = df["A"].astype("category")
df.dtypes
binner = Binner()
binner.fit_transform(df)
print(data["DESCR"])
from sklearn.datasets import load_breast_cancer
data = load_breast_cancer(as_frame=True)
data["frame"]
data["frame"].info()
print(data["DESCR"])
from sklearn.datasets import load_diabetes, load_boston
data = load_boston()
data.keys()
import pandas as pd
df = pd.DataFrame(data["data"], columns=data["feature_names"])
df["target"] = data["target"]
df.info()
df.head()
print(data["DESCR"])
truc["frame"]
truc["target"]
truc.keys()
truc["frame"]
truc["frame"].dtypes
import numpy as np
np.repeat(np.array([2, 1, 17, 3]), repeats=3)
```
np.finfo("float32").eps
np.finfo("float32").epsneg
from wildwood.dataset import loaders_small_classification
for loader in loaders_small_classification:
dataset = loader()
dataset.one_hot_encode = True
X_train, X_test, y_train, y_test = dataset.extract(random_state=42)
print("-" * 32)
print(dataset.name)
print(dataset.categorical_features_)
dataset.one_hot_encode = False
X_train, X_test, y_train, y_test = dataset.extract(random_state=42)
print(dataset.categorical_features_)
import numpy as np
import pandas as pd
X2 = np.repeat(np.arange(5), 20).reshape((-1, 1))
X1 = np.repeat(np.arange(2), 50).reshape((-1, 1))
y = np.repeat([1, 0, 0, 1, 0], 20)
X = np.concatenate([X1, X2], axis=1)
(
pd.DataFrame({"X1": X1.ravel(), "X2": X2.ravel(), "y": y})
.groupby("X2")
.sum()
.reset_index()
)
attributes = ["feature", "bin_threshold", "bin_partition", "y_pred", "is_split_categorical"]
from bokeh.plotting import output_notebook, show
from wildwood import ForestClassifier
from wildwood.plot import plot_tree
output_notebook()
clf = ForestClassifier(
n_estimators=1,
random_state=42,
categorical_features=[True, True],
dirichlet=0.,
aggregation=False
)
clf.fit(X, y)
fig = plot_tree(clf, height=200, attributes=attributes)
show(fig)
clf.predict_proba(np.array([0, 1]).reshape(1, 2))
clf.path_leaf(np.array([0, 1]).reshape(1, 2))[0]
clf = ForestClassifier(
n_estimators=1,
random_state=42,
categorical_features=None,
dirichlet=0.,
aggregation=False
)
clf.fit(X, y)
fig = plot_tree(clf, height=300, attributes=attributes)
show(fig)
from sklearn.preprocessing import OneHotEncoder
X_onehot = OneHotEncoder(sparse=False).fit_transform(X)
clf = ForestClassifier(
max_features=None,
n_estimators=1,
random_state=42,
dirichlet=0.,
aggregation=False,
categorical_features=None)
clf.fit(X_onehot, y)
fig = plot_tree(clf, height=300, attributes=attributes)
show(fig)
X_onehot.shape
X_onehot = OneHotEncoder(sparse=False).fit_transform(X)
clf = ForestClassifier(n_estimators=1, random_state=42, categorical_features=[True] * 5)
clf.fit(X_onehot, y)
fig = plot_tree(clf, height=300)
show(fig)
df.columns
df[["node_id", "left_child", "right_child", "is_leaf", "is_split_categorical", "split_partition", "y_pred"]]
clf = ForestClassifier(n_estimators=1, random_state=42, categorical_features=None)
clf.fit(X, y)
df = clf.get_nodes(0)
df[["node_id", "left_child", "right_child", "is_leaf", "is_split_categorical", "split_partition", "y_pred"]]
from sklearn.preprocessing import OneHotEncoder
one_hot_encoder = OneHotEncoder(sparse=False)
df = clf.get_nodes(0)
df[["node_id", "left_child", "right_child", "is_leaf", "is_split_categorical", "split_partition", "y_pred"]]
aaa
str(np.array([1, 4, 2], dtype=np.uint8))
clf.predict_proba(X)
clf = ForestClassifier(n_estimators=1, random_state=42, categorical_features=[True])
clf.fit(X, y)
df.describe(include="all")
dataset.one_hot_encode = False
X_train, X_test, y_train, y_test = dataset.extract(random_state=42)
dataset.categorical_columns_
dataset.categorical_features_
import numpy as np
np.unique(X_train[:, 9])
X_train[1:100, 12]
categorical_features[18]
np.unique(X_train[:, 18]).size
X_train[:, 2].shape
import numpy as np
n_features = X_train.shape[1]
categorical_features = np.zeros(n_features, dtype=np.bool)
categorical_features[-dataset.n_features_categorical_:] = True
categorical_features
from wildwood import ForestClassifier
clf = ForestClassifier(categorical_features=categorical_features)
clf._check_categories(X_train)
dataset.one_hot_encode = False
dataset.extract(random_state=42)
dataset.transformer.transformer_list
ordinal_encoder = dataset.transformer.transformer_list[1][1].transformers[0][1]
ordinal_encoder.categories
ordinal_encoder.categories_
from wildwood._binning import Binner
binner = Binner()
binner.fit_transform(df)
df = dataset.df_raw
df.dtypes
df.head()
df.to_csv("dataset-small-classification.csv.gz", index=False)
df = describe_datasets(include="small-regression")
df
df.to_csv("dataset-small-regression.csv.gz", index=False)
import pandas as pd
pd.read_csv("dataset-small-regression.csv.gz")
pwd
ls -rtl
from wildwood.dataset import datasets_description
df_datasets = datasets_description(include="small-classification")
df_datasets
df_datasets = datasets_description(include="small-regression")
df_datasets
df_datasets.loc[df_datasets["task"] == "regression"]
.iloc(df_datasets["task"] == "regression")
dataset = load_churn()
dataset.df_raw.head()
import numpy as np
import pandas as pd
from wildwood._binning import Binner
df = pd.DataFrame({
"A": ["a", "b", "a", "a"],
"B": [0.1, 0.2, 0.1, 0.3]
})
df.dtypes
df["A"] = df["A"].astype("category")
df.dtypes
binner = Binner()
binner.fit_transform(df)
print(data["DESCR"])
from sklearn.datasets import load_breast_cancer
data = load_breast_cancer(as_frame=True)
data["frame"]
data["frame"].info()
print(data["DESCR"])
from sklearn.datasets import load_diabetes, load_boston
data = load_boston()
data.keys()
import pandas as pd
df = pd.DataFrame(data["data"], columns=data["feature_names"])
df["target"] = data["target"]
df.info()
df.head()
print(data["DESCR"])
truc["frame"]
truc["target"]
truc.keys()
truc["frame"]
truc["frame"].dtypes
import numpy as np
np.repeat(np.array([2, 1, 17, 3]), repeats=3)
| 0.668772 | 0.889193 |
```
%pylab inline
#%matplotlib qt
from __future__ import division # use so 1/2 = 0.5, etc.
import sigsys
import imp # used for module reload after editing, e.g., imp.reload(alias)
import scipy.signal as signal
from IPython.display import Audio, display
from IPython.display import Image, SVG
pylab.rcParams['savefig.dpi'] = 100 # default 72
#pylab.rcParams['figure.figsize'] = (6.0, 4.0) # default (6,4)
%config InlineBackend.figure_formats=['png'] # default for inline viewing
#%config InlineBackend.figure_formats=['svg'] # SVG inline viewing
#%config InlineBackend.figure_formats=['pdf'] # render pdf figs for LaTeX
```
# Playback Using the Notebook Audio Widget
This interface is often used when developing algorithms that process signal samples and produce audible results. You will see this in the tutorial. Processing is done beforehand as an analysis task, and the samples are then written to a `.wav` file for playback through the PC audio system.
```
Audio('c_major.wav')
```
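The same analyze-then-play workflow can be sketched in a few lines: synthesize or process samples, write them to a `.wav` file, and hand the file to the `Audio` widget. This is only an illustration; the tone frequencies and the file name `my_chord.wav` are arbitrary choices, and `scipy.io.wavfile` is used here simply because it is widely available.
```
import numpy as np
from scipy.io import wavfile

fs = 44100                                  # sampling rate in Hz
t = np.arange(0, 2.0, 1/fs)                 # two seconds of samples
# Sum the three pitches of a C-major chord: C4, E4, G4
x = sum(np.sin(2*np.pi*f*t) for f in (261.63, 329.63, 392.00))
x /= np.max(np.abs(x))                      # normalize to [-1, 1]
wavfile.write('my_chord.wav', fs, (x * 32767).astype(np.int16))
Audio('my_chord.wav')                       # play back with the notebook widget
```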
Below I import the `.wav` file so I can work with the signal samples:
```
fs,x = sigsys.from_wav('c_major.wav')
```
Here I visualize the C-major chord using the *spectrogram*, which shows the chord as being composed of the fundamental (root) pitch plus the third and fifth above it, together with their harmonics or overtones.
```
specgram(x,NFFT=2**13,Fs=fs);
ylim([0,1000])
title(r'Visualize the 3 Pitches of a C-Major Chord')
xlabel(r'Time (s)')
ylabel(r'Frequency (Hz)');
```
# Using Pyaudio: Callback with a `wav` File Source
With PyAudio you set up a real-time interface between the audio source, a processing algorithm in Python, and a playback device. In the test case below the wave file is read into memory and then played back frame by frame using a *callback* function. In this case the signal samples read from memory, or perhaps a buffer, are passed directly to the audio interface. In general, processing algorithms may be implemented that operate on each frame; a sketch of such a callback follows the example. We will explore this in the tutorial.
```
import pyaudio
import wave
import time
import sys
"""PyAudio Example: Play a wave file (callback version)"""
wf = wave.open('Music_Test.wav', 'rb')
#wf = wave.open('c_major.wav', 'rb')
print('Sample width in bits: %d' % (8*wf.getsampwidth(),))
print('Number of channels: %d' % wf.getnchannels())
print('Sampling rate: %1.1f sps' % wf.getframerate())
p = pyaudio.PyAudio()
def callback(in_data, frame_count, time_info, status):
data = wf.readframes(frame_count)
#In general do some processing before returning data
#Here the data is in signed integer format
#In Python it is more comfortable to work with float (float64)
return (data, pyaudio.paContinue)
stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
channels=wf.getnchannels(),
rate=wf.getframerate(),
output=True,
stream_callback=callback)
stream.start_stream()
while stream.is_active():
time.sleep(0.1)
stream.stop_stream()
stream.close()
wf.close()
p.terminate()
```
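To make the frame-by-frame processing idea concrete, the callback can unpack the int16 bytes into floats, apply an operation such as a gain, and repack the result before returning it. The sketch below assumes a 16-bit `.wav` opened as `wf`, exactly as in the example above; the 6 dB attenuation is just a placeholder for a real algorithm.
```
import numpy as np
import pyaudio

def processing_callback(in_data, frame_count, time_info, status):
    # Read the next frame of samples from the wave file
    data = wf.readframes(frame_count)
    # Unpack the int16 bytes into float64 samples in [-1, 1)
    x = np.frombuffer(data, dtype=np.int16).astype(np.float64) / 32768
    # Example processing step: apply a 6 dB attenuation
    y = 0.5 * x
    # Repack as int16 bytes for the audio interface
    out = (y * 32767).astype(np.int16).tobytes()
    return (out, pyaudio.paContinue)
```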
|
github_jupyter
|
%pylab inline
#%matplotlib qt
from __future__ import division # use so 1/2 = 0.5, etc.
import sigsys
import imp # used for module reload after editing, e.g., imp.reload(alias)
import scipy.signal as signal
from IPython.display import Audio, display
from IPython.display import Image, SVG
pylab.rcParams['savefig.dpi'] = 100 # default 72
#pylab.rcParams['figure.figsize'] = (6.0, 4.0) # default (6,4)
%config InlineBackend.figure_formats=['png'] # default for inline viewing
#%config InlineBackend.figure_formats=['svg'] # SVG inline viewing
#%config InlineBackend.figure_formats=['pdf'] # render pdf figs for LaTeX
Audio('c_major.wav')
fs,x = sigsys.from_wav('c_major.wav')
specgram(x,NFFT=2**13,Fs=fs);
ylim([0,1000])
title(r'Visualize the 3 Pitches of a C-Major Chord')
xlabel(r'Time (s)')
ylabel(r'Frequency (Hz)');
import pyaudio
import wave
import time
import sys
"""PyAudio Example: Play a wave file (callback version)"""
wf = wave.open('Music_Test.wav', 'rb')
#wf = wave.open('c_major.wav', 'rb')
print('Sample width in bits: %d' % (8*wf.getsampwidth(),))
print('Number of channels: %d' % wf.getnchannels())
print('Sampling rate: %1.1f sps' % wf.getframerate())
p = pyaudio.PyAudio()
def callback(in_data, frame_count, time_info, status):
data = wf.readframes(frame_count)
#In general do some processing before returning data
#Here the data is in signed integer format
#In Python it is more comfortable to work with float (float64)
return (data, pyaudio.paContinue)
stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
channels=wf.getnchannels(),
rate=wf.getframerate(),
output=True,
stream_callback=callback)
stream.start_stream()
while stream.is_active():
time.sleep(0.1)
stream.stop_stream()
stream.close()
wf.close()
p.terminate()
| 0.418222 | 0.75602 |
# Chapter 8 - Movie Review Example
```
%pylab inline
import pandas
d = pandas.read_csv("data/movie_reviews.tsv", delimiter="\t")
# Holdout split
split = 0.7
d_train = d[:int(split*len(d))]
d_test = d[int(split*len(d)):]  # hold out the final 30% so the test set does not overlap the training set
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(d_train.review)
i = 45000
j = 10
words = vectorizer.get_feature_names()[i:i+10]
pandas.DataFrame(features[j:j+7,i:i+10].todense(), columns=words)
float(features.getnnz())*100 / (features.shape[0]*features.shape[1])
from sklearn.naive_bayes import MultinomialNB
model1 = MultinomialNB()
model1.fit(features, d_train.sentiment)
pred1 = model1.predict_proba(vectorizer.transform(d_test.review))
from sklearn.metrics import accuracy_score, roc_auc_score, classification_report, roc_curve
def performance(y_true, pred, color="g", ann=True):
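    # Plot the ROC curve for the given scores and, optionally, annotate the plot with the
    # accuracy at a 0.5 threshold and the AUC.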
acc = accuracy_score(y_true, pred[:,1] > 0.5)
auc = roc_auc_score(y_true, pred[:,1])
fpr, tpr, thr = roc_curve(y_true, pred[:,1])
plot(fpr, tpr, color, linewidth="3")
xlabel("False positive rate")
ylabel("True positive rate")
if ann:
annotate("Acc: %0.2f" % acc, (0.1,0.8), size=14)
annotate("AUC: %0.2f" % auc, (0.1,0.7), size=14)
performance(d_test.sentiment, pred1)
```
## tf-idf features
```
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(d_train.review)
pred2 = model1.predict_proba(vectorizer.transform(d_test.review))
performance(d_test.sentiment, pred1, ann=False)
performance(d_test.sentiment, pred2, color="b")
xlim(0,0.5)
ylim(0.5,1)
```
## Parameter optimization
```
param_ranges = {
"max_features": [10000, 30000, 50000, None],
"min_df": [1,2,3],
"nb_alpha": [0.01, 0.1, 1.0]
}
def build_model(max_features=None, min_df=1, nb_alpha=1.0, return_preds=False):
vectorizer = TfidfVectorizer(max_features=max_features, min_df=min_df)
features = vectorizer.fit_transform(d_train.review)
model = MultinomialNB(alpha=nb_alpha)
model.fit(features, d_train.sentiment)
pred = model.predict_proba(vectorizer.transform(d_test.review))
res = {
"max_features": max_features,
"min_df": min_df,
"nb_alpha": nb_alpha,
"auc": roc_auc_score(d_test.sentiment, pred[:,1])
}
if return_preds:
res['preds'] = pred
return res
from itertools import product
results = []
for p in product(*param_ranges.values()):
res = build_model(**dict(zip(param_ranges.keys(), p)))
results.append( res )
print (res)
opt = pandas.DataFrame(results)
mf_idx = [0,9,18,27]
plot(opt.max_features[mf_idx], opt.auc[mf_idx], linewidth=2)
title("AUC vs max_features")
mdf_idx = [27,28,29]
plot(opt.min_df[mdf_idx], opt.auc[mdf_idx], linewidth=2)
title("AUC vs min_df")
nba_idx = [27,30,33]
plot(opt.nb_alpha[nba_idx], opt.auc[nba_idx], linewidth=2)
title("AUC vs alpha")
pred3 = build_model(nb_alpha=0.01, return_preds=True)['preds']
performance(d_test.sentiment, pred1, ann=False)
performance(d_test.sentiment, pred2, color="b", ann=False)
performance(d_test.sentiment, pred3, color="r")
xlim(0,0.5)
ylim(0.5,1)
```
## Random Forest
```
vectorizer = TfidfVectorizer(strip_accents='unicode', stop_words='english', min_df=3, max_features=30000, norm="l2")
features = vectorizer.fit_transform(d_train.review)
model3 = MultinomialNB()
model3.fit(features, d_train.sentiment)
pred3 = model3.predict_proba(vectorizer.transform(d_test.review))
performance(d_test.sentiment, pred3)
from sklearn.ensemble import RandomForestClassifier
model2 = RandomForestClassifier(n_estimators=100)
model2.fit(features, d_train.sentiment)
pred2 = model2.predict_proba(vectorizer.transform(d_test.review))
performance(d_test.sentiment, pred2)
```
## Word2Vec
```
import re, string
stop_words = set(['all', "she'll", "don't", 'being', 'over', 'through', 'yourselves', 'its', 'before', "he's", "when's", "we've", 'had', 'should', "he'd", 'to', 'only', "there's", 'those', 'under', 'ours', 'has', "haven't", 'do', 'them', 'his', "they'll", 'very', "who's", "they'd", 'cannot', "you've", 'they', 'not', 'during', 'yourself', 'him', 'nor', "we'll", 'did', "they've", 'this', 'she', 'each', "won't", 'where', "mustn't", "isn't", "i'll", "why's", 'because', "you'd", 'doing', 'some', 'up', 'are', 'further', 'ourselves', 'out', 'what', 'for', 'while', "wasn't", 'does', "shouldn't", 'above', 'between', 'be', 'we', 'who', "you're", 'were', 'here', 'hers', "aren't", 'by', 'both', 'about', 'would', 'of', 'could', 'against', "i'd", "weren't", "i'm", 'or', "can't", 'own', 'into', 'whom', 'down', "hadn't", "couldn't", 'your', "doesn't", 'from', "how's", 'her', 'their', "it's", 'there', 'been', 'why', 'few', 'too', 'themselves', 'was', 'until', 'more', 'himself', "where's", "i've", 'with', "didn't", "what's", 'but', 'herself', 'than', "here's", 'he', 'me', "they're", 'myself', 'these', "hasn't", 'below', 'ought', 'theirs', 'my', "wouldn't", "we'd", 'and', 'then', 'is', 'am', 'it', 'an', 'as', 'itself', 'at', 'have', 'in', 'any', 'if', 'again', 'no', 'that', 'when', 'same', 'how', 'other', 'which', 'you', "shan't", 'our', 'after', "let's", 'most', 'such', 'on', "he'll", 'a', 'off', 'i', "she'd", 'yours', "you'll", 'so', "we're", "she's", 'the', "that's", 'having', 'once'])
def tokenize(docs):
    pattern = re.compile(r'[\W_]+', re.UNICODE)
sentences = []
for d in docs:
sentence = d.lower().split(" ")
sentence = [pattern.sub('', w) for w in sentence]
sentences.append( [w for w in sentence if w not in stop_words] )
return sentences
print(list(stop_words))
sentences = tokenize(d_train.review)
from gensim.models.word2vec import Word2Vec
model = Word2Vec(sentences, size=300, window=10, min_count=1, sample=1e-3, workers=2)
model.init_sims(replace=True)
model['movie']
def featurize_w2v(model, sentences):
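    # Represent each sentence by the average of its words' word2vec vectors; words missing
    # from the model's vocabulary simply contribute nothing to the sum.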
f = zeros((len(sentences), model.vector_size))
for i,s in enumerate(sentences):
for w in s:
try:
vec = model[w]
except KeyError:
continue
f[i,:] = f[i,:] + vec
f[i,:] = f[i,:] / len(s)
return f
features_w2v = featurize_w2v(model, sentences)
model4 = RandomForestClassifier(n_estimators=100, n_jobs=-1)
model4.fit(features_w2v, d_train.sentiment)
test_sentences = tokenize(d_test.review)
test_features_w2v = featurize_w2v(model, test_sentences)
pred4 = model4.predict_proba(test_features_w2v)
performance(d_test.sentiment, pred1, ann=False)
performance(d_test.sentiment, pred2, color="b", ann=False)
performance(d_test.sentiment, pred3, color="r", ann=False)
performance(d_test.sentiment, pred4, color="c")
xlim(0,0.3)
ylim(0.6,1)
examples = [
"This movie is bad",
"This movie is great",
"I was going to say something awesome, but I simply can't because the movie is so bad.",
"I was going to say something awesome or great or good, but I simply can't because the movie is so bad.",
"It might have bad actors, but everything else is good."
]
example_feat4 = featurize_w2v(model, tokenize(examples))
model4.predict(example_feat4)
```
|
github_jupyter
|
%pylab inline
import pandas
d = pandas.read_csv("data/movie_reviews.tsv", delimiter="\t")
# Holdout split
split = 0.7
d_train = d[:int(split*len(d))]
d_test = d[int(split*len(d)):]  # hold out the final 30% so the test set does not overlap the training set
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(d_train.review)
i = 45000
j = 10
words = vectorizer.get_feature_names()[i:i+10]
pandas.DataFrame(features[j:j+7,i:i+10].todense(), columns=words)
float(features.getnnz())*100 / (features.shape[0]*features.shape[1])
from sklearn.naive_bayes import MultinomialNB
model1 = MultinomialNB()
model1.fit(features, d_train.sentiment)
pred1 = model1.predict_proba(vectorizer.transform(d_test.review))
from sklearn.metrics import accuracy_score, roc_auc_score, classification_report, roc_curve
def performance(y_true, pred, color="g", ann=True):
acc = accuracy_score(y_true, pred[:,1] > 0.5)
auc = roc_auc_score(y_true, pred[:,1])
fpr, tpr, thr = roc_curve(y_true, pred[:,1])
plot(fpr, tpr, color, linewidth="3")
xlabel("False positive rate")
ylabel("True positive rate")
if ann:
annotate("Acc: %0.2f" % acc, (0.1,0.8), size=14)
annotate("AUC: %0.2f" % auc, (0.1,0.7), size=14)
performance(d_test.sentiment, pred1)
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(d_train.review)
pred2 = model1.predict_proba(vectorizer.transform(d_test.review))
performance(d_test.sentiment, pred1, ann=False)
performance(d_test.sentiment, pred2, color="b")
xlim(0,0.5)
ylim(0.5,1)
param_ranges = {
"max_features": [10000, 30000, 50000, None],
"min_df": [1,2,3],
"nb_alpha": [0.01, 0.1, 1.0]
}
def build_model(max_features=None, min_df=1, nb_alpha=1.0, return_preds=False):
vectorizer = TfidfVectorizer(max_features=max_features, min_df=min_df)
features = vectorizer.fit_transform(d_train.review)
model = MultinomialNB(alpha=nb_alpha)
model.fit(features, d_train.sentiment)
pred = model.predict_proba(vectorizer.transform(d_test.review))
res = {
"max_features": max_features,
"min_df": min_df,
"nb_alpha": nb_alpha,
"auc": roc_auc_score(d_test.sentiment, pred[:,1])
}
if return_preds:
res['preds'] = pred
return res
from itertools import product
results = []
for p in product(*param_ranges.values()):
res = build_model(**dict(zip(param_ranges.keys(), p)))
results.append( res )
print (res)
opt = pandas.DataFrame(results)
mf_idx = [0,9,18,27]
plot(opt.max_features[mf_idx], opt.auc[mf_idx], linewidth=2)
title("AUC vs max_features")
mdf_idx = [27,28,29]
plot(opt.min_df[mdf_idx], opt.auc[mdf_idx], linewidth=2)
title("AUC vs min_df")
nba_idx = [27,30,33]
plot(opt.nb_alpha[nba_idx], opt.auc[nba_idx], linewidth=2)
title("AUC vs alpha")
pred3 = build_model(nb_alpha=0.01, return_preds=True)['preds']
performance(d_test.sentiment, pred1, ann=False)
performance(d_test.sentiment, pred2, color="b", ann=False)
performance(d_test.sentiment, pred3, color="r")
xlim(0,0.5)
ylim(0.5,1)
vectorizer = TfidfVectorizer(strip_accents='unicode', stop_words='english', min_df=3, max_features=30000, norm="l2")
features = vectorizer.fit_transform(d_train.review)
model3 = MultinomialNB()
model3.fit(features, d_train.sentiment)
pred3 = model3.predict_proba(vectorizer.transform(d_test.review))
performance(d_test.sentiment, pred3)
from sklearn.ensemble import RandomForestClassifier
model2 = RandomForestClassifier(n_estimators=100)
model2.fit(features, d_train.sentiment)
pred2 = model2.predict_proba(vectorizer.transform(d_test.review))
performance(d_test.sentiment, pred2)
import re, string
stop_words = set(['all', "she'll", "don't", 'being', 'over', 'through', 'yourselves', 'its', 'before', "he's", "when's", "we've", 'had', 'should', "he'd", 'to', 'only', "there's", 'those', 'under', 'ours', 'has', "haven't", 'do', 'them', 'his', "they'll", 'very', "who's", "they'd", 'cannot', "you've", 'they', 'not', 'during', 'yourself', 'him', 'nor', "we'll", 'did', "they've", 'this', 'she', 'each', "won't", 'where', "mustn't", "isn't", "i'll", "why's", 'because', "you'd", 'doing', 'some', 'up', 'are', 'further', 'ourselves', 'out', 'what', 'for', 'while', "wasn't", 'does', "shouldn't", 'above', 'between', 'be', 'we', 'who', "you're", 'were', 'here', 'hers', "aren't", 'by', 'both', 'about', 'would', 'of', 'could', 'against', "i'd", "weren't", "i'm", 'or', "can't", 'own', 'into', 'whom', 'down', "hadn't", "couldn't", 'your', "doesn't", 'from', "how's", 'her', 'their', "it's", 'there', 'been', 'why', 'few', 'too', 'themselves', 'was', 'until', 'more', 'himself', "where's", "i've", 'with', "didn't", "what's", 'but', 'herself', 'than', "here's", 'he', 'me', "they're", 'myself', 'these', "hasn't", 'below', 'ought', 'theirs', 'my', "wouldn't", "we'd", 'and', 'then', 'is', 'am', 'it', 'an', 'as', 'itself', 'at', 'have', 'in', 'any', 'if', 'again', 'no', 'that', 'when', 'same', 'how', 'other', 'which', 'you', "shan't", 'our', 'after', "let's", 'most', 'such', 'on', "he'll", 'a', 'off', 'i', "she'd", 'yours', "you'll", 'so', "we're", "she's", 'the', "that's", 'having', 'once'])
def tokenize(docs):
    pattern = re.compile(r'[\W_]+', re.UNICODE)
sentences = []
for d in docs:
sentence = d.lower().split(" ")
sentence = [pattern.sub('', w) for w in sentence]
sentences.append( [w for w in sentence if w not in stop_words] )
return sentences
print(list(stop_words))
sentences = tokenize(d_train.review)
from gensim.models.word2vec import Word2Vec
model = Word2Vec(sentences, size=300, window=10, min_count=1, sample=1e-3, workers=2)
model.init_sims(replace=True)
model['movie']
def featurize_w2v(model, sentences):
f = zeros((len(sentences), model.vector_size))
for i,s in enumerate(sentences):
for w in s:
try:
vec = model[w]
except KeyError:
continue
f[i,:] = f[i,:] + vec
f[i,:] = f[i,:] / len(s)
return f
features_w2v = featurize_w2v(model, sentences)
model4 = RandomForestClassifier(n_estimators=100, n_jobs=-1)
model4.fit(features_w2v, d_train.sentiment)
test_sentences = tokenize(d_test.review)
test_features_w2v = featurize_w2v(model, test_sentences)
pred4 = model4.predict_proba(test_features_w2v)
performance(d_test.sentiment, pred1, ann=False)
performance(d_test.sentiment, pred2, color="b", ann=False)
performance(d_test.sentiment, pred3, color="r", ann=False)
performance(d_test.sentiment, pred4, color="c")
xlim(0,0.3)
ylim(0.6,1)
examples = [
"This movie is bad",
"This movie is great",
"I was going to say something awesome, but I simply can't because the movie is so bad.",
"I was going to say something awesome or great or good, but I simply can't because the movie is so bad.",
"It might have bad actors, but everything else is good."
]
example_feat4 = featurize_w2v(model, tokenize(examples))
model4.predict(example_feat4)
| 0.550124 | 0.807802 |
# Predicting Boston Housing Prices
## Using XGBoost in SageMaker (Batch Transform)
_Deep Learning Nanodegree Program | Deployment_
---
As an introduction to using SageMaker's Low Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston, Massachusetts.
The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/)
## General Outline
Typically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.
1. Download or otherwise retrieve the data.
2. Process / Prepare the data.
3. Upload the processed data to S3.
4. Train a chosen model.
5. Test the trained model (typically using a batch transform job).
6. Deploy the trained model.
7. Use the deployed model.
In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail.
```
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
```
## Step 0: Setting up the notebook
We begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
```
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
```
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
```
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
```
## Step 1: Downloading the data
Fortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
```
boston = load_boston()
```
## Step 2: Preparing and splitting the data
Given that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
```
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
```
## Step 3: Uploading the data files to S3
When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details.
### Save the data locally
First we need to create the test, train and validation csv files which we will then upload to S3.
```
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we do not include header
# information or an index, since the built-in algorithms provided by Amazon expect that format. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
```
### Upload to S3
Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
```
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
```
## Step 4: Train and construct the XGBoost model
Now that we have the training and validation data uploaded to S3, we can construct a training job for our XGBoost model and build the model itself.
### Set up the training job
First, we will set up and execute a training job for our model. To do this we need to specify some information that SageMaker will use to set up and properly execute the computation. For additional documentation on constructing a training job, see the [CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) reference.
```
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
```
### Execute the training job
Now that we've built the dictionary object containing the training job parameters, we can ask SageMaker to execute the job.
```
# First we need to choose a training job name. This is useful if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
print(training_job_name)
```
The training job has now been created by SageMaker and is currently running. Since we need the output of the training job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the training job and continue doing so until the training job terminates.
```
session.logs_for_job(training_job_name, wait=True)
```
### Build the model
Now that the training job has completed, we have some model artifacts which we can use to build a model. Note that here we mean SageMaker's definition of a model, which is a collection of information about a specific algorithm along with the artifacts which result from a training job.
```
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
```
## Step 5: Testing the model
Now that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier.
### Set up the batch transform job
Just like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.
We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
```
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
    # This sets the maximum number of parallel requests that can be sent to each instance; the number of
    # compute instances themselves is specified further down in TransformResources. For a very large batch
    # transform job it may be worth increasing both.
    "MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
    # SageMaker does in the background is to split our data up into chunks so that each chunk stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time, however in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
```
### Execute the batch transform job
Now that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for the transform job to complete.
```
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
```
### Analyze the results
Now that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
```
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
```
To see how well our model works we can create a simple scatter plot of the predicted values against the actual values. If the model were completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay, but there is room for improvement.
```
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
```
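To put a number on that visual impression, one option (a small addition, not part of the original notebook flow) is to compute an error metric such as the root mean squared error between the predictions and the held-out targets:
```
# Quantify the fit on the test set with RMSE (smaller is better); the target is in units of $1000s
rmse = np.sqrt(np.mean((Y_test.values.ravel() - Y_pred.values.ravel()) ** 2))
print("Test RMSE: {:.2f}".format(rmse))
```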
## Optional: Clean up
The default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
```
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
```
|
github_jupyter
|
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
boston = load_boston()
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
prefix = 'boston-xgboost-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. You may wish to change these to see what effect
# there is on the resulting model.
training_params['HyperParameters'] = {
"max_depth": "5",
"eta": "0.2",
"gamma": "4",
"min_child_weight": "6",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
# First we need to choose a training job name. This is useful for if we want to recall information about our
# training job at a later date. Note that SageMaker requires a training job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
training_job_name = "boston-xgboost-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
training_params['TrainingJobName'] = training_job_name
# And now we ask SageMaker to create (and execute) the training job
training_job = session.sagemaker_client.create_training_job(**training_params)
print(training_job_name)
session.logs_for_job(training_job_name, wait=True)
# We begin by asking SageMaker to describe for us the results of the training job. The data structure
# returned contains a lot more information than we currently need, try checking it out yourself in
# more detail.
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
# SageMaker does in the background is to split our data up into chunks so that each chunks stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time, however in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
| 0.545044 | 0.986954 |
# Programming Assignment: Logistic Regression
## Introduction
Logistic regression is one kind of linear classifier. One of its distinctive features is that it can estimate class probabilities, whereas most linear classifiers can only output class labels.
Logistic regression uses a fairly involved quality functional that does not admit a closed-form solution (unlike, for example, linear regression). Nevertheless, logistic regression can be fitted with gradient descent.
We will work with a sample containing two features, and we assume the labels take values in the set {-1, 1}. To fit the logistic regression we will solve the following problem:
$$\min_{w_1, w_2} \; \frac{1}{\ell} \sum_{i=1}^{\ell} \log\left(1 + \exp\bigl(-y_i (w_1 x_{i1} + w_2 x_{i2})\bigr)\right) + \frac{C}{2}\left(w_1^2 + w_2^2\right)$$
Here $x_{i1}$ and $x_{i2}$ are the values of the first and second features, respectively, for the object $x_{i}$. To keep things simple, in this assignment we consider algorithms without an intercept term.
A gradient step consists of simultaneously updating the weights $w_1$ and $w_2$ according to the following formulas (check for yourself that these really are the derivatives of our functional):
$$w_1 \leftarrow w_1 + k \frac{1}{\ell} \sum_{i=1}^{\ell} y_i x_{i1} \left(1 - \frac{1}{1 + \exp\bigl(-y_i (w_1 x_{i1} + w_2 x_{i2})\bigr)}\right) - k C w_1$$
$$w_2 \leftarrow w_2 + k \frac{1}{\ell} \sum_{i=1}^{\ell} y_i x_{i2} \left(1 - \frac{1}{1 + \exp\bigl(-y_i (w_1 x_{i1} + w_2 x_{i2})\bigr)}\right) - k C w_2$$
Here $k$ is the step size.
Linear methods can overfit and perform poorly because of various problems in the data: multicollinearity, noise, and so on. To avoid this, regularization should be used: it lowers the complexity of the model and prevents overfitting. The strength of the regularization is controlled by the coefficient C in the formulas above.
## Implementation in Scikit-Learn
In this assignment we ask you to implement gradient descent yourself.
As the quality metric we will use `AUC-ROC` (`Area Under ROC-Curve`). It is designed for binary classification algorithms that output a score for how likely an object is to belong to one of the classes. In essence, the value of this metric aggregates the quality of all the classifiers that can be obtained by choosing some threshold on that score.
In `Scikit-Learn` the `AUC` metric is implemented by the function `sklearn.metrics.roc_auc_score`. It takes the vector of true labels as its first argument and the vector of scores for the positive class as its second.
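For illustration, a toy call might look like this (the labels and scores below are made up and are not part of the assignment data):
```
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([-1, -1, 1, 1])           # true labels in {-1, 1}
y_score = np.array([0.1, 0.4, 0.35, 0.8])   # estimated probabilities of the positive class
print(roc_auc_score(y_true, y_score))       # 0.75 for this toy example
```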
## Materials
* [More on logistic regression and predicting probabilities with it](https://github.com/esokolov/ml-course-hse/blob/master/2016-fall/lecture-notes/lecture05-linclass.pdf)
* [More on gradients and gradient descent](https://github.com/esokolov/ml-course-hse/blob/master/2016-fall/lecture-notes/lecture02-linregr.pdf)
## Instructions
### Step 1:
Load the data from the file `data-logistic.csv`. This is a two-dimensional sample whose target variable takes the values -1 or 1.
```
import numpy as np
import pandas as pd
from sklearn.metrics import mean_squared_error as mse, roc_auc_score
df = pd.read_csv('data-logistic.csv', header=None)
X = df.loc[:, 1:]
y = df[0]
```
### Step 2:
Make sure that the gradient descent formulas written out above are correct. Note that we use full (batch) gradient descent rather than its stochastic variant!
```
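# sigma_y(i, w1, w2): sigmoid of the signed margin y_i * (w1*x_i1 + w2*x_i2) for object i.
# delta_for_w(w_index, ...): the full increment added to the chosen weight in one gradient step,
# including the step size k and the L2 penalty term k*C*w.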
def sigma_y(i, w1, w2):
return 1. / (1. + np.exp(-y[i] * (w1*X[1][i] + w2*X[2][i])))
def delta_for_w(w_index, w1, w2, C, k):
addition = sum(y[i] * X[w_index][i] * (1. - sigma_y(i, w1, w2)) for i in np.arange(0, len(y)))
addition *= k / len(y)
addition -= k * C * (w1 if w_index == 1 else w2)
return addition
```
### Step 3:
Implement gradient descent for both plain and `L2`-regularized (with regularization coefficient 10) logistic regression. Use step size `k=0.1` and the vector (0, 0) as the initial approximation.
```
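# Full (batch) gradient descent starting from (0, 0); it stops when the root-mean-square difference
# between consecutive weight vectors drops below ERROR (a proxy for the Euclidean-distance criterion
# in Step 4), or when the iteration budget is exhausted.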
def gradient_regressor(C, iterations_remaining=10000, k=0.1, ERROR=1e-5):
changed_w1, changed_w2 = 0., 0.
while iterations_remaining:
iterations_remaining -= 1
w1, w2 = changed_w1, changed_w2
changed_w1 = w1 + delta_for_w(1, w1, w2, C, k)
changed_w2 = w2 + delta_for_w(2, w1, w2, C, k)
if np.sqrt(mse([w1, w2], [changed_w1, changed_w2])) <= ERROR:
break
return changed_w1, changed_w2
```
### Step 4:
Run gradient descent until convergence (the Euclidean distance between the weight vectors on consecutive iterations should be at most 1e-5). It is recommended to cap the number of iterations at ten thousand.
```
def sigma(xi, w1, w2):
return 1. / (1 + np.exp(-w1 * xi[1] - w2 * xi[2]))
w1, w2 = gradient_regressor(0.)
l2w1, l2w2 = gradient_regressor(10.)
print(w1, w2, l2w1, l2w2)
scores = X.apply(lambda xi: sigma(xi, w1, w2), axis=1)
l2scores = X.apply(lambda xi: sigma(xi, l2w1, l2w2), axis=1)
```
### Step 5:
What value does `AUC-ROC` take on the training set without regularization and with it? These two numbers, separated by a space, are the answer to the assignment. Note that `roc_auc_score` must be given the probability estimates produced by the trained algorithm. Use the sigmoid function for this: $a(x) = 1 / (1 + \exp(-w_1 x_1 - w_2 x_2))$.
```
auc_score = roc_auc_score(y, scores)
l2_auc_score = roc_auc_score(y, l2scores)
print(auc_score)
print(l2_auc_score)
with open("ans.txt", "w") as f:
f.write(str(auc_score) + ' ' + str(l2_auc_score))
```
### Step 6:
Try changing the step size. Does the algorithm still converge if you take larger steps? How does the number of iterations change as the step size decreases?
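One way to set up this experiment (a sketch: it reuses `delta_for_w` and mirrors `gradient_regressor`, but also returns the number of iterations actually performed):
```
def gradient_regressor_count(C, max_iterations=10000, k=0.1, ERROR=1e-5):
    # same update rule as gradient_regressor, with an explicit iteration counter
    w1, w2 = 0., 0.
    for iteration in range(1, max_iterations + 1):
        new_w1 = w1 + delta_for_w(1, w1, w2, C, k)
        new_w2 = w2 + delta_for_w(2, w1, w2, C, k)
        if np.sqrt(mse([w1, w2], [new_w1, new_w2])) <= ERROR:
            return new_w1, new_w2, iteration
        w1, w2 = new_w1, new_w2
    return w1, w2, max_iterations

for step in [0.01, 0.1, 0.5, 1.0]:
    _, _, n_iter = gradient_regressor_count(10., k=step)
    print('k = {}: stopped after {} iterations'.format(step, n_iter))
```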
### Step 7:
Try changing the initial approximation. Does it affect anything?
If an answer is not an integer, separate the integer and fractional parts with a period, for example 0.421. Round the fractional part to three digits if necessary.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
path = '/Users/redabelhaj/Desktop/INF473V/Projet/scaling_results/scaling.xlsx'
ds = pd.read_excel(path, sheet_name = ['w_new', 'r_new', 'd_new', 'basline', 'c_new'], header=None)
w_new, r_new, d_new, base, c_new = ds['w_new'], ds['r_new'], ds['d_new'], ds['basline'], ds['c_new']
w_moy = np.mean(w_new).to_numpy()
d_moy = np.mean(d_new).to_numpy()
r_moy = np.mean(r_new).to_numpy()
c_moy = np.mean(c_new).to_numpy()
base_moy = np.mean(base).to_numpy()
flops = [1, 1.33, 1.66, 2, 3]
## a plot without error bars
c_moy= c_moy[~pd.isnull(c_moy)]
w_moy_f = np.concatenate([base_moy, w_moy])
d_moy_f = np.concatenate([base_moy, d_moy])
r_moy_f = np.concatenate([base_moy, r_moy])
c_moy_f = np.concatenate([base_moy, c_moy])
plt.plot(flops, w_moy_f, marker = 'o', color = 'r', label='width')
plt.plot(flops, d_moy_f, marker = 'x', color = 'b', label='depth')
plt.plot(flops, r_moy_f, marker = 'p', color = 'g', label='resolution')
plt.plot([1,2,3], c_moy_f, marker = 'p', color = 'orange', label='compound')
plt.xlabel('Relative flops')
plt.ylabel('CIFAR-10 Top 1-accuracy')
plt.legend()
plt.grid()
plt.title('Comparing scaling methods')
plt.show()
## a plot with error bars
base_error = np.std(base).to_numpy()
base_error
w_err = np.std(w_new).to_numpy()
d_err = np.std(d_new).to_numpy()
r_err = np.std(r_new).to_numpy()
c_err = np.std(c_new).to_numpy()
c_err2 = c_err[~pd.isnull(c_err)]
w_err_f = np.concatenate([base_error, w_err])
d_err_f = np.concatenate([base_error, d_err])
r_err_f = np.concatenate([base_error, r_err])
c_err_f = np.concatenate([base_error, c_err2])
plt.errorbar(flops, w_moy_f,yerr=w_err_f, marker = 'x', color = 'g', ecolor = 'purple', capsize=5)
plt.xlabel('relative flops')
plt.ylabel('CIFAR-10 top-1 accuracy')
plt.title('Width scaling with std error bars')
plt.grid()
plt.show()
plt.errorbar(flops, d_moy_f,yerr=d_err_f, marker = 'x', color = 'g', ecolor = 'purple', capsize=5)
plt.xlabel('relative flops')
plt.ylabel('CIFAR-10 top-1 accuracy')
plt.title('Depth scaling with std error bars')
plt.grid()
plt.show()
plt.errorbar(flops, r_moy_f,yerr=r_err_f, marker = 'x', color = 'g', ecolor = 'purple', capsize=5)
plt.xlabel('relative flops')
plt.ylabel('CIFAR-10 top-1 accuracy')
plt.title('Resolution scaling with std error bars')
plt.grid()
plt.show()
plt.errorbar([1,2,3], c_moy_f,yerr=c_err_f, marker = 'x', color = 'g', ecolor = 'purple', capsize=5)
plt.xlabel('relative flops')
plt.ylabel('CIFAR-10 top-1 accuracy')
plt.title('Compound scaling with std error bars')
plt.grid()
plt.show()
```
```
import ast
import pandas as pd
#---------------STARTING WITH R8----------------------------
##------------BUILDING THE DATASET
X_train = pd.read_csv('datasets/r8-train-all-terms.txt', sep="\t", header=None)
X_test = pd.read_csv('datasets/r8-test-all-terms.txt', sep="\t", header=None)
data_r8=pd.concat([X_train,X_test], ignore_index=True)
data_r8.columns = ["class", "text"]
classes_count = data_r8.groupby('class').count().sort_values(by=['text'],ascending=False)
classes_count
#---------------STARTING WITH R52----------------------------
##------------BUILDING THE DATASET
X_train = pd.read_csv('datasets/r52-train-all-terms.txt', sep="\t", header=None)
X_test = pd.read_csv('datasets/r52-test-all-terms.txt', sep="\t", header=None)
data_r52=pd.concat([X_train,X_test], ignore_index=True)
data_r52.columns = ["class", "text"]
classes_count = data_r52.groupby('class').count().sort_values(by=['text'],ascending=False)
classes_count
#---------------STARTING WITH R40----------------------------
##------------BUILDING THE DATASET
X_train = pd.read_csv('datasets/r40_texts.txt', header=None)
X_labels = pd.read_csv('datasets/r40_labels.txt', header=None)
data_r40=pd.concat([X_labels, X_train], axis=1, ignore_index=True)
data_r40.columns = ["class", "text"]
classes_count = data_r40.groupby('class').count().sort_values(by=['text'],ascending=False)
classes_count
# ---------------Dataset CLASSIC4 & CLASSIC3----------------------------
##------------BUILDING THE DATASET
list_ = []
with open("datasets/classic4.json", 'r+') as g:
for x in g:
list_.append(ast.literal_eval(x))
data = pd.DataFrame(list_)
data.columns = ["text", "class"]
classes_count = data.groupby('class').count().sort_values(by=['text'], ascending=False)
# ---------------DATASET WEBKB STEMMED----------------------------
##------------BUILDING THE DATASET
X_train = pd.read_csv('datasets/webkb-train-stemmed.txt', sep="\t", header=None)
X_test = pd.read_csv('datasets/webkb-test-stemmed.txt', sep="\t", header=None)
data_webkb = pd.concat([X_test, X_train], ignore_index=True)
data_webkb.columns = ["class", "text"]
classes_count = data_webkb.groupby('class').count().sort_values(by=['text'], ascending=False)
classes_count
#====>
#---------------DATASET NG20----------------------------
##------------BUILDING THE DATASET
from sklearn.datasets import fetch_20newsgroups
ng20 = fetch_20newsgroups()
data_ng20 = ng20.data
labels_ng20 = ng20.target
pd.DataFrame([data_ng20, labels_ng20])
categories = ['alt.atheism', 'talk.religion.misc', 'comp.graphics', 'sci.space']
```
# Quantitative Evaluation
Flip the top-ranked pixels and measure how much the log-odds of the top-1 predicted class drop.
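As a reminder of the quantity being plotted, here is a minimal sketch of how a log-odds drop can be computed from a class probability (illustrative only; the records loaded below already contain these values):
```
import numpy as np

def log_odds(p):
    # log odds of the predicted class, given its probability p
    return np.log(p / (1. - p))

orig = log_odds(0.95)      # before flipping any pixels
flipped = log_odds(0.60)   # after flipping the top-ranked pixels
print(orig - flipped)      # the drop reported in the plots below
```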
```
%matplotlib inline
import torch
import os
import pandas as pd
import seaborn as sns
```
Collect all the files that end with 'records.th'.
```
def get_file_names(directory):
directory = os.path.join('../result', directory)
result = []
for filename in os.listdir(directory):
if filename.endswith("records.th"):
result.append(filename)
return result
arr = []
identifiers = ['1013-vbd_l1_opposite-0.1']
for directory in identifiers:
filenames = get_file_names(directory)
arr.append(filenames)
import matplotlib.pyplot as plt
import numpy as np
def plot_given_file(ax, filepath, name):
    orig_log_odds, all_log_odds, unnormalized_img, imp_vector, rodds = \
        torch.load(filepath)
    x_flip = np.array([0] + list(all_log_odds.keys()))
    y_flip = np.array([orig_log_odds] + list(all_log_odds.values()))
    ax.plot(x_flip, y_flip, label='{} flip'.format(name))
    if 'p_b' not in name:
        x = [k for k, v in rodds]
        y = np.array([v for k, v in rodds])
        ax.scatter(x, y, label='{} random'.format(name))
    return unnormalized_img
for idx in range(10):
fig, ax = plt.subplots()
for name in [
# '1013-vbd_l1_opposite-0.1/8_{}_records.th'.format(idx),
# '1013-vbd_l1_opposite-1E-3/8_{}_records.th',.format(idx)
# '1013-vbd_l1_opposite-1E-4', '1013-vbd_l1_opposite-1E-5',
# '1013-vbd_l1_opposite-1E-6/8_{}_records.th',.format(idx)
'1013-vbd_l1_opposite-0/8_{}_records.th'.format(idx),
'1018-vbd_l1-0/8_{}_records.th'.format(idx),
'1013-vbd_opposite-0.5-0.1/8_{}_records.th'.format(idx),
# '1013-vbd_opposite-0.5-1.0/8_{}_records.th'.format(idx),
'1013-p_b/8_3_{}_records.th'.format(idx)
]:
path = '../result/{}'.format(name)
thereal_name = name.split('/')[0]
unnormalized_img = plot_given_file(ax, path, name=thereal_name)
# plot_given_file(ax, '../imgs/val_benchmark/0927_ae_hole_p_b_val/{}'.format(arr[0][idx]), name='p_b')
plt.ylabel('Log odds')
plt.legend(bbox_to_anchor=(1, 1))
plt.show()
identifiers = [ '1005-p_b',
'1005-vbd-p0.5-0.001', '1005-vbd-p0.5-0.01', '1005-vbd-p0.5-0.1',
'1005-vbd-p0.999-1E-4', '1005-vbd-p0.999-1E-5', '1005-vbd-p0.999-1E-6',
'1005-vbdl1-1E-3', '1005-vbdl1-1E-4', '1005-vbdl1-1E-5',
]
arr = [get_file_names(i) for i in identifiers]  # get_file_names already prefixes '../result'
def prepare_pd_table(arr, identifiers):
result = []
    for i in range(len(identifiers)):
        identifier = identifiers[i]
        for j in range(len(arr[i])):
orig_log_odds, all_log_odds_dict, unnormalized_img, imp_vector = \
torch.load(os.path.join('../result', '%s' % identifier, arr[i][j]))
for key in all_log_odds_dict:
log_odds_drop = orig_log_odds - all_log_odds_dict[key]
result.append([identifier + '(n = %d)' % (len(arr[i])), j, key, log_odds_drop])
result = pd.DataFrame(result)
result.columns = ['method', 'img_index', 'num_flippings', 'odds_diff']
return result
```
## Old
```
orig_log_odds, all_log_odds, unnormalized_img, imp_vector, rodds = \
torch.load('../result/1007-vbd_l1_opposite-0.1/' + arr[0][0])
```
## Comparison between vbd, vbdl1, and vbd 0.999
Note that there are only 20 images here.
```
table = prepare_pd_table(arr, identifiers)
ax = sns.boxplot(x="num_flippings", y="odds_diff", hue="method", data=table)
ax.legend(bbox_to_anchor=(1, 1))
```
Linear Algebra is the topic of Chapter 12 of [A Guided Tour of Mathematical Methods for the Physical Sciences](http://www.cambridge.org/nz/academic/subjects/physics/mathematical-methods/guided-tour-mathematical-methods-physical-sciences-3rd-edition#KUoGXYx5FTwytcUg.97). In this notebook we treat linear regression on sea level measurements in the Auckland harbour as an application of linear algebra, as we solve the normal equations that describe linear regression. In the process, this notebook also has bearing on Inverse Problems (Chapter 22), and Statistics (Chapter 21).
### Sea Level Data ###
Sea level measurements for the Auckland harbour can be downloaded [here](https://ndownloader.figshare.com/files/11235611) (thanks to Dr. John Hannah), using [pandas](https://pandas.pydata.org/). The value for each year is the average of many measurements taken throughout that year, which yields a mean value and standard deviation. We read the data into a pandas data structure, convert it to a matrix, and extract the time (first column) and sea level measurement (second column):
```
import pandas as pd
url="https://auckland.figshare.com/ndownloader/files/21844113"
df = pd.read_csv(url,sep=',')
npdat =df.to_numpy()
time = npdat[:,0]
height = npdat[:,1]
print(height)
print(time)
```
Next, we plot sea level in the Auckland Harbour, as a function of time:
```
import matplotlib.pyplot as plt
plt.plot(time,height,'ro')
plt.grid()
plt.xlabel('Date (year)')
plt.ylabel('Mean Sea Level (m)')
plt.title('Princes Wharf, Auckland (New Zealand)')
plt.axis('tight')
plt.show()
```
### Linear Regression ###
There is an obvious spread in these measurements. You can read about the challenges of tidal gauge readings [here](http://www.fig.net/resources/monthly_articles/2010/hannah_july_2010.asp). Nevertheless, a trend of increasing depth seems clear. The rest of this notebook attempts to answer the question: "What is the best fitting linear equation to these data?"
Later, in Chapter 22, we will discuss Inverse Problems, where we address important questions about what it means to fit the data. Here, we are more concerned with minimizing the misfit between a vector of $n$ data points, $\mathbf{d}$, and those predicted by a model $\mathbf{m}$ that is represented by 2 variables: the slope and intercept of a straight line.
If we accept that these data can be represented by a straight line, then any datum at time $t$ would have a water depth $d = intercept + slope*t$. This would be true for all data, so we can write this in matrix form. If
$$ \mathbf{A}\mathbf{m}= \mathbf{d},$$ then
$\mathbf{A} = \begin{pmatrix}1 & t_1 \\ \vdots & \vdots\\ 1& t_n\end{pmatrix}$, $\mathbf{m} = \begin{pmatrix} intercept \\ slope \end{pmatrix}$, and $\mathbf{d} = \begin{pmatrix} d_1 \\ \vdots\\ d_n\end{pmatrix}$ .
### Normal equations ###
The slope and intercept that best fit the data in a least-squares sense are derived in many textbooks, including Section 22.2 of ours. Suffice it to say that we want to manipulate the linear system of equations so that we "free up" $\mathbf{m}$. If $\mathbf{A}$ had an inverse, we could multiply the left and right sides of $ \mathbf{A}\mathbf{m}= \mathbf{d}$ by it to achieve our goal. But $\mathbf{A}$ is not even square, so there is no chance of that! The next best scenario is to multiply the left and right sides of $\mathbf{A}\mathbf{m}= \mathbf{d}$ by the transpose of $\mathbf{A}$:
$$ \mathbf{A}^T\mathbf{A}\mathbf{m} = \mathbf{A}^T\mathbf{d}.$$
These are the so-called *normal equations*, and we can rewrite this system as
$$ \tilde{\mathbf{A}}\mathbf{m} = \tilde{\mathbf{d}},$$
where $ \tilde{\mathbf{A}} = \mathbf{A}^T\mathbf{A}$ and $ \tilde{\mathbf{d}}= \mathbf{A}^T\mathbf{d}$. We could tackle the problem of solving for $\mathbf{m}$ with Singular Value Decomposition (SVD, see Section 12.6), for example. Python has many ways to solve this system of equations, and here is our function to find the best fitting line through a set of points:
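For the straight-line model above, these matrix products reduce to a small $2\times2$ system whose entries are simple sums over the data; they are exactly the quantities assembled in the function below:
$$ \tilde{\mathbf{A}} = \mathbf{A}^T\mathbf{A} = \begin{pmatrix} n & \sum_i t_i \\ \sum_i t_i & \sum_i t_i^2 \end{pmatrix}, \qquad \tilde{\mathbf{d}} = \mathbf{A}^T\mathbf{d} = \begin{pmatrix} \sum_i d_i \\ \sum_i t_i d_i \end{pmatrix}.$$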
```
import numpy as np # numerical tools
def my_linregress(t,y):
''' this linear regression function takes (t,y) data and returns best fitting slope and intercept only:
bells = whistles = 0'''
a22 = np.dot(t,t)
a12 = np.sum(t)
a21 = a12
a11 = len(t)
Atilde = np.array([[a11,a12], [a21,a22]])
d2 = np.dot(t,y)
d1 = np.sum(y)
dtilde = np.array([d1,d2])
[intercept, slope] = np.linalg.solve(Atilde,dtilde)
return intercept, slope
```
Take the time to confirm the elements of the vectors and matrix involved!! Once you are convinced, you can call this function:
```
intercept,slope = my_linregress(time, height)
print(intercept, slope)
```
And plot this best-fitting line through our data:
```
plt.plot(time,height,'ro')
plt.plot(time,intercept+slope*time,'k')
plt.grid()
plt.xlabel('Date (year)')
plt.ylabel('Mean Sea Level (m)')
plt.title('Princes Wharf, Auckland (New Zealand)')
plt.legend(['Measurements','Slope is {:.2f} mm/y'.format(1000*slope)],loc=4)
plt.axis('tight')
plt.show()
```
### Residuals or misfit ###
The line looks like a reasonable representation of the data, but how did we do from a quantitative point of view? Let's compute the mean and standard deviation of the residual values. These are the values of the water depth minus the best fitting straight line through the data:
```
residuals = height-(intercept + time*slope)
mu = np.mean(residuals)
std = np.std(residuals)
print(mu, std)
```
The mean is practically zero, which shows that, averaged over all data, we underestimate the observations just as much as we overestimate them. The standard deviation turns out to be 3.4 cm. Why do you think this is? What factors can you think of that contribute to the standard deviation? The scientists involved in collecting these data estimate the standard error in each of these annual means for sea level in Auckland is 2.5 cm.
If we were to feel that 3.4 cm is a "poor" fit, one could always fit the data better with a model that has more degrees of freedom. Does this experiment warrant a quadratic term? Or even higher-order polynomials? Maybe not over these 100 years of data, but if the rise is due to climate change and we have positive feedbacks, maybe we should account for that in our model. In any case:
**Given enough degrees of freedom in the model, we can fit the (any) data perfectly!**
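For instance, a quadratic fit is a quick way to add freedom (a sketch using `numpy.polyfit`; the quadratic is an arbitrary choice, not an endorsement of that model):
```
coeffs = np.polyfit(time, height, 2)               # quadratic model in time
quad_residuals = height - np.polyval(coeffs, time)
print(np.std(quad_residuals))                      # compare with the 3.4 cm from the linear fit
```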
### Linear regression "out of the box"
By the way, there are many ways to do linear regression, or more advanced polynomial fitting, in python. Here's one example from the stats functions in scipy:
```
from scipy.stats import linregress # linear regression function
slope, intercept, r_value, p_value, std_err = linregress(time,height)
print(slope, intercept, r_value, p_value)
plt.plot(time,height,'ro')
plt.plot(time,intercept+slope*time,'k')
plt.grid()
plt.xlabel('Date (year)')
plt.ylabel('Water depth (mm)')
plt.title('Mean sea level, Auckland (New Zealand)')
plt.legend(['measurements','slope is {:.2f} mm/y'.format(1000*slope)],loc=4)
plt.axis('tight')
plt.show()
```
### Climate change? An exercise ###
Australian scientists confirm their historic data also support a [1.6 mm/y rise in sea level](https://en.wikipedia.org/wiki/Sea_level_rise) averaged over the last 100+ years. However, tidal gauge and satellite data from the last decade(s) indicate sea level may now be rising at double this rate! With this info, have another look at the Auckland data. Most sea level values in the 2000s fall *above* the regression line. It would require more than data from one tidal gauge to conclude this is significant, of course, especially when you learn that the Auckland tidal gauge has been moved to a different site three times since 2000. However, for the sake of a fitting exercise, we encourage the reader to [fit these data with an exponent](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html), for example. Are the residuals smaller than for a linear fit? Closer to the reported standard error in the data?
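A possible starting point for this exercise is sketched below with `scipy.optimize.curve_fit`; the model form and the initial guess `p0` are assumptions and will likely need tuning:
```
from scipy.optimize import curve_fit

def expo_model(t, a, b, tau):
    # hypothetical model: a constant level plus an exponential rise with timescale tau (years)
    return a + b * np.exp((t - time.min()) / tau)

popt, pcov = curve_fit(expo_model, time, height, p0=[height.mean(), 0.01, 100.])
exp_residuals = height - expo_model(time, *popt)
print(np.std(exp_residuals))   # compare with the linear-fit residuals
```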
```
from __future__ import division
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
import sys
from tkinter import *
import tkinter.filedialog as fdialog
from collections import defaultdict
from io import StringIO
from PIL import Image
import cv2
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")
# Path to frozen detection graph. This is the actual model that is used for the object detection.
# Note: an SSD Mobilenet v1 graph can be used instead (commented path below); here we use Faster R-CNN Inception v2
#PATH_TO_CKPT = './object_detection/ssd_mobilenet_v1_coco_2017_11_17/frozen_inference_graph.pb'
PATH_TO_CKPT = './object_detection/faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb'
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = './object_detection/data/mscoco_label_map.pbtxt'
NUM_CLASSES = 90
detection_graph = tf.Graph()
with detection_graph.as_default():
od_graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
#print(categories)
#print(category_index)
def count_nonblack_np(img):
"""Return the number of pixels in img that are not black.
img must be a Numpy array with colour values along the last axis.
"""
return img.any(axis=-1).sum()
def detect_team(image, col1, col2, col_gk, show = False):
# convert to HSV colorbase
img_hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
# define color intervals (in HSV colorbase)
lower_yellow = np.array([25,100,100])
upper_yellow = np.array([35,255,255])
lower_lightblue = np.array([95,80,80])
upper_lightblue = np.array([120,170,255])
lower_blue = np.array([100,80,80])
upper_blue = np.array([120,255,255])
lower_red = np.array([165,50,100])
upper_red = np.array([180,255,255])
#lower_red2 = np.array([0,50,100])
#upper_red2 = np.array([10,255,255])
lower_purple = np.array([130,80,80])
upper_purple = np.array([160,255,255])
lower_green = np.array([35,80,80])
upper_green = np.array([50,255,255])
lower_orange = np.array([28,80,80])
upper_orange = np.array([30,255,255])
lower_white = np.array([0,0,240])
upper_white = np.array([190,60,255])
    # map each supported color name to its (lower, upper) HSV boundaries
    color_ranges = {
        'yellow': (lower_yellow, upper_yellow),
        'lightblue': (lower_lightblue, upper_lightblue),
        'blue': (lower_blue, upper_blue),
        'red': (lower_red, upper_red),
        'purple': (lower_purple, upper_purple),
        'green': (lower_green, upper_green),
        'orange': (lower_orange, upper_orange),
        'white': (lower_white, upper_white),
    }
    # boundaries for team 1, team 2 and the goalkeeper, in that order
    boundaries = [
        color_ranges[col1],
        color_ranges[col2],
        color_ranges[col_gk]
    ]
# ([25, 146, 190], [96, 174, 250]) #yellow
i = 0
for (lower, upper) in boundaries:
# create NumPy arrays from the boundaries
lower = np.array(lower, dtype = "uint8")
upper = np.array(upper, dtype = "uint8")
# find the colors within the specified boundaries and apply
# the mask
mask = cv2.inRange(img_hsv, lower, upper)
output = cv2.bitwise_and(image, image, mask = mask)
tot_pix = count_nonblack_np(image)
color_pix = count_nonblack_np(output)
ratio = color_pix/tot_pix
# print("ratio is:", ratio)
if ratio > 0.01 and i == 0:
return 'Team1' #'red'
elif ratio > 0.01 and i == 1:
return 'Team2' #'yellow'
elif ratio > 0.01 and i == 2:
return 'GK'
i += 1
if show == True:
cv2.imshow("images", np.hstack([image, output]))
if cv2.waitKey(0) & 0xFF == ord('q'):
cv2.destroyAllWindows()
return 'not_sure'
## To View Color Mask
# filename = 'frame74.jpg' #'image2.jpg'
# image = cv2.imread(filename)
# resize = cv2.resize(image, (640,360))
# detect_team(resize, show=True)
# Open a dialog window to choose the video file
root = Tk()
root.filename = fdialog.askopenfile(initialdir = "/",title = "Select file",filetypes = (("avi files","*.avi"),("all files","*.*")))
print('Example picture: ', root.filename.name)
example_image_path = root.filename.name
pathAux = example_image_path.find('/',-20)
example_image = example_image_path[pathAux+1:]
print(example_image)
root.destroy()
color_1 = str(input('Color team 1 (red,yellow,blue,lightblue,green,white,etc) : '))
color_2 = str(input('Color team 2: '))
color_gk = str(input('Color Goalkeeper: '))
name_team1 = str(input('Name team 1: '))
name_team2 = str(input('Name team 2: '))
#initializing the video capture from the selected file
#filename = 'DORSALES/D13.avi' #'soccer_small.mp4'
filename = example_image
cap = cv2.VideoCapture(filename)
size = (int(cap.get(3)),
int(cap.get(4)))
out = cv2.VideoWriter()
fourcc = cv2.VideoWriter_fourcc('m','p','4','v')
success = out.open('soccer_out.avi',fourcc,20.0,size,True)
# Running the tensorflow session
with detection_graph.as_default():
with tf.Session(graph=detection_graph) as sess:
counter = 0
while (True):
ret, image_np = cap.read()
counter += 1
            if not ret:
                break
            h = image_np.shape[0]
            w = image_np.shape[1]
if counter % 1 == 0:
# Expand dimensions since the model expects images to have shape: [1, None, None, 3]
image_np_expanded = np.expand_dims(image_np, axis=0)
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
# Each box represents a part of the image where a particular object was detected.
boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
# Each score represent how level of confidence for each of the objects.
# Score is shown on the result image, together with the class label.
scores = detection_graph.get_tensor_by_name('detection_scores:0')
classes = detection_graph.get_tensor_by_name('detection_classes:0')
num_detections = detection_graph.get_tensor_by_name('num_detections:0')
# Actual detection.
(boxes, scores, classes, num_detections) = sess.run(
[boxes, scores, classes, num_detections],
feed_dict={image_tensor: image_np_expanded})
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
np.squeeze(boxes),
np.squeeze(classes).astype(np.int32),
np.squeeze(scores),
category_index,
use_normalized_coordinates=True,
line_thickness=3,
min_score_thresh=0.6)
frame_number = counter
loc = {}
for n in range(len(scores[0])):
if scores[0][n] > 0.60:
# Calculate position
ymin = int(boxes[0][n][0] * h)
xmin = int(boxes[0][n][1] * w)
ymax = int(boxes[0][n][2] * h)
xmax = int(boxes[0][n][3] * w)
# Find label corresponding to that class
for cat in categories:
if cat['id'] == classes[0][n]:
label = cat['name']
## extract every person
if label == 'person':
#crop them
crop_img = image_np[ymin:ymax, xmin:xmax]
color = detect_team(crop_img,color_1,color_2,color_gk)
if color != 'not_sure':
coords = (xmin, ymin)
if color == 'Team1':
loc[coords] = name_team1
elif color == 'Team2':
loc[coords] = name_team2
elif color == 'GK':
loc[coords] = 'GK'
else:
loc[coords] = name_team2
## print color next to the person
for key in loc.keys():
text_pos = str(loc[key])
                    cv2.putText(image_np, text_pos, (key[0], key[1]-20), cv2.FONT_HERSHEY_SIMPLEX, 0.50, (255, 0, 0), 2)  # label in blue (BGR)
print(counter) #cv2.imshow('image', image_np)
out.write(image_np)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
print('Done!')
cap.release()
out.release()
cv2.destroyAllWindows()
```
# Introduction to Machine Learning:
Examples of Unsupervised and Supervised Machine-Learning Algorithms
========
##### Version 0.1
Broadly speaking, machine-learning methods constitute a diverse collection of data-driven algorithms designed to classify/characterize/analyze sources in multi-dimensional spaces. The topics and studies that fall under the umbrella of machine learning are growing, and there is no good catch-all definition. The number (and variation) of algorithms is vast, and beyond the scope of these exercises. While we will discuss a few specific algorithms today, more importantly, we will explore the scope of the two general methods, unsupervised learning and supervised learning, and introduce the powerful (and dangerous?) Python package [`scikit-learn`](http://scikit-learn.org/stable/).
***
By AA Miller
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib notebook
```
## Problem 1) Introduction to `scikit-learn`
At the most basic level, `scikit-learn` makes machine learning extremely easy within Python. By way of example, here is a short piece of code that builds a complex, non-linear model to classify sources in the Iris data set that we learned about yesterday:
    from sklearn import datasets
    from sklearn.ensemble import RandomForestClassifier
    iris = datasets.load_iris()
    RFclf = RandomForestClassifier().fit(iris.data, iris.target)
Those 4 lines of code have constructed a model that is superior to any system of hard cuts that we could have encoded while looking at the multidimensional space. This can be fast as well: execute the dummy code in the cell below to see how "easy" machine-learning is with `scikit-learn`.
```
# execute dummy code here
from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier
iris = datasets.load_iris()
RFclf = RandomForestClassifier().fit(iris.data, iris.target)
```
Generally speaking, the procedure for `scikit-learn` is uniform across all machine-learning algorithms. Models are accessed via the various modules (`ensemble`, `SVM`, `neighbors`, etc), with user-defined tuning parameters. The features (or data) for the models are stored in a 2D array, `X`, with rows representing individual sources and columns representing the corresponding feature values. [In a minority of cases, `X`, represents a similarity or distance matrix where each entry represents the distance to every other source in the data set.] In cases where there is a known classification or scalar value (typically supervised methods), this information is stored in a 1D array `y`.
Unsupervised models are fit by calling `.fit(X)` and supervised models are fit by calling `.fit(X, y)`. In both cases, predictions for new observations, `Xnew`, can be obtained by calling `.predict(Xnew)`. Those are the basics and beyond that, the details are algorithm specific, but the documentation for essentially everything within `scikit-learn` is excellent, so read the docs.
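For example, reusing the random forest fit in the dummy cell above (the "new" source below is made up purely for illustration):
```
Xnew = [[6.0, 3.0, 4.5, 1.5]]    # hypothetical sepal/petal measurements (cm)
print(RFclf.predict(Xnew))       # predicted class for the new source
```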
To further develop our intuition, we will now explore the Iris dataset a little further.
**Problem 1a** What is the pythonic type of `iris`?
```
type(iris)
```
You likely haven't encountered a `scikit-learn Bunch` before. Its functionality is essentially the same as a dictionary.
**Problem 1b** What are the keys of iris?
```
iris.keys()
```
Most importantly, iris contains `data` and `target` values. These are all you need for `scikit-learn`, though the feature and target names and description are useful.
**Problem 1c** What is the shape and content of the `iris` data?
```
print(np.shape(iris.data))
print(iris.data)
```
**Problem 1d** What is the shape and content of the `iris` target?
```
print(np.shape(iris.target))
print(iris.target)
```
Finally, as a baseline for the exercises that follow, we will now make a simple 2D plot showing the separation of the 3 classes in the iris dataset. This plot will serve as the reference for examining the quality of the clustering algorithms.
**Problem 1e** Make a scatter plot showing sepal length vs. sepal width for the iris data set. Color the points according to their respective classes.
```
print(iris.feature_names) # shows that sepal length is first feature and sepal width is second feature
fig, ax = plt.subplots()
ax.scatter(iris.data[:,0], iris.data[:,1], c = iris.target, s = 30, edgecolor = "None", cmap = "viridis")
ax.set_xlabel('sepal length', fontsize=14)
ax.set_ylabel('sepal width', fontsize=14)
fig.tight_layout()
```
## Problem 2) Supervised Machine Learning
Supervised machine learning, on the other hand, aims to predict a target class or produce a regression result based on the location of labelled sources (i.e. the training set) in the multidimensional feature space. The "supervised" comes from the fact that we are specifying the allowed outputs from the model. As there are labels available for the training set, it is possible to estimate the accuracy of the model (though there are generally important caveats about generalization, which we will explore in further detail later).
The details and machinations of supervised learning will be explored further during the following break-out session. Here, we will simply introduce some of the basics as a point of comparison to unsupervised machine learning.
We will begin with a simple, but nevertheless, elegant algorithm for classification and regression: [$k$-nearest-neighbors](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm) ($k$NN). In brief, the classification or regression output is determined by examining the $k$ nearest neighbors in the training set, where $k$ is a user defined number. Typically, though not always, distances between sources are Euclidean, and the final classification is assigned to whichever class has a plurality within the $k$ nearest neighbors (in the case of regression, the average of the $k$ neighbors is the output from the model). We will experiment with the steps necessary to optimize $k$, and other tuning parameters, in the detailed break-out problem.
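To make the mechanics concrete, here is a tiny NumPy-only sketch of a $k$NN vote (Euclidean distances to the full iris set and a plurality vote among the $k$ closest labels; the query point is made up for illustration):
```
import numpy as np

def knn_predict(x_new, X, y, k=5):
    dists = np.sqrt(np.sum((X - x_new)**2, axis=1))   # Euclidean distance to every training source
    nearest = np.argsort(dists)[:k]                   # indices of the k closest sources
    votes = np.bincount(y[nearest])                   # count the class labels among the neighbors
    return np.argmax(votes)                           # plurality class

print(knn_predict(np.array([6.0, 3.0, 4.5, 1.5]), iris.data, iris.target))
```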
In `scikit-learn` the [`KNeighborsClassifer`](http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html) algorithm is implemented as part of the [`sklearn.neighbors`](http://scikit-learn.org/stable/modules/classes.html#module-sklearn.neighbors) module.
**Problem 2a** Fit two different $k$NN models to the iris data, one with 3 neighbors and one with 10 neighbors. Plot the resulting class predictions in the sepal length-sepal width plane (same plot as above). How do the results compare to the true classifications? Is there any reason to be suspect of this procedure?
*Hint - after you have constructed the model, it is possible to obtain model predictions using the `.predict()` method, which requires a feature array, same features and order as the training set, as input.*
*Hint that isn't essential, but is worth thinking about - should the features be re-scaled in any way?*
```
from sklearn.neighbors import KNeighborsClassifier
KNNclf = KNeighborsClassifier(n_neighbors = 3).fit(iris.data, iris.target)
preds = KNNclf.predict(iris.data)
fig, ax = plt.subplots()
ax.scatter(iris.data[:,0], iris.data[:,1],
c = preds, cmap = "viridis", s = 30, edgecolor = "None")
ax.set_xlabel('sepal length', fontsize=14)
ax.set_ylabel('sepal width', fontsize=14)
fig.tight_layout()
KNNclf = KNeighborsClassifier(n_neighbors = 10).fit(iris.data, iris.target)
preds = KNNclf.predict(iris.data)
fig, ax = plt.subplots()
ax.scatter(iris.data[:,0], iris.data[:,1],
c = preds, cmap = "viridis", s = 30, edgecolor = "None")
ax.set_xlabel('sepal length', fontsize=14)
ax.set_ylabel('sepal width', fontsize=14)
fig.tight_layout()
```
These results are almost identical to the training classifications. However, we have cheated! In this case we are evaluating the accuracy of the model (98% in this case) using the same data that defines the model. Thus, what we have really evaluated here is the training error. The relevant parameter, however, is the generalization error: how accurate are the model predictions on new data?
Without going into too much detail, we will test this using cross validation (CV), which will be explored in more detail later. In brief, CV provides predictions on the training set using a subset of the data to generate a model that predicts the class of the remaining sources. Using [`cross_val_predict`](http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.cross_val_predict.html), we can get a better sense of the model accuracy. Predictions from `cross_val_predict` are produced in the following manner:
    from sklearn.model_selection import cross_val_predict
    CVpreds = cross_val_predict(sklearn.model(), X, y)
where `sklearn.model()` is the desired model, `X` is the feature array, and `y` is the label array.
**Problem 2b** Produce cross-validation predictions for the iris dataset and a $k$NN with 5 neighbors. Plot the resulting classifications, as above, and estimate the accuracy of the model as applied to new data. How does this accuracy compare to a $k$NN with 50 neighbors?
```
from sklearn.model_selection import cross_val_predict
CVpreds = cross_val_predict(KNeighborsClassifier(n_neighbors=5),
iris.data, iris.target, cv=3)
fig, ax = plt.subplots()
ax.scatter(iris.data[:,0], iris.data[:,1],
           c = CVpreds, s = 30, edgecolor = "None", cmap = "viridis")
ax.set_xlabel('sepal length', fontsize=14)
ax.set_ylabel('sepal width', fontsize=14)
fig.tight_layout()
print("The accuracy of the kNN = 5 model is ~{:.4}".format( sum(CVpreds == iris.target)/len(CVpreds) ))
CVpreds50 = cross_val_predict(KNeighborsClassifier(n_neighbors=50),
iris.data, iris.target, cv=3)
print("The accuracy of the kNN = 50 model is ~{:.4}".format( sum(CVpreds50 == iris.target)/len(CVpreds50) ))
```
While it is useful to understand the overall accuracy of the model, it is even more useful to understand the nature of the misclassifications that occur.
**Problem 2c** Calculate the accuracy for each class in the iris set, as determined via CV for the $k$NN = 50 model.
```
for iris_type in range(3):
iris_acc = sum( (CVpreds50 == iris_type) & (iris.target == iris_type)) / sum(iris.target == iris_type)
print("The accuracy for class {:s} is ~{:.4f}".format(iris.target_names[iris_type], iris_acc))
```
We just found that the classifier does a much better job classifying setosa and versicolor than it does for virginica. The main reason for this is that some virginica flowers lie far outside the main virginica locus, and within predominantly versicolor "neighborhoods". In addition to knowing the accuracy for the individual classes, it is also useful to know the class predictions for the misclassified sources, or in other words where there is "confusion" for the classifier. The best way to summarize this information is with a confusion matrix. In a confusion matrix, one axis shows the true class and the other shows the predicted class. For a perfect classifier all of the power will be along the diagonal, while confusion is represented by off-diagonal signal.
Like almost everything else we have encountered during this exercise, `scikit-learn` makes it easy to compute a confusion matrix. This can be accomplished with the following:
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
**Problem 2d** Calculate the confusion matrix for the iris training set and the $k$NN = 50 model.
```
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(iris.target, CVpreds50)
print(cm)
```
From this representation, we see right away that most of the misclassified virginica are being scattered into the versicolor class. However, this representation could still be improved: it'd be helpful to normalize each value relative to the total number of sources in each class, and better still, to have a readily digestible visual representation of the confusion matrix. Let's start by normalizing the confusion matrix.
**Problem 2e** Calculate the normalized confusion matrix. Be careful, you have to sum along one axis, and then divide along the other.
*Anti-hint: This operation is actually straightforward using some array manipulation that we have not covered up to this point. Thus, we have performed the necessary operations for you below. If you have extra time, you should try to develop an alternate way to arrive at the same normalization.*
```
normalized_cm = cm.astype('float')/cm.sum(axis = 1)[:,np.newaxis]
normalized_cm
```
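If you want to check the broadcasting trick above against an independent implementation, recent versions of `scikit-learn` (0.22 or newer) can normalize the confusion matrix for you; this is an optional aside rather than part of the exercise, and it should reproduce `normalized_cm` exactly.
```
from sklearn.metrics import confusion_matrix

# normalize='true' divides each row by the number of true sources in that class
normalized_cm_alt = confusion_matrix(iris.target, CVpreds50, normalize='true')
print(normalized_cm_alt)
```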
The normalization makes it easier to compare the classes, since each class has a different number of sources. Now we can proceed with a visual representation of the confusion matrix. This is best done using `imshow()` within pyplot. You will also need to plot a colorbar, and labeling the axes will also be helpful.
**Problem 2f** Plot the confusion matrix. Be sure to label each of the axes.
*Hint - you might find the [`sklearn` confusion matrix tutorial](http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html#example-model-selection-plot-confusion-matrix-py) helpful for making a nice plot.*
```
plt.imshow(normalized_cm, interpolation = 'nearest', cmap = 'bone_r')# complete
tick_marks = np.arange(len(iris.target_names))
plt.xticks(tick_marks, iris.target_names, rotation=45)
plt.yticks(tick_marks, iris.target_names)
plt.ylabel( 'True')# complete
plt.xlabel( 'Predicted' )# complete
plt.colorbar()
plt.tight_layout()
```
Now it is straightforward to see that virginica and versicolor flowers are the most likely to be confused. We could intuit this from the very first plot in this notebook, but this exercise becomes far more important for large data sets with many classes.
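As an optional aside for those larger problems: on a recent `scikit-learn` (1.0 or newer) the `ConfusionMatrixDisplay` helper builds essentially the same normalized plot directly from labels and predictions. A minimal sketch using the `CVpreds50` computed above:
```
from sklearn.metrics import ConfusionMatrixDisplay

# plots a row-normalized confusion matrix with class names on the axes
ConfusionMatrixDisplay.from_predictions(
    iris.target, CVpreds50,
    display_labels=iris.target_names,
    normalize='true', cmap='bone_r')
plt.tight_layout()
plt.show()
```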
```
# This notebook shows how to convert a BERT model to tf_transformers framework
# We will load a model from BERT hub (downloaded locally) and convert it into
# tf_transformers model
# Hub model
import tensorflow_hub as hub
import json
import tensorflow as tf
from tf_transformers.models import BERTEncoder
from absl import logging
logging.set_verbosity("INFO")
tf.__version__
bert_hub = hub.KerasLayer("../../../pretrained_models/bert_uncased/",
trainable=True)
len(bert_hub.variables)
config = json.load(open("../model_directory/bert_base/bert_config.json"))
# config['max_position_embeddings'] = 1024
# config['intermediate_size'] = 4096
# config['num_attention_heads'] = 16
# config['num_hidden_layers'] = 24
config
16 * 64
# Load Hub Model
# To download Hub Model
# https://tfhub.dev/google/bert_uncased_L-12_H-768_A-12/1
# Download and unzip
bert_hub = hub.KerasLayer("../../../pretrained_models/bert_uncased/",
trainable=True)
tf.keras.backend.clear_session()
tf.keras.backend.clear_session()
config = json.load(open("../model_directory/bert_base/bert_config.json"))
# We have Keras/Legacy Layer here (Not keras.model)
model_layer = BERTEncoder(config=config,
name='bert',
mask_mode='user_defined',
is_training=False
)
ckpt = tf.train.load_checkpoint("/Users/PRVATE/pretrained_models/bert_base/bert_model.ckpt")
model_vars = tf.train.list_variables("/Users/PRVATE/pretrained_models/bert_base/bert_model.ckpt")
len(model_vars)
config
# BERT hub variables to BERT model
mapping_dict = {
'bert_model/word_embeddings/embeddings:0': 'tf_transformers/bert/word_embeddings/embeddings:0',
'bert_model/embedding_postprocessor/type_embeddings:0': 'tf_transformers/bert/type_embeddings/embeddings:0',
'bert_model/embedding_postprocessor/position_embeddings:0': 'tf_transformers/bert/positional_embeddings/embeddings:0',
'bert_model/embedding_postprocessor/layer_norm/gamma:0': 'tf_transformers/bert/embeddings/layer_norm/gamma:0',
'bert_model/embedding_postprocessor/layer_norm/beta:0': 'tf_transformers/bert/embeddings/layer_norm/beta:0',
'bert_model/pooler_transform/kernel:0': 'tf_transformers/bert/pooler_transform/kernel:0',
'bert_model/pooler_transform/bias:0': 'tf_transformers/bert/pooler_transform/bias:0'
}
tf_transformers_bert_index_dict = {}
for index, var in enumerate(model_layer.variables):
# print(index, var.name, var.shape)
temp_var = var.name.replace('tf_transformers/bert/transformer/', '')
tf_transformers_bert_index_dict[temp_var] = index
# legacy_ai <-- hub
assigned_map = []
assigned_map_values = []
for var in bert_hub.variables:
if 'Variable:0' in var.name:
continue
temp_var = var.name.replace('bert_model/encoder/', '')
# If var in mapping dict, then we can get tf_transformers_bert_index_dict[mapping_dict[var]] index
if temp_var in mapping_dict:
index = tf_transformers_bert_index_dict[mapping_dict[temp_var]]
model_layer.variables[index].assign(var)
assigned_map.append((var.name, model_layer.variables[index].name))
assigned_map_values.append((tf.reduce_sum(var).numpy(), tf.reduce_sum(model_layer.variables[index]).numpy()))
continue
# If not in mapping_dict, then mostly it is from attention layer
index = tf_transformers_bert_index_dict[temp_var]
if 'query/kernel:0' in temp_var or 'key/kernel:0' in temp_var or 'value/kernel:0' in temp_var:
# hub (2D) to tf_transformers (3D)
model_layer.variables[index].assign(tf.reshape(var, (config['embedding_size'],
config['num_attention_heads'],
config['embedding_size'] // config['num_attention_heads'])))
assigned_map.append((var.name, model_layer.variables[index].name))
assigned_map_values.append((tf.reduce_sum(var).numpy(), tf.reduce_sum(model_layer.variables[index]).numpy()))
continue
if 'query/bias:0' in temp_var or 'key/bias:0' in temp_var or 'value/bias:0' in temp_var:
# hub (2D) to tf_transformers (3D)
model_layer.variables[index].assign(tf.reshape(var, (config['num_attention_heads'],
config['embedding_size'] // config['num_attention_heads'])))
assigned_map.append((var.name, model_layer.variables[index].name))
assigned_map_values.append((tf.reduce_sum(var).numpy(), tf.reduce_sum(model_layer.variables[index]).numpy()))
continue
# Rest of the variables
model_layer.variables[index].assign(var)
assigned_map.append((var.name, model_layer.variables[index].name))
assigned_map_values.append((tf.reduce_sum(var).numpy(), tf.reduce_sum(model_layer.variables[index]).numpy()))
logging.info("Done assigning variables weights")
# Reference outputs captured from a previous run of the comparison below:
# cls_output --> tf.Tensor(-2.06532, shape=(), dtype=float32) --> (1, 768)
# token_embeddings --> tf.Tensor(-26.426851, shape=(), dtype=float32) --> (1, 3, 768)
# token_logits --> tf.Tensor(-17970.629, shape=(), dtype=float32) --> (1, 3, 30522)
# last_token_logits --> tf.Tensor(-4169.4917, shape=(), dtype=float32) --> (1, 30522)
# Compare the results from Hub and tf_transformers
results_hub = bert_hub([tf.constant([[1, 2,3]]), tf.constant([[1, 1,1]]), tf.constant([[0,0,0]])])
results_legacy = model_layer({'input_ids': tf.constant([[1, 2,3]]),
'input_mask': tf.constant([[1,1,1]]),
'input_type_ids': tf.constant([[0, 0, 0]])})
for r in results_hub:
print(tf.reduce_sum(r), '-->', r.shape)
input_ids = tf.constant([[1, 9, 10, 11, 23],
[1, 22, 234, 432, 2349]])
input_mask = tf.ones_like(input_ids)
input_type_ids = tf.ones_like(input_ids)
results_hub = bert_hub([input_ids, input_mask, input_type_ids])
for r in results_hub:
print(tf.reduce_sum(r), '-->', r.shape)
for k, r in results_legacy.items():
if isinstance(r, list):
continue
print(k, '-->', tf.reduce_sum(r), '-->', r.shape)
# Save the model
checkpoint_dir = '../model_directory/bert_base'
ckpt = tf.train.Checkpoint(model=model_layer)
manager = tf.train.CheckpointManager(ckpt, checkpoint_dir, max_to_keep=1)
save_path = manager.save()
input_ids = tf.constant([[1, 9, 10, 11, 23],
[1, 22, 234, 432, 2349]])
input_mask = tf.ones_like(input_ids)
input_type_ids = tf.ones_like(input_ids)
results_legacy = model_layer({'input_ids': input_ids,
'input_mask': input_mask,
'input_type_ids': input_type_ids})
for k, r in results_legacy.items():
if isinstance(r, list):
continue
print(k, '-->', tf.reduce_sum(r), '-->', r.shape)
```
```
import keras
from keras.models import Sequential, Model, load_model
from keras.layers import Dense, Dropout, Flatten, Input, Lambda, Concatenate
from keras.layers import Conv1D, MaxPooling1D
from keras.callbacks import ModelCheckpoint, EarlyStopping
from keras import backend as K
import keras.losses
import tensorflow as tf
import pandas as pd
import os
import numpy as np
import scipy.sparse as sp
import scipy.io as spio
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import isolearn.io as isoio
import isolearn.keras as iso
from scipy.stats import pearsonr
```
<h2>Load 5' Alternative Splicing Data</h2>
- Load a Pandas DataFrame + Matlab Matrix of measured Splicing Sequences<br/>
- isolearn.io loads all .csv and .mat files of a directory into memory as a dictionary<br/>
- The DataFrame has one column - padded_sequence - containing the splice donor sequence<br/>
- The Matrix contains RNA-Seq counts of measured splicing at each position across the sequence<br/>
```
#Load Splicing Data
splicing_dict = isoio.load('data/processed_data/splicing_5ss_data/splicing_5ss_data')
```
<h2>Create a Training and Test Set</h2>
- We create an index containing row numbers corresponding to training and test sequences<br/>
- Notice that we do not alter the underlying DataFrame, we only make lists of pointers to rows<br/>
```
#Generate training, validation and test set indexes
valid_set_size = 0.10
test_set_size = 0.10
data_index = np.arange(len(splicing_dict['df']), dtype=np.int64)  # np.int is deprecated/removed in newer NumPy versions
train_index = data_index[:-int(len(data_index) * (valid_set_size + test_set_size))]
valid_index = data_index[train_index.shape[0]:-int(len(data_index) * test_set_size)]
test_index = data_index[train_index.shape[0] + valid_index.shape[0]:]
print('Training set size = ' + str(train_index.shape[0]))
print('Validation set size = ' + str(valid_index.shape[0]))
print('Test set size = ' + str(test_index.shape[0]))
```
<h2>Create Data Generators</h2>
- In Isolearn, we always build data generators that will encode and feed us the data on the fly<br/>
- Here, for example, we create a training and test generator separately (using list comprehension)<br/>
- First argument: The list of row indices (of data points) for this generator<br/>
- Second argument: Dictionary or data sources<br/>
- Third argument: Batch size for the data generator
- Fourth argument: List of inputs, where each input is specified as a dictionary of attributes<br/>
- Fifth argument: List of outputs<br/>
- ... (see next example for description of argument 6)<br/>
- Seventh argument: Shuffle the dataset or not<br/>
- Eight argument: True if some data source matrices are in sparse format<br/>
- Ninth argument: In Keras, we typically want to specify the outputs as inputs when training. <br/>This argument achieves that by moving the outputs over to the input list and replacing them with dummy encoders.<br/>
In this example, we specify two One-Hot encoders as the input encoders for different parts of the splice donor sequence.<br/>
We also specify the target output as the normalized RNA-Seq count at position 120 in the count matrix for each cell line (4 outputs).
```
#Create a One-Hot data generator, to be used for a convolutional net to regress SD1 Usage
splicing_gens = {
gen_id : iso.DataGenerator(
idx,
{
'df' : splicing_dict['df'],
'hek_count' : splicing_dict['hek_count'],
'hela_count' : splicing_dict['hela_count'],
'mcf7_count' : splicing_dict['mcf7_count'],
'cho_count' : splicing_dict['cho_count'],
},
batch_size=32,
inputs = [
{
'id' : 'random_region_1',
'source_type' : 'dataframe',
'source' : 'df',
'extractor' : lambda row, index: row['padded_sequence'][122: 122 + 35],
'encoder' : iso.OneHotEncoder(seq_length=35),
'dim' : (35, 4),
'sparsify' : False
},
{
'id' : 'random_region_2',
'source_type' : 'dataframe',
'source' : 'df',
'extractor' : lambda row, index: row['padded_sequence'][165: 165+35],
'encoder' : iso.OneHotEncoder(seq_length=35),
'dim' : (35, 4),
'sparsify' : False
}
],
outputs = [
{
'id' : cell_type + '_sd1_usage',
'source_type' : 'matrix',
'source' : cell_type + '_count',
'extractor' : lambda c, index: np.ravel(c),
'transformer' : lambda t: t[120] / np.sum(t)
} for cell_type in ['hek', 'hela', 'mcf7', 'cho']
],
randomizers = [],
shuffle = True if gen_id in ['train'] else False,
densify_batch_matrices=True,
move_outputs_to_inputs=True if gen_id in ['train', 'valid'] else False
) for gen_id, idx in [('train', train_index), ('valid', valid_index), ('test', test_index)]
}
```
<h2>Keras Loss Functions</h2>
Here we specify a few loss functions (cross-entropy and KL-divergence) to be used when optimizing our Splicing CNN.<br/>
```
#Keras loss functions
def sigmoid_entropy(inputs) :
y_true, y_pred = inputs
y_pred = K.clip(y_pred, K.epsilon(), 1. - K.epsilon())
return -K.sum(y_true * K.log(y_pred) + (1.0 - y_true) * K.log(1.0 - y_pred), axis=-1)
def mean_sigmoid_entropy(inputs) :
y_true, y_pred = inputs
y_pred = K.clip(y_pred, K.epsilon(), 1. - K.epsilon())
return -K.mean(y_true * K.log(y_pred) + (1.0 - y_true) * K.log(1.0 - y_pred), axis=-1)
def sigmoid_kl_divergence(inputs) :
y_true, y_pred = inputs
y_pred = K.clip(y_pred, K.epsilon(), 1. - K.epsilon())
y_true = K.clip(y_true, K.epsilon(), 1. - K.epsilon())
return K.sum(y_true * K.log(y_true / y_pred) + (1.0 - y_true) * K.log((1.0 - y_true) / (1.0 - y_pred)), axis=-1)
def mean_sigmoid_kl_divergence(inputs) :
y_true, y_pred = inputs
y_pred = K.clip(y_pred, K.epsilon(), 1. - K.epsilon())
y_true = K.clip(y_true, K.epsilon(), 1. - K.epsilon())
return K.mean(y_true * K.log(y_true / y_pred) + (1.0 - y_true) * K.log((1.0 - y_true) / (1.0 - y_pred)), axis=-1)
```
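Since these are the objectives we are about to minimize, a quick numerical sanity check can be reassuring. The sketch below re-implements the per-output KL term in plain NumPy; it mirrors the formula above but is not part of the training code. It should return roughly 0 when the predicted usage matches the true usage and grow as the two diverge.
```
import numpy as np

def np_sigmoid_kl(y_true, y_pred, eps=1e-7):
    # Bernoulli KL divergence, same formula as sigmoid_kl_divergence above
    y_true = np.clip(y_true, eps, 1.0 - eps)
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return y_true * np.log(y_true / y_pred) + (1.0 - y_true) * np.log((1.0 - y_true) / (1.0 - y_pred))

print(np_sigmoid_kl(0.3, 0.3))  # ~0: prediction matches the true usage
print(np_sigmoid_kl(0.3, 0.9))  # > 0: penalty grows with the mismatch
```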
<h2>Splicing Model Definition</h2>
Here we specify the Keras Inputs that we expect to receive from the data generators.<br/>
We also define the model architecture (a 2-convolutional-layer CNN with max pooling and layer sharing).<br/>
```
#Splicing Model Definition (CNN)
#Inputs
seq_input_1 = Input(shape=(35, 4))
seq_input_2 = Input(shape=(35, 4))
#Outputs
true_usage_hek = Input(shape=(1,))
true_usage_hela = Input(shape=(1,))
true_usage_mcf7 = Input(shape=(1,))
true_usage_cho = Input(shape=(1,))
#Shared Model Definition (Applied to each randomized sequence region)
layer_1 = Conv1D(64, 8, padding='valid', activation='relu')
layer_1_pool = MaxPooling1D(pool_size=2)
layer_2 = Conv1D(128, 6, padding='valid', activation='relu')
def shared_model(seq_input) :
return Flatten()(
layer_2(
layer_1_pool(
layer_1(
seq_input
)
)
)
)
shared_out_1 = shared_model(seq_input_1)
shared_out_2 = shared_model(seq_input_2)
#Layers applied to the concatenated hidden representation
layer_dense = Dense(256, activation='relu')
layer_drop = Dropout(0.2)
concat_out = Concatenate(axis=-1)([shared_out_1, shared_out_2])
dropped_dense_out = layer_drop(layer_dense(concat_out))
#Final cell-line specific regression layers
layer_usage_hek = Dense(1, activation='sigmoid', kernel_initializer='zeros')
layer_usage_hela = Dense(1, activation='sigmoid', kernel_initializer='zeros')
layer_usage_mcf7 = Dense(1, activation='sigmoid', kernel_initializer='zeros')
layer_usage_cho = Dense(1, activation='sigmoid', kernel_initializer='zeros')
pred_usage_hek = layer_usage_hek(dropped_dense_out)
pred_usage_hela = layer_usage_hela(dropped_dense_out)
pred_usage_mcf7 = layer_usage_mcf7(dropped_dense_out)
pred_usage_cho = layer_usage_cho(dropped_dense_out)
#Compile Splicing Model
splicing_model = Model(
inputs=[
seq_input_1,
seq_input_2
],
outputs=[
pred_usage_hek,
pred_usage_hela,
pred_usage_mcf7,
pred_usage_cho
]
)
```
<h2>Loss Model Definition</h2>
Here we specify our loss function, and we build it as a separate Keras Model.<br/>
In our case, our loss model averages the KL-divergence of predicted vs. true Splice Donor Usage across the 4 different cell types.<br/>
```
#Loss Model Definition
loss_hek = Lambda(sigmoid_kl_divergence, output_shape = (1,))([true_usage_hek, pred_usage_hek])
loss_hela = Lambda(sigmoid_kl_divergence, output_shape = (1,))([true_usage_hela, pred_usage_hela])
loss_mcf7 = Lambda(sigmoid_kl_divergence, output_shape = (1,))([true_usage_mcf7, pred_usage_mcf7])
loss_cho = Lambda(sigmoid_kl_divergence, output_shape = (1,))([true_usage_cho, pred_usage_cho])
total_loss = Lambda(
lambda l: (l[0] + l[1] + l[2] + l[3]) / 4.,
output_shape = (1,)
)(
[
loss_hek,
loss_hela,
loss_mcf7,
loss_cho
]
)
#Must be the same order as defined in the data generators
loss_model = Model([
#Inputs
seq_input_1,
seq_input_2,
#Target SD Usages
true_usage_hek,
true_usage_hela,
true_usage_mcf7,
true_usage_cho
], total_loss)
```
<h2>Optimize the Loss Model</h2>
Here we use SGD to optimize the Loss Model (defined in the previous notebook cell).<br/>
Since our Loss Model indirectly depends on the predicted outputs of our CNN Splicing Model, SGD will optimize the weights of our CNN.<br/>
<br/>
Note that we can simply pass the data generators to Keras `fit_generator`, which streams batches from them in parallel worker processes.<br/>
```
#Optimize CNN with Keras using the Data Generators to stream genomic data features
opt = keras.optimizers.SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
loss_model.compile(loss=lambda true, pred: pred, optimizer=opt)
callbacks =[
EarlyStopping(monitor='val_loss', min_delta=0.001, patience=2, verbose=0, mode='auto')
]
loss_model.fit_generator(
generator=splicing_gens['train'],
validation_data=splicing_gens['valid'],
epochs=10,
use_multiprocessing=True,
workers=4,
callbacks=callbacks
)
#Save model
save_dir = os.path.join(os.getcwd(), 'saved_models')
if not os.path.isdir(save_dir):
os.makedirs(save_dir)
model_name = 'splicing_cnn_multicell.h5'
model_path = os.path.join(save_dir, model_name)
splicing_model.save(model_path)
print('Saved trained model at %s ' % model_path)
#Load model
save_dir = os.path.join(os.getcwd(), 'saved_models')
model_name = 'splicing_cnn_multicell.h5'
model_path = os.path.join(save_dir, model_name)
splicing_model = load_model(model_path)
```
<h2>Evaluate the Splicing CNN</h2>
Here we run our Splicing CNN on the Test set data generator (using Keras predict_generator).<br/>
We then compare our predictions of splice donor usage against the true RNA-Seq measurements.<br/>
```
#Evaluate predictions on test set
predictions = splicing_model.predict_generator(splicing_gens['test'], workers=4, use_multiprocessing=True)
pred_usage_hek, pred_usage_hela, pred_usage_mcf7, pred_usage_cho = [np.ravel(prediction) for prediction in predictions]
targets = zip(*[splicing_gens['test'][i][1] for i in range(len(splicing_gens['test']))])
true_usage_hek, true_usage_hela, true_usage_mcf7, true_usage_cho = [np.concatenate(list(target)) for target in targets]
cell_lines = [
('hek', (pred_usage_hek, true_usage_hek)),
('hela', (pred_usage_hela, true_usage_hela)),
('mcf7', (pred_usage_mcf7, true_usage_mcf7)),
('cho', (pred_usage_cho, true_usage_cho))
]
for cell_name, (y_pred, y_true) in cell_lines:  # each tuple above is (predicted, true)
r_val, p_val = pearsonr(y_pred, y_true)
print("Test set R^2 = " + str(round(r_val * r_val, 2)) + ", p = " + str(p_val))
#Plot test set scatter
f = plt.figure(figsize=(4, 4))
plt.scatter(y_pred, y_true, color='black', s=5, alpha=0.05)
plt.xticks([0.0, 0.25, 0.5, 0.75, 1.0], fontsize=14)
plt.yticks([0.0, 0.25, 0.5, 0.75, 1.0], fontsize=14)
plt.xlabel('Predicted SD1 Usage', fontsize=14)
plt.ylabel('True SD1 Usage', fontsize=14)
plt.title(str(cell_name), fontsize=16)
plt.xlim(0, 1)
plt.ylim(0, 1)
plt.tight_layout()
plt.show()
```
# The Robot World
A robot, much like you, perceives the world through its "senses." For example, self-driving cars use video, radar, and Lidar to observe the world around them. As cars gather data, they build up a 3D world of observations that tells the car where it is, where other objects (like trees, pedestrians, and other vehicles) are, and where it should be going!
In this section, we'll work first with a 1D and then a 2D representation of the world, both for simplicity and because two dimensions are often all you need to solve a given problem.
<img src="files/images/lidar.png" width="50%" height="50%">
These grid representations of the environment are known as **discrete** representations. Discrete just means a limited number of places a robot can be (ex. in one grid cell). That's because robots, and autonomous vehicles like self-driving cars, use maps to figure out where they are, and maps lend themselves to being divided up into grids and sections.
You'll see **continuous** probability distributions when locating objects that are moving around the robot. Continuous means that these objects can be anywhere around the robot and their movement is smooth.
So, let's start with the 1D case.
### Robot World 1-D
First, imagine you have a robot living in a 1-D world. You can think of a 1D world as a one-lane road.
<img src="images/road_1.png" width="50%" height="50%">
We can treat this road as an array, and break it up into grid cells for a robot to understand. In this case, the road is a 1D grid with 5 different spaces. The robot can only move forwards or backwards. If the robot falls off the grid, it will loop back around to the other side (this is known as a cyclic world).
<img src="images/numbered_grid.png" width="50%" height="50%">
### Uniform Distribution
The robot has a map so that it knows there are only 5 spaces in this 1D world. However, it hasn't sensed anything or moved. For a length of 5 cells (a list of 5 values), what is the probability distribution, `p`, that the robot is in any one of these locations?
Since the robot does not know where it is at first, the probability of being in any space is the same! This is a probability distribution, so the sum of all these probabilities must equal 1, which gives `1/5 spaces = 0.2` for each cell. A distribution in which all the probabilities are the same (and we have maximum uncertainty) is called a **uniform distribution**.
```
# importing resources
import matplotlib.pyplot as plt
import numpy as np
# uniform distribution for 5 grid cells
p = [0.2, 0.2, 0.2, 0.2, 0.2]
print(p)
```
I'll also include a helper function for visualizing this distribution. The below function, `display_map` will output a bar chart showing the probability that a robot is in each grid space. The y-axis has a range of 0 to 1 for the range of probabilities. For a uniform distribution, this will look like a flat line. You can choose the width of each bar to be <= 1 should you want to space these out.
```
def display_map(grid, bar_width=1):
if(len(grid) > 0):
x_labels = range(len(grid))
plt.bar(x_labels, height=grid, width=bar_width, color='b')
plt.xlabel('Grid Cell')
plt.ylabel('Probability')
plt.ylim(0, 1) # range of 0-1 for probability values
plt.title('Probability of the robot being at each cell in the grid')
plt.xticks(np.arange(min(x_labels), max(x_labels)+1, 1))
plt.show()
else:
print('Grid is empty')
# call function on grid, p, from before
display_map(p)
```
Now, what about if the world was 8 grid cells in length instead of 5?
### A function that takes in the number of spaces in the robot's world (in this case 8), and returns the initial probability distribution `p` that the robot is in each space.
This function should store the probabilities in a list. So in this example, there would be a list with 8 probabilities.
**Solution**
We know that all the probabilities in these locations should sum up to 1. So, one solution is to divide 1 by the number of grid cells and then append that value to a list whose length equals the passed-in number of grid cells.
```
# ex. initialize_robot(5) = [0.2, 0.2, 0.2, 0.2, 0.2]
def initialize_robot(grid_length):
''' Takes in a grid length and returns
a uniform distribution of location probabilities'''
p = []
# create a list that has the value of 1/grid_length for each cell
for i in range(grid_length):
p.append(1.0/grid_length)
return p
p = initialize_robot(8)
print(p)
display_map(p)
# Here is what this distribution looks like, with some spacing
# so you can clearly see the probability that a robot is in each grid cell
p = initialize_robot(8)
print(p)
display_map(p, bar_width=0.9)
```
Now that you know how a robot initially sees a simple 1D world, let's learn about how it can locate itself by moving around and sensing its environment!
## Dependencies
```
import json, glob
from tweet_utility_scripts import *
from tweet_utility_preprocess_roberta_scripts_aux import *
from transformers import TFRobertaModel, RobertaConfig
from tokenizers import ByteLevelBPETokenizer
from tensorflow.keras import layers
from tensorflow.keras.models import Model
```
# Load data
```
test = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/test.csv')
print('Test samples: %s' % len(test))
display(test.head())
```
# Model parameters
```
input_base_path = '/kaggle/input/250-tweet-train-5fold-roberta-onecycle-lr-025smoot/'
with open(input_base_path + 'config.json') as json_file:
config = json.load(json_file)
config
vocab_path = input_base_path + 'vocab.json'
merges_path = input_base_path + 'merges.txt'
base_path = '/kaggle/input/qa-transformers/roberta/'
# vocab_path = base_path + 'roberta-base-vocab.json'
# merges_path = base_path + 'roberta-base-merges.txt'
config['base_model_path'] = base_path + 'roberta-base-tf_model.h5'
config['config_path'] = base_path + 'roberta-base-config.json'
model_path_list = glob.glob(input_base_path + '*.h5')
model_path_list.sort()
print('Models to predict:')
print(*model_path_list, sep = '\n')
```
# Tokenizer
```
tokenizer = ByteLevelBPETokenizer(vocab_file=vocab_path, merges_file=merges_path,
lowercase=True, add_prefix_space=True)
```
# Pre process
```
test['text'].fillna('', inplace=True)
test['text'] = test['text'].apply(lambda x: x.lower())
test['text'] = test['text'].apply(lambda x: x.strip())
x_test, x_test_aux, x_test_aux_2 = get_data_test(test, tokenizer, config['MAX_LEN'], preprocess_fn=preprocess_roberta_test)
```
# Model
```
module_config = RobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False)
def model_fn(MAX_LEN):
input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name='base_model')
last_hidden_state, _ = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
logits = layers.Dense(2, use_bias=False, name='qa_outputs')(last_hidden_state)
start_logits, end_logits = tf.split(logits, 2, axis=-1)
start_logits = tf.squeeze(start_logits, axis=-1, name='y_start')
end_logits = tf.squeeze(end_logits, axis=-1, name='y_end')
model = Model(inputs=[input_ids, attention_mask], outputs=[start_logits, end_logits])
return model
```
# Make predictions
```
NUM_TEST_SAMPLES = len(test)
test_start_preds = np.zeros((NUM_TEST_SAMPLES, config['MAX_LEN']))
test_end_preds = np.zeros((NUM_TEST_SAMPLES, config['MAX_LEN']))
for model_path in model_path_list:
print(model_path)
model = model_fn(config['MAX_LEN'])
model.load_weights(model_path)
test_preds = model.predict(get_test_dataset(x_test, config['BATCH_SIZE']))
test_start_preds += test_preds[0]
test_end_preds += test_preds[1]
```
# Post process
```
test['start'] = test_start_preds.argmax(axis=-1)
test['end'] = test_end_preds.argmax(axis=-1)
test['selected_text'] = test.apply(lambda x: decode(x['start'], x['end'], x['text'], config['question_size'], tokenizer), axis=1)
# Post-process
test["selected_text"] = test.apply(lambda x: ' '.join([word for word in x['selected_text'].split() if word in x['text'].split()]), axis=1)
test['selected_text'] = test.apply(lambda x: x['text'] if (x['selected_text'] == '') else x['selected_text'], axis=1)
test['selected_text'].fillna(test['text'], inplace=True)
```
# Visualize predictions
```
test['text_len'] = test['text'].apply(lambda x : len(x))
test['label_len'] = test['selected_text'].apply(lambda x : len(x))
test['text_wordCnt'] = test['text'].apply(lambda x : len(x.split(' ')))
test['label_wordCnt'] = test['selected_text'].apply(lambda x : len(x.split(' ')))
test['text_tokenCnt'] = test['text'].apply(lambda x : len(tokenizer.encode(x).ids))
test['label_tokenCnt'] = test['selected_text'].apply(lambda x : len(tokenizer.encode(x).ids))
test['jaccard'] = test.apply(lambda x: jaccard(x['text'], x['selected_text']), axis=1)
display(test.head(10))
display(test.describe())
```
# Test set predictions
```
submission = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/sample_submission.csv')
submission['selected_text'] = test['selected_text']
submission.to_csv('submission.csv', index=False)
submission.head(10)
```
Importing packages
---
__We need to run the `install requirements.sh` script first.__
```
import json
import os
import numpy as np
import pandas as pd
import pickle
import uuid
import time
import tempfile
from googleapiclient import discovery
from googleapiclient import errors
from google.cloud import bigquery
from jinja2 import Template
from kfp.components import func_to_container_op
from typing import NamedTuple
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
!(gcloud config get-value core/project)
```
Preparing the dataset
--
```
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
DATASET_ID='covertype_dataset'
DATASET_LOCATION='US'
TABLE_ID='covertype'
DATA_SOURCE='gs://workshop-datasets/covertype/small/dataset.csv'
SCHEMA='Elevation:INTEGER,Aspect:INTEGER,Slope:INTEGER,Horizontal_Distance_To_Hydrology:\
INTEGER,Vertical_Distance_To_Hydrology:INTEGER,Horizontal_Distance_To_Roadways:INTEGER,Hillshade_9am:\
INTEGER,Hillshade_Noon:INTEGER,Hillshade_3pm:INTEGER,Horizontal_Distance_To_Fire_Points:INTEGER,\
Wilderness_Area:STRING,Soil_Type:STRING,Cover_Type:INTEGER'
```
__We create the BigQuery dataset and upload the Covertype csv data into a table__
__The pipeline ingests data from BigQuery. The cell below uploads the Covertype dataset to BigQuery__
```
!bq --location=$DATASET_LOCATION --project_id=$PROJECT_ID mk --dataset $DATASET_ID
!bq --project_id=$PROJECT_ID --dataset_id=$DATASET_ID load \
--source_format=CSV \
--skip_leading_rows=1 \
--replace \
$TABLE_ID \
$DATA_SOURCE \
$SCHEMA
```
Configuring environment settings
---
```
!gsutil ls
REGION = 'us-central1'
ARTIFACT_STORE = 'gs://mlops-youness'
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
DATA_ROOT='{}/data'.format(ARTIFACT_STORE)
JOB_DIR_ROOT='{}/jobs'.format(ARTIFACT_STORE)
TRAINING_FILE_PATH='{}/{}/{}'.format(DATA_ROOT, 'training', 'dataset.csv')
VALIDATION_FILE_PATH='{}/{}/{}'.format(DATA_ROOT, 'validation', 'dataset.csv')
```
Exploring the Covertype dataset
--
```
%%bigquery
SELECT *
FROM `covertype_dataset.covertype`
```
Creating a training split
--
__We run the query below to get repeatable sampling of the data in BigQuery__
```
!bq query \
-n 0 \
--destination_table covertype_dataset.training \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)'
```
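As an optional sanity check (not part of the original lab), you can group the table by the same `FARM_FINGERPRINT` bucket used above to confirm that each bucket holds roughly 10% of the rows, which is why buckets 1-4 give an approximately 40% training split:
```
%%bigquery
SELECT
  MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) AS bucket,
  COUNT(*) AS num_rows
FROM `covertype_dataset.covertype` AS cover
GROUP BY bucket
ORDER BY bucket
```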
__We export the BigQuery training table to GCS at $TRAINING_FILE_PATH__
```
!bq extract \
--destination_format CSV \
covertype_dataset.training \
$TRAINING_FILE_PATH
```
Creating a validation split
---
__We create a validation split that takes 10% of the data using the `bq` command and export this split into the BigQuery table `covertype_dataset.validation`__
```
!bq query \
-n 0 \
--destination_table covertype_dataset.validation \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (8)'
!bq extract \
--destination_format CSV \
covertype_dataset.validation \
$VALIDATION_FILE_PATH
TRAINING_FILE_PATH, VALIDATION_FILE_PATH
df_train = pd.read_csv(TRAINING_FILE_PATH)
df_validation = pd.read_csv(VALIDATION_FILE_PATH)
print(df_train.shape)
print(df_validation.shape)
```
The training application
--
The training pipeline preprocesses the data by standardizing all numeric features with `sklearn.preprocessing.StandardScaler` and encoding all categorical features with `sklearn.preprocessing.OneHotEncoder`. It uses a linear classifier trained with stochastic gradient descent (`SGDClassifier`) for modeling.
```
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log', tol=1e-3))
])
num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
```
__Run the pipeline locally.__
```
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
pipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200)
pipeline.fit(X_train, y_train)
accuracy = pipeline.score(X_validation, y_validation)
print(accuracy)
```
__Prepare the hyperparameter tuning application.__
```
TRAINING_APP_FOLDER = 'training_app'
os.makedirs(TRAINING_APP_FOLDER, exist_ok=True)
IMAGE_NAME='trainer_image'
IMAGE_TAG='latest'
IMAGE_URI='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, IMAGE_TAG)
IMAGE_URI
```
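The hyperparameter tuning job submitted below reads `hptuning_config.yaml` from the training app folder. That file is not shown in this notebook, so if it needs to be created from scratch, here is a minimal sketch of what it might look like, assuming the trainer reports a metric tagged `accuracy` and tunes `alpha` and `max_iter` (which matches the hyperparameters retrieved later); the trial counts and search ranges are illustrative only:
```
%%writefile training_app/hptuning_config.yaml
trainingInput:
  hyperparameters:
    goal: MAXIMIZE
    hyperparameterMetricTag: accuracy
    maxTrials: 4
    maxParallelTrials: 4
    params:
    - parameterName: max_iter
      type: DISCRETE
      discreteValues: [200, 500]
    - parameterName: alpha
      type: DOUBLE
      minValue: 0.00001
      maxValue: 0.001
      scaleType: UNIT_LINEAR_SCALE
```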
__Build the docker image__
```
!gcloud builds submit --tag $IMAGE_URI $TRAINING_APP_FOLDER
```
__Submit an AI Platform hyperparameter tuning job__
```
JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S"))
JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME)
SCALE_TIER = "BASIC"
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=$REGION \
--job-dir=$JOB_DIR \
--master-image-uri=$IMAGE_URI \
--scale-tier=$SCALE_TIER \
--config $TRAINING_APP_FOLDER/hptuning_config.yaml \
-- \
--training_dataset_path=$TRAINING_FILE_PATH \
--validation_dataset_path=$VALIDATION_FILE_PATH \
--hptune
JOB_NAME
!gcloud ai-platform jobs describe $JOB_NAME
```
__Retrieve HP-tuning results__
```
ml = discovery.build('ml', 'v1')
job_id = 'projects/{}/jobs/{}'.format(PROJECT_ID, JOB_NAME)
request = ml.projects().jobs().get(name=job_id)
try:
response = request.execute()
except errors.HttpError as err:
print(err)
except:
print("Unexpected error")
response
response['trainingOutput']['trials'][0]
```
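The `trials` list returned by AI Platform is ordered by the objective, so index 0 above is the best trial. As a quick sanity check, the sketch below prints every trial; it assumes the standard `trainingOutput` layout in which each finished trial carries a `finalMetric.objectiveValue`:
```
# Print trial id, tuned hyperparameters and final objective for each trial.
# Trials that were stopped early may have no finalMetric, hence the .get() guards.
for trial in response['trainingOutput']['trials']:
    objective = trial.get('finalMetric', {}).get('objectiveValue')
    print(trial['trialId'], trial['hyperparameters'], objective)
```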
__Retrain the model with the best hyperparameters__
```
alpha = response['trainingOutput']['trials'][0]['hyperparameters']['alpha']
max_iter = response['trainingOutput']['trials'][0]['hyperparameters']['max_iter']
JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S"))
JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME)
SCALE_TIER = "BASIC"
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=$REGION \
--job-dir=$JOB_DIR \
--master-image-uri=$IMAGE_URI \
--scale-tier=$SCALE_TIER \
-- \
--training_dataset_path=$TRAINING_FILE_PATH \
--validation_dataset_path=$VALIDATION_FILE_PATH \
--alpha=$alpha \
--max_iter=$max_iter \
--nohptune
!gcloud ai-platform jobs describe $JOB_NAME
!gsutil ls $JOB_DIR
```
Deploy the model to AI Platform Prediction
--
```
model_name = 'forest_cover_classifier'
labels = "task=classifier,domain=forestry"
!gcloud ai-platform models create $model_name \
--regions=$REGION \
--labels=$labels
```
__Create a model version__
```
model_version = 'v01'
!gcloud ai-platform versions create {model_version} \
--model={model_name} \
--origin=$JOB_DIR \
--runtime-version=1.15 \
--framework=scikit-learn \
--python-version=3.7\
--region global
```
__Serve predictions__: prepare the input file with JSON-formatted instances.
```
input_file = 'serving_instances.json'
with open(input_file, 'w') as f:
for index, row in X_validation.head().iterrows():
f.write(json.dumps(list(row.values)))
f.write('\n')
!cat $input_file
```
__Invoke the model__
```
!gcloud ai-platform predict \
--model $model_name \
--version $model_version \
--json-instances $input_file\
--region global
```
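The deployed version can also be invoked programmatically through the `googleapiclient` client that was already built for polling the training job; a minimal sketch (not the lab's own code), reusing the first validation rows as instances:
```
# Fully-qualified version name expected by the online prediction API.
name = 'projects/{}/models/{}/versions/{}'.format(PROJECT_ID, model_name, model_version)
instances = [list(row.values) for _, row in X_validation.head().iterrows()]
request = ml.projects().predict(name=name, body={'instances': instances})
try:
    print(request.execute())
except errors.HttpError as err:
    print(err)
```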
```
import networkx as nx
import itertools as it
# Import Beautiful Soup
from bs4 import BeautifulSoup
# Import requests to fetch the HTML of any page we want
import requests
# Grab the national Pokédex
webDex = requests.get('https://pokemondb.net/pokedex/national')
dexSoup = BeautifulSoup(webDex.content, 'html.parser')
apariciones = {}
pokedex = {}
G = nx.Graph()
teams = []
types = [['itype', 'grass'], ['itype', 'poison'], ['itype', 'fire'], ['itype', 'flying'], ['itype', 'water'],
['itype', 'bug'], ['itype', 'normal'], ['itype', 'electric'], ['itype', 'ground'], ['itype', 'fairy'],
['itype', 'fighting'], ['itype', 'psychic'], ['itype', 'rock'], ['itype', 'steel'],
['itype', 'ice'], ['itype', 'ghost'], ['itype', 'dark'], ['itype', 'dragon']]
for name in dexSoup.find_all('a'):
if name.get('class') == ['ent-name']:
pok = name.get_text()
pokedex[pok] = {'type1' : 'None', 'type2' : 'None'}
if name.get('class') in types:
if pokedex[pok]['type1'] == 'None':
pokedex[pok]['type1'] = name.get_text()
else:
pokedex[pok]['type2'] = name.get_text()
```
# 1vs1
```
t1= []
t1.append('Tapu Lele')
t1.append('Charizard')
t1.append('Pheromosa')
teams.append(t1)
t2= []
t2.append('Gyarados')
t2.append('Magearna')
t2.append('Charizard')
teams.append(t2)
t3= []
t3.append('Type: Null')
t3.append('Tyranitar')
t3.append('Mawile')
teams.append(t3)
t4= []
t4.append('Gyarados')
t4.append('Genesect')
t4.append('Zeraora')
teams.append(t4)
t5= []
t5.append('Altaria')
t5.append('Aegislash')
t5.append('Zeraora')
teams.append(t5)
t6= []
t6.append('Gyarados')
t6.append('Charizard')
t6.append('Magnezone')
teams.append(t6)
t7= []
t7.append('Dragonite')
t7.append('Charizard')
t7.append('Tapu Fini')
teams.append(t7)
for team in teams:
for pok in team:
if(pok in apariciones):
apariciones[pok] += 1
else:
apariciones[pok] = 1
G.add_node(pok, type1 = pokedex[pok]['type1'], type2 = pokedex[pok]['type2'])
edgeList = []
for team in teams:
edgeList.append([x for x in it.combinations(team, 2)])
for perm in edgeList :
for pokPair in perm :
if G.has_edge(pokPair[0], pokPair[1]):
G[pokPair[0]][pokPair[1]]['weight'] += 1
else:
G.add_edge(pokPair[0], pokPair[1], weight = 1)
G.edges.data('weight')
```
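Note that this counting cell is repeated verbatim after every tier below, and each copy loops over the full `teams` list, so teams added in earlier sections are counted again every time (`apariciones` and the edge weights end up inflated for the earliest teams). A small helper that only processes the teams it is given would avoid that; this is a sketch, left alongside the original cells rather than replacing them:
```
def add_teams_to_graph(new_teams, graph, counts, dex):
    """Add only the given teams to the co-occurrence graph and appearance counts."""
    for team in new_teams:
        for pok in team:
            counts[pok] = counts.get(pok, 0) + 1
            graph.add_node(pok, type1=dex[pok]['type1'], type2=dex[pok]['type2'])
        for a, b in it.combinations(team, 2):
            if graph.has_edge(a, b):
                graph[a][b]['weight'] += 1
            else:
                graph.add_edge(a, b, weight=1)

# Example usage for the 1vs1 teams defined above:
# add_teams_to_graph([t1, t2, t3, t4, t5, t6, t7], G, apariciones, pokedex)
```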
# LC
```
t7= []
t7.append('Surskit')
t7.append('Bunnelby')
t7.append('Pawniard')
t7.append('Gastly')
t7.append('Magnemite')
t7.append('Abra')
teams.append(t7)
t8= []
t8.append('Timburr')
t8.append('Onix')
t8.append('Vullaby')
t8.append('Pawniard')
t8.append('Mareanie')
t8.append('Abra')
teams.append(t8)
t9= []
t9.append('Mienfoo')
t9.append('Diglett')
t9.append('Vullaby')
t9.append('Clamperl')
t9.append('Foongus')
t9.append('Onix')
teams.append(t9)
for team in teams:
for pok in team:
if(pok in apariciones):
apariciones[pok] += 1
else:
apariciones[pok] = 1
G.add_node(pok, type1 = pokedex[pok]['type1'], type2 = pokedex[pok]['type2'])
edgeList = []
for team in teams:
edgeList.append([x for x in it.combinations(team, 2)])
for perm in edgeList :
for pokPair in perm :
if G.has_edge(pokPair[0], pokPair[1]):
G[pokPair[0]][pokPair[1]]['weight'] += 1
else:
G.add_edge(pokPair[0], pokPair[1], weight = 1)
G.edges.data('weight')
```
# OU
```
t10= []
t10.append('Venusaur')
t10.append('Chansey')
t10.append('Zapdos')
t10.append('Rhydon')
t10.append('Mew')
t10.append('Melmetal')
teams.append(t10)
t11= []
t11.append('Charizard')
t11.append('Mew')
t11.append('Eevee')
t11.append('Sandslash')
t11.append('Muk')
t11.append('Melmetal')
teams.append(t11)
t12= []
t12.append('Gyarados')
t12.append('Dragonite')
t12.append('Mew')
t12.append('Clefable')
t12.append('Melmetal')
t12.append('Snorlax')
teams.append(t12)
t13= []
t13.append('Dragonite')
t13.append('Poliwrath')
t13.append('Beedrill')
t13.append('Mew')
t13.append('Rhydon')
t13.append('Melmetal')
teams.append(t13)
for team in teams:
for pok in team:
if(pok in apariciones):
apariciones[pok] += 1
else:
apariciones[pok] = 1
G.add_node(pok, type1 = pokedex[pok]['type1'], type2 = pokedex[pok]['type2'])
edgeList = []
for team in teams:
edgeList.append([x for x in it.combinations(team, 2)])
for perm in edgeList :
for pokPair in perm :
if G.has_edge(pokPair[0], pokPair[1]):
G[pokPair[0]][pokPair[1]]['weight'] += 1
else:
G.add_edge(pokPair[0], pokPair[1], weight = 1)
G.edges.data('weight')
```
# Monotype
```
t14= []
t14.append('Dragonite')
t14.append('Thundurus')
t14.append('Landorus')
t14.append('Tornadus')
t14.append('Aerodactyl')
t14.append('Celesteela')
teams.append(t14)
t15= []
t15.append('Steelix')
t15.append('Gastrodon')
t15.append('Excadrill')
t15.append('Garchomp')
t15.append('Landorus')
t15.append('Nidoking')
teams.append(t15)
t16= []
t16.append('Diancie')
t16.append('Tapu Koko')
t16.append('Klefki')
t16.append('Azumarill')
t16.append('Tapu Bulu')
t16.append('Mimikyu')
teams.append(t16)
t17= []
t17.append('Sableye')
t17.append('Mandibuzz')
t17.append('Tyranitar')
t17.append('Greninja')
t17.append('Muk')
t17.append('Hydreigon')
teams.append(t17)
t18= []
t18.append('Toxapex')
t18.append('Greninja')
t18.append('Keldeo')
t18.append('Swampert')
t18.append('Rotom')
t18.append('Sharpedo')
teams.append(t18)
t19= []
t19.append('Venusaur')
t19.append('Nidoking')
t19.append('Salazzle')
t19.append('Toxapex')
t19.append('Muk')
t19.append('Crobat')
teams.append(t19)
t20= []
t20.append('Celebi')
t20.append('Decidueye')
t20.append('Whimsicott')
t20.append('Venusaur')
t20.append('Breloom')
t20.append('Ferrothorn')
teams.append(t20)
t21= []
t21.append('Jirachi')
t21.append('Heatran')
t21.append('Ferrothorn')
t21.append('Excadrill')
t21.append('Celesteela')
t21.append('Scizor')
teams.append(t21)
t22= []
t22.append('Metagross')
t22.append('Gallade')
t22.append('Latios')
t22.append('Latias')
t22.append('Victini')
t22.append('Alakazam')
teams.append(t22)
t23= []
t23.append('Dragonite')
t23.append('Latios')
t23.append('Kommo-o')
t23.append('Garchomp')
t23.append('Altaria')
t23.append('Kyurem')
teams.append(t23)
t24= []
t24.append('Venusaur')
t24.append('Toxapex')
t24.append('Crobat')
t24.append('Muk')
t24.append('Nihilego')
t24.append('Nidoking')
teams.append(t24)
t25= []
t25.append('Zeraora')
t25.append('Raichu')
t25.append('Rotom')
t25.append('Zapdos')
t25.append('Tapu Koko')
t25.append('Golem')
teams.append(t25)
t26= []
t26.append('Krookodile')
t26.append('Mandibuzz')
t26.append('Tyranitar')
t26.append('Hydreigon')
t26.append('Greninja')
t26.append('Muk')
teams.append(t26)
t27= []
t27.append('Slowbro')
t27.append('Celebi')
t27.append('Mew')
t27.append('Latias')
t27.append('Metagross')
t27.append('Victini')
teams.append(t27)
t28= []
t28.append('Excadrill')
t28.append('Hippowdon')
t28.append('Garchomp')
t28.append('Landorus')
t28.append('Gastrodon')
t28.append('Nidoking')
teams.append(t28)
t29= []
t29.append('Raichu')
t29.append('Tapu Koko')
t29.append('Rotom')
t29.append('Golem')
t29.append('Thundurus')
t29.append('Zeraora')
teams.append(t29)
t30= []
t30.append('Suicune')
t30.append('Toxapex')
t30.append('Swampert')
t30.append('Greninja')
t30.append('Keldeo')
t30.append('Mantine')
teams.append(t30)
t31= []
t31.append('Diancie')
t31.append('Tapu Koko')
t31.append('Tapu Bulu')
t31.append('Mimikyu')
t31.append('Clefable')
t31.append('Azumarill')
teams.append(t31)
t32= []
t32.append('Garchomp')
t32.append('Latios')
t32.append('Kyurem')
t32.append('Dragalge')
t32.append('Kommo-o')
t32.append('Dragonite')
teams.append(t32)
t33= []
t33.append('Volcarona')
t33.append('Pinsir')
t33.append('Heracross')
t33.append('Scizor')
t33.append('Galvantula')
t33.append('Armaldo')
teams.append(t33)
t34= []
t34.append('Lopunny')
t34.append('Chansey')
t34.append('Porygon2')
t34.append('Staraptor')
t34.append('Diggersby')
t34.append('Ditto')
teams.append(t34)
t35= []
t35.append('Aerodactyl')
t35.append('Charizard')
t35.append('Dragonite')
t35.append('Landorus')
t35.append('Thundurus')
t35.append('Celesteela')
teams.append(t35)
t36= []
t36.append('Celesteela')
t36.append('Excadrill')
t36.append('Scizor')
t36.append('Skarmory')
t36.append('Heatran')
t36.append('Ferrothorn')
teams.append(t36)
t37= []
t37.append('Gengar')
t37.append('Mimikyu')
t37.append('Blacephalon')
t37.append('Marowak')
t37.append('Decidueye')
t37.append('Sableye')
teams.append(t37)
t38= []
t38.append('Scizor')
t38.append('Volcarona')
t38.append('Heracross')
t38.append('Armaldo')
t38.append('Araquanid')
t38.append('Yanmega')
teams.append(t38)
t39= []
t39.append('Diancie')
t39.append('Terrakion')
t39.append('Tyranitar')
t39.append('Golem')
t39.append('Shuckle')
t39.append('Cradily')
teams.append(t39)
t40= []
t40.append('Charizard')
t40.append('Volcarona')
t40.append('Torkoal')
t40.append('Infernape')
t40.append('Heatran')
t40.append('Blacephalon')
teams.append(t40)
t41= []
t41.append('Meloetta')
t41.append('Diggersby')
t41.append('Staraptor')
t41.append('Ditto')
t41.append('Chansey')
t41.append('Porygon2')
teams.append(t41)
t42= []
t42.append('Gallade')
t42.append('Cobalion')
t42.append('Keldeo')
t42.append('Terrakion')
t42.append('Kommo-o')
t42.append('Heracross')
teams.append(t42)
t43= []
t43.append('Weavile')
t43.append('Ninetales')
t43.append('Mamoswine')
t43.append('Sandslash')
t43.append('Lapras')
t43.append('Kyurem')
teams.append(t43)
for team in teams:
for pok in team:
if(pok in apariciones):
apariciones[pok] += 1
else:
apariciones[pok] = 1
G.add_node(pok, type1 = pokedex[pok]['type1'], type2 = pokedex[pok]['type2'])
edgeList = []
for team in teams:
edgeList.append([x for x in it.combinations(team, 2)])
for perm in edgeList :
for pokPair in perm :
if G.has_edge(pokPair[0], pokPair[1]):
G[pokPair[0]][pokPair[1]]['weight'] += 1
else:
G.add_edge(pokPair[0], pokPair[1], weight = 1)
G.edges.data('weight')
```
# NU
```
t44= []
t44.append('Vikavolt')
t44.append('Incineroar')
t44.append('Vaporeon')
t44.append('Whimsicott')
t44.append('Steelix')
t44.append('Passimian')
teams.append(t44)
t45= []
t45.append('Comfey')
t45.append('Xatu')
t45.append('Incineroar')
t45.append('Sceptile')
t45.append('Steelix')
t45.append('Passimian')
teams.append(t45)
t46= []
t46.append('Braviary')
t46.append('Comfey')
t46.append('Rhydon')
t46.append('Weezing')
t46.append('Dhelmise')
t46.append('Togedemaru')
teams.append(t46)
t47= []
t47.append('Steelix')
t47.append('Gallade')
t47.append('Aerodactyl')
t47.append('Slowking')
t47.append('Dhelmise')
t47.append('Incineroar')
teams.append(t47)
t48= []
t48.append('Seismitoad')
t48.append('Sigilyph')
t48.append('Togedemaru')
t48.append('Incineroar')
t48.append('Dhelmise')
t48.append('Comfey')
teams.append(t48)
t49= []
t49.append('Togedemaru')
t49.append('Pangoro')
t49.append('Slowking')
t49.append('Rhydon')
t49.append('Whimsicott')
t49.append('Garbodor')
teams.append(t49)
t50= []
t50.append('Vaporeon')
t50.append('Silvally')
t50.append('Whimsicott')
t50.append('Rhydon')
t50.append('Incineroar')
t50.append('Rotom')
teams.append(t50)
for team in teams:
for pok in team:
if(pok in apariciones):
apariciones[pok] += 1
else:
apariciones[pok] = 1
G.add_node(pok, type1 = pokedex[pok]['type1'], type2 = pokedex[pok]['type2'])
edgeList = []
for team in teams:
edgeList.append([x for x in it.combinations(team, 2)])
for perm in edgeList :
for pokPair in perm :
if G.has_edge(pokPair[0], pokPair[1]):
G[pokPair[0]][pokPair[1]]['weight'] += 1
else:
G.add_edge(pokPair[0], pokPair[1], weight = 1)
G.edges.data('weight')
```
# OU
```
t51= []
t51.append('Serperior')
t51.append('Tapu Fini')
t51.append('Scizor')
t51.append('Heatran')
t51.append('Landorus')
t51.append('Kyurem')
teams.append(t51)
t52= []
t52.append('Clefable')
t52.append('Ferrothorn')
t52.append('Heatran')
t52.append('Landorus')
t52.append('Latias')
t52.append('Rotom')
teams.append(t52)
t53= []
t53.append('Swampert')
t53.append('Tornadus')
t53.append('Greninja')
t53.append('Ferrothorn')
t53.append('Manaphy')
t53.append('Pelipper')
teams.append(t53)
t54= []
t54.append('Skarmory')
t54.append('Lopunny')
t54.append('Reuniclus')
t54.append('Chansey')
t54.append('Gliscor')
t54.append('Toxapex')
teams.append(t54)
t55= []
t55.append('Medicham')
t55.append('Tornadus')
t55.append('Landorus')
t55.append('Greninja')
t55.append('Rotom')
t55.append('Magearna')
teams.append(t55)
t56= []
t56.append('Gliscor')
t56.append('Lopunny')
t56.append('Magearna')
t56.append('Tangrowth')
t56.append('Tornadus')
t56.append('Toxapex')
teams.append(t56)
for team in teams:
for pok in team:
if(pok in apariciones):
apariciones[pok] += 1
else:
apariciones[pok] = 1
G.add_node(pok, type1 = pokedex[pok]['type1'], type2 = pokedex[pok]['type2'])
edgeList = []
for team in teams:
edgeList.append([x for x in it.combinations(team, 2)])
for perm in edgeList :
for pokPair in perm :
if G.has_edge(pokPair[0], pokPair[1]):
G[pokPair[0]][pokPair[1]]['weight'] += 1
else:
G.add_edge(pokPair[0], pokPair[1], weight = 1)
G.edges.data('weight')
```
# PU
```
t58= []
t58.append('Carracosta')
t58.append('Victreebel')
t58.append('Mudsdale')
t58.append('Dodrio')
t58.append('Hitmonchan')
t58.append('Rotom')
teams.append(t58)
t59= []
t59.append('Abomasnow')
t59.append('Sandslash')
t59.append('Mudsdale')
t59.append('Silvally')
t59.append('Oricorio')
t59.append('Persian')
teams.append(t59)
t60= []
t60.append('Torterra')
t60.append('Poliwrath')
t60.append('Spiritomb')
t60.append('Silvally')
t60.append('Articuno')
t60.append('Audino')
teams.append(t60)
t61= []
t61.append('Eelektross')
t61.append('Mudsdale')
t61.append('Musharna')
t61.append('Cryogonal')
t61.append('Simisear')
t61.append('Gurdurr')
teams.append(t61)
t62= []
t62.append('Leafeon')
t62.append('Ferroseed')
t62.append('Dodrio')
t62.append('Lanturn')
t62.append('Silvally')
t62.append('Musharna')
teams.append(t62)
for team in teams:
for pok in team:
if(pok in apariciones):
apariciones[pok] += 1
else:
apariciones[pok] = 1
G.add_node(pok, type1 = pokedex[pok]['type1'], type2 = pokedex[pok]['type2'])
edgeList = []
for team in teams:
edgeList.append([x for x in it.combinations(team, 2)])
for perm in edgeList :
for pokPair in perm :
if G.has_edge(pokPair[0], pokPair[1]):
G[pokPair[0]][pokPair[1]]['weight'] += 1
else:
G.add_edge(pokPair[0], pokPair[1], weight = 1)
G.edges.data('weight')
```
# RU
```
t63= []
t63.append('Registeel')
t63.append('Mandibuzz')
t63.append('Vaporeon')
t63.append('Necrozma')
t63.append('Ditto')
t63.append('Nidoqueen')
teams.append(t63)
t64= []
t64.append('Donphan')
t64.append('Metagross')
t64.append('Virizion')
t64.append('Barbaracle')
t64.append('Noivern')
t64.append('Goodra')
teams.append(t64)
t65= []
t65.append('Xatu')
t65.append('Registeel')
t65.append('Virizion')
t65.append('Noivern')
t65.append('Blastoise')
t65.append('Florges')
teams.append(t65)
t66= []
t66.append('Raikou')
t66.append('Uxie')
t66.append('Machamp')
t66.append('Mismagius')
t66.append('Abomasnow')
t66.append('Forretress')
teams.append(t66)
t67= []
t67.append('Cresselia')
t67.append('Registeel')
t67.append('Tyrantrum')
t67.append('Blastoise')
t67.append('Roserade')
t67.append('Machamp')
teams.append(t67)
t68= []
t68.append('Zygarde')
t68.append('Bronzong')
t68.append('Mantine')
t68.append('Necrozma')
t68.append('Abomasnow')
t68.append('Florges')
teams.append(t68)
t69= []
t69.append('Araquanid')
t69.append('Toxicroak')
t69.append('Gardevoir')
t69.append('Honchkrow')
t69.append('Nidoqueen')
t69.append('Blastoise')
teams.append(t69)
for team in teams:
for pok in team:
if(pok in apariciones):
apariciones[pok] += 1
else:
apariciones[pok] = 1
G.add_node(pok, type1 = pokedex[pok]['type1'], type2 = pokedex[pok]['type2'])
edgeList = []
for team in teams:
edgeList.append([x for x in it.combinations(team, 2)])
for perm in edgeList :
for pokPair in perm :
if G.has_edge(pokPair[0], pokPair[1]):
G[pokPair[0]][pokPair[1]]['weight'] += 1
else:
G.add_edge(pokPair[0], pokPair[1], weight = 1)
G.edges.data('weight')
```
# Ubers
```
t70= []
t70.append('Salamence')
t70.append('Kyogre')
t70.append('Xerneas')
t70.append('Arceus')
t70.append('Groudon')
t70.append('Necrozma')
teams.append(t70)
t71= []
t71.append('Kyogre')
t71.append('Groudon')
t71.append('Arceus')
t71.append('Yveltal')
t71.append('Salamence')
t71.append('Necrozma')
teams.append(t71)
t72= []
t72.append('Mewtwo')
t72.append('Groudon')
t72.append('Arceus')
t72.append('Yveltal')
t72.append('Necrozma')
t72.append('Marshadow')
teams.append(t72)
t73= []
t73.append('Scizor')
t73.append('Groudon')
t73.append('Kyogre')
t73.append('Giratina')
t73.append('Arceus')
t73.append('Yveltal')
teams.append(t73)
t74= []
t74.append('Mewtwo')
t74.append('Magearna')
t74.append('Arceus')
t74.append('Toxapex')
t74.append('Giratina')
t74.append('Groudon')
teams.append(t74)
t75= []
t75.append('Gengar')
t75.append('Groudon')
t75.append('Arceus')
t75.append('Yveltal')
t75.append('Zygarde')
t75.append('Necrozma')
teams.append(t75)
t76= []
t76.append('Mewtwo')
t76.append('Ho-oh')
t76.append('Groudon')
t76.append('Arceus')
t76.append('Ferrothorn')
t76.append('Zygarde')
teams.append(t76)
for team in teams:
for pok in team:
if(pok in apariciones):
apariciones[pok] += 1
else:
apariciones[pok] = 1
G.add_node(pok, type1 = pokedex[pok]['type1'], type2 = pokedex[pok]['type2'])
edgeList = []
for team in teams:
edgeList.append([x for x in it.combinations(team, 2)])
for perm in edgeList :
for pokPair in perm :
if G.has_edge(pokPair[0], pokPair[1]):
G[pokPair[0]][pokPair[1]]['weight'] += 1
else:
G.add_edge(pokPair[0], pokPair[1], weight = 1)
G.edges.data('weight')
```
# UU
```
t77= []
t77.append('Infernape')
t77.append('Klefki')
t77.append('Reuniclus')
t77.append('Hydreigon')
t77.append('Aerodactyl')
t77.append('Tentacruel')
teams.append(t77)
t78= []
t78.append('Sharpedo')
t78.append('Latias')
t78.append('Froslass')
t78.append('Cobalion')
t78.append('Mimikyu')
t78.append('Mamoswine')
teams.append(t78)
t79= []
t79.append('Xatu')
t79.append('Krookodile')
t79.append('Latias')
t79.append('Linoone')
t79.append('Scizor')
t79.append('Feraligatr')
teams.append(t79)
t80= []
t80.append('Tentacruel')
t80.append('Aerodactyl')
t80.append('Celebi')
t80.append('Hippowdon')
t80.append('Cobalion')
t80.append('Doublade')
teams.append(t80)
t81= []
t81.append('Alomomola')
t81.append('Quagsire')
t81.append('Nihilego')
t81.append('Blissey')
t81.append('Gligar')
t81.append('Altaria')
teams.append(t81)
t82= []
t82.append('Altaria')
t82.append('Magneton')
t82.append('Alomomola')
t82.append('Blissey')
t82.append('Gligar')
t82.append('Scizor')
teams.append(t82)
t83= []
t83.append('Rotom')
t83.append('Altaria')
t83.append('Tentacruel')
t83.append('Krookodile')
t83.append('Necrozma')
t83.append('Scizor')
teams.append(t83)
for team in teams:
for pok in team:
if(pok in apariciones):
apariciones[pok] += 1
else:
apariciones[pok] = 1
G.add_node(pok, type1 = pokedex[pok]['type1'], type2 = pokedex[pok]['type2'])
edgeList = []
for team in teams:
edgeList.append([x for x in it.combinations(team, 2)])
for perm in edgeList :
for pokPair in perm :
if G.has_edge(pokPair[0], pokPair[1]):
G[pokPair[0]][pokPair[1]]['weight'] += 1
else:
G.add_edge(pokPair[0], pokPair[1], weight = 1)
G.edges.data('weight')
for pok in G.nodes :
G.nodes[pok]['teams'] = apariciones[pok]
nx.write_gexf(G, "Gen7Graph.gexf", version="1.2draft")
```
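As a final sanity check on the exported graph, a few quick `networkx` summaries can be printed; a short sketch:
```
# Most-used Pokémon, heaviest co-occurrence edges, and weighted degree as a rough centrality.
print(sorted(apariciones.items(), key=lambda kv: kv[1], reverse=True)[:10])
print(sorted(G.edges(data='weight'), key=lambda e: e[2], reverse=True)[:10])
print(sorted(G.degree(weight='weight'), key=lambda kv: kv[1], reverse=True)[:10])
```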
# Notebook Logistic Regression
Logistic regression is one of the most basic machine learning algorithms, yet it is very practical in application. Here we will learn to apply logistic regression using the Breast Cancer dataset: the task is to classify whether a tumor is benign or malignant. Because this is an introductory exercise, we skip data preprocessing for now; in practice, preprocessing is usually needed because real-world data tends to be incomplete or to contain anomalies.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import random
from sklearn import datasets
np.set_printoptions(precision=3, suppress=False)
breast_cancer = datasets.load_breast_cancer(as_frame=True)
df = breast_cancer.data
df["is_malignant"] = 0
df.loc[breast_cancer.target ==1, "is_malignant"] = 1
def split_data(X, Y, test_size=0.2):
    data_size = len(X)
    n_test = int(data_size * test_size)
    test_index = random.sample(range(data_size), n_test)
train_index = [i for i in range(data_size) if i not in test_index]
X_train = X[train_index]
Y_train = Y[train_index]
X_test = X[test_index]
Y_test = Y[test_index]
return X_train, Y_train, X_test, Y_test
```
### Logistic Regression: Linear Regression passed through a sigmoid
Suppose the linear regression used to predict a dependent variable has the form $$ z = \theta_{0} + \theta_{1} x_1 + \theta_{2} x_2 + ... + \theta_{n} x_n $$
Then the logistic regression used to classify between two classes is
$$ \ln\left(\frac{P}{1-P}\right) = \theta_{0} + \theta_{1} x_1 + \theta_{2} x_2 + ... + \theta_{n} x_n $$
which can be rewritten as $$ P = \frac{e^{\theta_{0} + \theta_{1} x_1 + \theta_{2} x_2 + ... + \theta_{n} x_n}}{1+e^{\theta_{0} + \theta_{1} x_1 + \theta_{2} x_2 + ... + \theta_{n} x_n}} $$ or $$P = \frac{1}{1+e^{-(\theta_{0} + \theta_{1} x_1 + \theta_{2} x_2 + ... + \theta_{n} x_n)}} $$
$$P = \frac{1}{1+e^{-z}} $$
Note: in some references (and in parts of this notebook) P is written as $h(z)$, i.e. a function that predicts a value from the input z.
To separate the two classes we can write $$ \text{class} = \begin{cases} 1, & P \ge 0.5 \\ 0, & P < 0.5 \end{cases} $$
Since z ranges from $-\infty$ to $\infty$, P stays between 0 and 1. When z = 0, P equals 0.5, which marks the decision boundary between the two classes.
### Determining the logistic regression equation
As with linear regression, we can approach the logistic regression problem with similar methods. We could look for a non-iterative solution by solving the least-squares system $X\hat{\theta}=Y$, a fundamental linear algebra problem, or solve it iteratively with gradient descent, written as $$ \theta_j = \theta_j - \alpha \nabla $$
where $\theta$ are the coefficients we are looking for, $\alpha$ is the learning rate, and $\nabla$ is the gradient of the cost function, i.e.
$$ \nabla = \frac{\partial}{\partial \theta_j}J(\theta)$$
### Cost function
In general, the cost function of a logistic regression model is the average of the loss over all training samples
$$J(\theta) = -\frac{1}{m} \sum_{i=1}^m y^{(i)}\log (h(x(\theta)^{(i)})) + (1-y^{(i)})\log (1-h(x(\theta)^{(i)})) $$
* $m$ is the number of training examples
* $y^{(i)}$ is the true label
* $h(x^{(i)})$ is the value predicted by the logistic regression model
For a single example, the loss function is
$$ Loss = -1 \times \left( y^{(i)}\log (h(x(\theta)^{(i)})) + (1-y^{(i)})\log (1-h(x(\theta)^{(i)})) \right)$$
* Because h lies in the range $0\le h \le 1$, $\log(h)$ is never positive; that is why the whole expression is multiplied by -1.
* If the model predicts 1 ($h(x(\theta)) = 1$) and the label 'y' is also 1, the loss is 0.
* Likewise, if the model predicts 0 ($h(x(\theta)) = 0$) and the label 'y' is also 0, the loss is 0.
* Conversely, when the prediction $h(x(\theta))$ approaches 0 while the label 'y' is 1 (or vice versa), the loss grows without bound, so the cost function becomes large whenever the model makes a confident wrong prediction.
Overall, the gradient of this cost function is
$$ \frac{\partial}{\partial \theta_j}J(\theta) = \frac{1}{m} \sum_{i=1}^m \left(h(x(\theta)^{(i)})-y^{(i)}\right)x_j^{(i)}$$
We can use this expression in the gradient descent update to fit the logistic regression model.
```
def sigmoid(z):
return 1 / (1 + np.exp(-z))
```
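A quick check that the implementation matches the discussion above: a large negative z maps close to 0, z = 0 maps exactly to the 0.5 decision boundary, and a large positive z maps close to 1.
```
# sigmoid is vectorized thanks to np.exp, so several z values can be evaluated at once.
print(sigmoid(np.array([-5.0, 0.0, 5.0])))
```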
* The number of iterations `iterations` is the number of times that you'll use the entire training set.
* For each iteration, you'll calculate the cost function using all training examples (there are 'm' training examples), and for all features.
* Instead of updating a single weight $\theta_i$ at a time, we can update all the weights in the column vector:
$$\mathbf{\theta} = \begin{pmatrix}
\theta_0
\\
\theta_1
\\
\theta_2
\\
\vdots
\\
\theta_n
\end{pmatrix}$$
* $\mathbf{\theta}$ has dimensions (n+1, 1), where 'n' is the number of features, and there is one more element for the bias term $\theta_0$ (note that the corresponding feature value $\mathbf{x_0}$ is 1). In the implementation below, no explicit bias column is added, so $\mathbf{\theta}$ simply has one entry per feature.
* The 'logits', 'z', are calculated by multiplying the feature matrix 'x' with the weight vector 'theta'. $z = \mathbf{x}\mathbf{\theta}$
* $\mathbf{x}$ has dimensions (m, n+1)
* $\mathbf{\theta}$: has dimensions (n+1, 1)
* $\mathbf{z}$: has dimensions (m, 1)
* The prediction 'h' is calculated by applying the sigmoid to each element in 'z': $h(z) = sigmoid(z)$, and has dimensions (m,1).
* The cost function $J$ is calculated by taking the dot product of the vectors 'y' and 'log(h)'. Since both 'y' and 'h' are column vectors (m,1), transpose the vector to the left, so that matrix multiplication of a row vector with column vector performs the dot product.
$$J = \frac{-1}{m} \times \left(\mathbf{y}^T \cdot log(\mathbf{h}) + \mathbf{(1-y)}^T \cdot log(\mathbf{1-h}) \right)$$
* The update of theta is also vectorized. Because the dimensions of $\mathbf{x}$ are (m, n+1), and both $\mathbf{h}$ and $\mathbf{y}$ are (m, 1), we need to transpose the $\mathbf{x}$ and place it on the left in order to perform matrix multiplication, which then yields the (n+1, 1) answer we need:
$$\mathbf{\theta} = \mathbf{\theta} - \frac{\alpha}{m} \times \left( \mathbf{x}^T \cdot \left( \mathbf{h-y} \right) \right)$$
```
def gradientDescent(x,y,theta,alpha,iterations):
"""
x: input data
y: output data
theta: initial theta
alpha: learning rate
iterations: number of iterations
J: cost function
"""
m = len(y)
J_history = np.zeros(iterations)
for i in range(iterations):
z = np.dot(x,theta)
h = sigmoid(z)
J_history[i] = (-1/m) * (np.dot(y.T,np.log(h)) + np.dot((1-y).T,np.log(1-h)))
print("Iteration: ",i," Cost: %.4f"%float(J_history[i]))
grad = (1/m) * np.dot(x.T,(h-y))
theta = theta - alpha * grad
print("Final cost: %.4f"%float(J_history[-1]))
visualize_cost(J_history)
return theta
def visualize_cost(J_history):
x = np.arange(len(J_history))
plt.plot(x,J_history)
plt.xlabel("Iterations")
plt.ylabel("Cost")
plt.show()
X = df.drop(["is_malignant"],axis=1).values
Y = df["is_malignant"].values
X_train, Y_train, X_test, Y_test = split_data(X, Y, test_size=0.2)
theta = np.zeros(X.shape[1])
theta = gradientDescent(X_train, Y_train,theta,0.00001,1000)
def predict_class(x, theta):
return 1 if sigmoid(np.dot(x, theta)) >= 0.5 else 0
def test_logistic_regression(x, y, theta):
correct = 0
for i in range(len(x)):
if predict_class(x[i], theta) == y[i]:
correct += 1
return correct / len(x)
test_logistic_regression(X_test, Y_test, theta)
```
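Beyond the raw accuracy above, it can help to inspect the confusion matrix and to compare against scikit-learn's own `LogisticRegression`; a short sketch (the solver and regularization differ from the hand-rolled gradient descent, so the scores will not match exactly):
```
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Confusion matrix of the manually trained model on the test split.
y_pred = np.array([predict_class(x, theta) for x in X_test])
print(confusion_matrix(Y_test, y_pred))

# Reference model; lbfgs may warn about convergence on these unscaled features.
ref = LogisticRegression(max_iter=5000)
ref.fit(X_train, Y_train)
print(ref.score(X_test, Y_test))
```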
### BIDS conversion: creation of desired folder structure, file move and cleanup.
```
import os
import json
import datetime
bids_path = "/home/connectomics/Pulpit/mounted_data/BONNA_decide_net/data/main_fmri_study"
nii_path = "/home/connectomics/Pulpit/mounted_data/BONNA_decide_net/data/main_fmri_study_nii"
n_subjects = 33
t1w_name = "Ax_3D_T1FSPGR_BRAVO_new"
task_rew_name = "fmri_prl_rew"
task_pun_name = "fmri_prl_pun"
t1w_bids_suffix = "T1w"
task_rew_bids_suffix = "task-prlrew_bold"
task_pun_bids_suffix = "task-prlpun_bold"
```
List all available neuroimaging files. After running this cell, inspect `img_files` and manually resolve any remaining conflicts: missing files, duplicates, or naming problems. Missing files and duplicates are detected by the script and printed to the console.
```
# Store all filenames
img_files = {}
for subject_i in range(1, n_subjects+1):
subject_id = f"m{subject_i:02}"
subject_files = [file for file in os.listdir(nii_path) if subject_id in file]
if subject_files:
t1_files = [file for file in subject_files if t1w_name in file]
fmri_rew_files = [file for file in subject_files if task_rew_name in file]
fmri_pun_files = [file for file in subject_files if task_pun_name in file]
# Create dictionary entry
img_files[subject_id] = {
't1w_files': t1_files,
'fmri_rew_files': fmri_rew_files,
'fmri_pun_files': fmri_pun_files,
'all_files': subject_files
}
for subject_key in img_files:
if len(img_files[subject_key]['t1w_files']) != 2:
print(f"Incorrect number of t1w files for subject {subject_key}")
if len(img_files[subject_key]['fmri_rew_files']) != 2:
print(f"Incorrect number of fmri files (reward) for subject {subject_key}")
if len(img_files[subject_key]['fmri_pun_files']) != 2:
print(f"Incorrect number of fmri files (punishment) for subject {subject_key}")
subjects = list(img_files.keys())
```
Create basic folder structure
```
def mkdir_protected(path: str):
'''Creates directory if it doesn't exist'''
if not os.path.exists(path):
os.mkdir(path)
else:
print(f'{path} already exists')
for folder in ['derivatives', 'code', 'sourcedata']:
mkdir_protected(os.path.join(bids_path, folder))
for subject in subjects:
subject_path = os.path.join(bids_path, f'sub-{subject}')
img_files[subject]['subject_path'] = subject_path
mkdir_protected(subject_path)
for folder in ['func', 'anat']:
subject_inner_folder = os.path.join(subject_path, folder)
mkdir_protected(subject_inner_folder)
img_files[subject][f"subject_path_{folder}"] = subject_inner_folder
```
Create additional entries in the `img_files` dictionary:
- `t1w_bids_name`: BIDS-compliant name for the T1w file
- ...
```
for subject in subjects:
# Bids names
img_files[subject]["t1w_bids_name"] = f"sub-{subject}_{t1w_bids_suffix}"
img_files[subject]["task_rew_bids_name"] = f"sub-{subject}_{task_rew_bids_suffix}"
img_files[subject]["task_pun_bids_name"] = f"sub-{subject}_{task_pun_bids_suffix}"
# Original names
img_files[subject]["t1w_orig_name"] = f"{t1w_name}_{subject}"
img_files[subject]["task_rew_orig_name"] = f"{task_rew_name}_{subject}"
img_files[subject]["task_pun_orig_name"] = f"{task_pun_name}_{subject}"
```
Move and rename imaging sequences.
```
for subject in subjects:
# T1w file
for file in img_files[subject]["t1w_files"]:
extension = os.path.splitext(file)[1]
if extension == ".gz": extension = ".nii.gz"
oldname = os.path.join(
nii_path,
file
)
newname = os.path.join(
img_files[subject]["subject_path_anat"],
img_files[subject]["t1w_bids_name"] + extension
)
try: os.rename(oldname, newname)
except: print(f"file {oldname} not found!")
# EPI files
for condition in ["rew", "pun"]:
for file in img_files[subject][f"fmri_{condition}_files"]:
extension = os.path.splitext(file)[1]
if extension == ".gz": extension = ".nii.gz"
oldname = os.path.join(
nii_path,
file
)
newname = os.path.join(
img_files[subject]["subject_path_func"],
img_files[subject][f"task_{condition}_bids_name"] + extension
)
try: os.rename(oldname, newname)
except: print(f"file {oldname} not found!")
```
Save metadata
```
def add_meta(path: str, data: dict):
    '''If the file at path does not exist, save data (dict as JSON, str as plain text).'''
if not os.path.exists(path):
if type(data) not in [dict, str]:
raise TypeError('data should be either dict or str')
with open(path, 'w') as f:
if type(data) is dict:
json.dump(data, f, indent= 4)
if type(data) is str:
f.write(data)
else:
print(f'{path} already exists')
add_meta(os.path.join(bids_path, "derivatives", "img_files"), img_files)
```
### Add metadata files
- dataset_description.json
- README
- CHANGES
- task-prlrew_bold.json
- task-prlpun_bold.json
```
dataset_description = {
'Name': 'DecideNet Main fMRI Study',
'BIDSVersion': '1.2.0',
'Authors': ['Kamil Bonna', 'Karolina Finc', 'Jaromir Patyk']
}
dataset_description_path = os.path.join(bids_path, 'dataset_description.json')
add_meta(dataset_description_path, dataset_description)
readme = '''# Project
This BIDS folder contains data from main fMRI study in DecideNet project.'''
readme_path = os.path.join(bids_path, 'README')
add_meta(readme_path, readme)
changes = f'''
1.0.0 {str(datetime.date.today())}
- initial release
'''
changes_path = os.path.join(bids_path, 'CHANGES')
add_meta(changes_path, changes)
for condition in ["rew", "pun"]:
if condition == "rew": condition_full = "reward"
elif condition == "pun": condition_full = "punishment"
task_dict = {
"TaskName": f"Probabilistic Reversal Learning ({condition_full} condition)",
"RepetitionTime": 2,
"EchoTime": 0.03,
"InstitutionName": "Nicolaus Copernicus University in Torun"
}
task_meta_path = os.path.join(bids_path, f"task-prl{condition}_bold.json")
add_meta(task_meta_path, task_dict)
```
### Fix values in .PhaseEncodingDirection field in json sidecar for functional files
- value "j?" should be changed to "j-"
```
for subject in subjects:
for file in os.listdir(os.path.join(bids_path, 'sub-'+subject, 'func')):
if '.json' in file:
fname_full = os.path.join(bids_path, 'sub-'+subject, 'func', file)
with open(fname_full, 'r') as f:
data = json.load(f)
os.remove(fname_full)
if data['PhaseEncodingDirection'] == 'j?':
data['PhaseEncodingDirection'] = 'j-' # Apply fix
with open(fname_full, 'w') as f:
json.dump(data, f, indent= 4)
```
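As a quick sanity check (an illustrative sketch reusing the variables defined above, not part of the original conversion script), the functional sidecars can be re-read to confirm that no file still carries the invalid `j?` value:
```
for subject in subjects:
    func_dir = os.path.join(bids_path, f"sub-{subject}", "func")
    for file in os.listdir(func_dir):
        if file.endswith(".json"):
            with open(os.path.join(func_dir, file)) as f:
                data = json.load(f)
            if data.get("PhaseEncodingDirection") == "j?":
                print(f"Still invalid: {os.path.join(func_dir, file)}")
```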
```
import pandas as pd
import subprocess
def sample(x, n):
"""
Get n number of rows as a sample
"""
import random
return x.iloc[random.sample(list(x.index), n)] #list(range(n))]
def generate_table(nrows, IDstart=1, P1start=1):
"""
Generate table which contain [ID,P1] columns
- nrows: number of rows per table
- IDstart: starting sequence for ID column
- P1start: starting number for P1 column
return generated table `table1`
"""
subjID = range(IDstart, nrows+IDstart)
P1 = ["V_" + str(i) for i in range(P1start, P1start + nrows)]
data = {"ID": subjID, "P1": P1}
table1 = pd.DataFrame(data)
return table1
def update_joinable_rows(table1, table2, nrows, percentage):
"""
Sample rows for percentage and update the sampled rows
- table1:
- table2:
- nrows: number of rows in each table
- percentage: ratio of rows in table1 that are involved in join condition to table2
return: updated table2
"""
prows = nrows * percentage
tbl1_sample = sample(table1, int(prows))
tbl2_sample = sample(table2, int(prows))
for i, j in zip(list(tbl1_sample.index), list(tbl2_sample.index)):
table2.loc[j, 'P1'] = table1.loc[i, 'P1']
return table2
```
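For illustration (a hypothetical call, not part of the original notebook), a small table produced by `generate_table` looks like this:
```
print(generate_table(5))
#    ID   P1
# 0   1  V_1
# 1   2  V_2
# 2   3  V_3
# 3   4  V_4
# 4   5  V_5
```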
# 1. Relation Type: One-to-One
```
# Number of rows per table
nrows = [1000, 3000, 10000, 50000]  # , 100000]
# percentage of rows that will have 1-1 matching
percentages = [0.25 , 0.5, 1.0]
for nrow in nrows:
subprocess.check_call('mkdir -p ../data/relation_type/one-one/'+ str(int(nrow/1000)) + 'k_rows', shell=True)
table1 = generate_table(nrow)
table1.to_csv('../data/relation_type/one-one/'+ str(int(nrow/1000)) + 'k_rows/table1.csv', index=False )
for rp in percentages:
table2 = generate_table(nrow, P1start=nrow+1)
table2 = update_joinable_rows(table1, table2, nrow, rp)
table2.to_csv('../data/relation_type/one-one/'+ str(int(nrow/1000)) + 'k_rows/' + \
'table2_' + str(int(100*rp)) + '_percent.csv', index=False )
```
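One way to sanity-check a generated pair is to merge the two tables on `P1` and compare the number of matching rows with the requested percentage. This is only a sketch; the file names follow the pattern used in the loop above:
```
t1 = pd.read_csv('../data/relation_type/one-one/1k_rows/table1.csv')
t2 = pd.read_csv('../data/relation_type/one-one/1k_rows/table2_50_percent.csv')

matched = pd.merge(t1, t2, on='P1')   # rows satisfying the join condition
print(len(matched) / len(t1))         # expected to be close to 0.5
```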
# 2. Relation Type: One-to-N
```
def update_joinable_relation_rows(table1,
nrows,
selecivity_percentage,
N,
relation_from_percentage=-1,
relation_to_percentage=-1):
"""
Sample rows for percentage and update the sampled rows
return: updated table 1, table2
"""
prows = nrows * selecivity_percentage
tbl1_sample = sample(table1, int(prows))
rpercentage = nrows
if relation_to_percentage > 0:
rpercentage = nrows * relation_to_percentage
NumOfP1s = rpercentage / N
# print(int(NumOfP1s+0.7), len(tbl1_sample))
tbl1_sample = tbl1_sample.reset_index(drop=True)
P1ForJoin = sample(tbl1_sample, int(NumOfP1s+0.7))
values = list(set([row[1]['P1'] for row in P1ForJoin.iterrows()]))
values = values * N
if len(values) > nrows:
values = values[:nrows]
table2 = generate_table(nrows, P1start=nrows+1)
tbl2_sample = sample(table2, len(values))
# print(len(values), len(list(set((tbl2_sample.index)))))
for i, j in zip(values, list(tbl2_sample.index)):
table2.loc[j, 'P1'] = i
return table1, table2
# Number of rows per table
nrows = [1000, 3000 , 10000, 50000]#, 100000]
# value of N (relation size)
N = [5, 10, 15]
# 50 % selectivity - percentage of rows, overall, involved in the join from table1 to table2
p = 0.5
# percentage of rows that are involved in 1-N relation
percentages = [0.25 , 0.5]
for nrow in nrows:
subprocess.check_call('mkdir -p ../data/relation_type/one-N/'+ str(int(nrow/1000)) + 'k_rows', shell=True)
table1 = generate_table(nrow)
table1.to_csv('../data/relation_type/one-N/'+ str(int(nrow/1000)) + 'k_rows/table1.csv', index=False )
for rp in percentages:
for n in N:
table1, table2 = update_joinable_relation_rows(table1, nrow, p, n, -1, rp)
table2.to_csv('../data/relation_type/one-N/'+ str(int(nrow/1000)) + 'k_rows/table2_' + \
str(int(100*p)) + "_" + str(n) + "_" + str(int(100*rp)) + '_percent.csv', index=False )
```
# 3. Relation Type: N-to-One
```
def update_joinable_relation_rows(table1,
nrows,
selecivity_percentage,
N,
relation_from_percentage=-1,
relation_to_percentage=-1):
"""
Sample rows for percentage and update the sampled rows
return: updated table 1, table2
"""
prows = nrows * selecivity_percentage
tbl1_sample = sample(table1, int(prows))
rpercentage = nrows
if relation_to_percentage > 0:
rpercentage = nrows * relation_to_percentage
NumOfP1s = rpercentage / N
# print(NumOfP1s, prows, rpercentage)
tbl1_sample = tbl1_sample.reset_index(drop=True)
P1ForJoin = sample(tbl1_sample, int(NumOfP1s+0.7))
values = list(set([row[1]['P1'] for row in P1ForJoin.iterrows()]))
values = values * N
if len(values) > nrows:
values = values[:nrows]
table2 = generate_table(nrows, P1start=nrows+1)
tbl2_sample = sample(table2, len(values))
# print(len(values), len(list(set((tbl2_sample.index)))))
for i, j in zip(values, list(tbl2_sample.index)):
table2.loc[j, 'P1'] = i
return table1, table2
# Number of rows per table
nrows = [1000, 3000, 10000, 50000] #, 100000]
# value of N (relation size)
N = [5, 10, 15]
# 50 % selectivity - percentage of rows, overall, involved in the join from table1 to table2
p = 0.5
# percentage of rows that are involved in 1-N relation
percentages = [0.25 , 0.5]
for nrow in nrows:
subprocess.check_call('mkdir -p ../data/relation_type/N-one/'+ str(int(nrow/1000)) + 'k_rows', shell=True)
table1 = generate_table(nrow)
table1.to_csv('../data/relation_type/N-one/'+ str(int(nrow/1000)) + 'k_rows/table2.csv', index=False )
for rp in percentages:
for n in N:
table1, table2 = update_joinable_relation_rows(table1, nrow, p, n, -1, rp)
table2.to_csv('../data/relation_type/N-one/'+ str(int(nrow/1000)) + 'k_rows/table1_' + \
str(int(100*p)) + "_" + str(n) + "_" + str(int(100*rp)) + '_percent.csv', index=False )
```
# 4. Relation Type: N-to-M (Many-to-Many)
```
def update_joinable_n_m_relation_rows(table1,
table2,
nrows=1000,
selecivity_percentage=0.5,
N = 3,
M = 5,
relation_from_percentage=0.1,
relation_to_percentage=0.1):
"""
Sample rows for percentage and update the sampled rows
return: updated table 1, table2
"""
prows = nrows * selecivity_percentage
# Sample selecivity_percentage of rows in the first table
tbl1_sample = sample(table1, int(prows))
    # Sample selecivity_percentage of rows from the second table to make them joinable (update P1 to the same values as the first table)
tbl2_sample = sample(table2, int(prows))
rpercentagen = nrows
rpercentagem = nrows
if relation_from_percentage > 0:
rpercentagen = nrows * relation_from_percentage
if relation_to_percentage > 0:
rpercentagem = nrows * relation_to_percentage
NumOfP1sN = rpercentagen / N
NumOfP1sM = rpercentagem / M
tbl1_sample_v = tbl1_sample.reset_index(drop=True)
# Sample relation_to_percentage of rows
P1ForJoinM = sample(tbl1_sample_v, int(NumOfP1sM+0.7))
# Extract unique values of P1 from table1, only those sampled for percentage to table 2
values = list(set([row[1]['P1'] for row in P1ForJoinM.iterrows()]))
# Select values that are not in rows that participate in the many relations
# restvalues = tbl1_sample[~tbl1_sample.isin(values)]
# Repeat them M times
values = values * M
if len(values) > nrows:
values = values[:nrows]
# Sample as much as the repeated values of P1 from table 2
tbl2_sample = sample(table2, len(values))
# Update values of P1 based on the samples for repeated values on table 2
for i, j in zip(values, list(tbl2_sample.index)):
table2.loc[j, 'P1'] = i
tbl2_sample_v = tbl2_sample.reset_index(drop=True)
# Sample relation_from_percentage of rows
P1ForJoinN = sample(tbl2_sample_v, int(NumOfP1sN+0.7))
# Extract unique values of P1 from table2, only those sampled from percentage to table 1
values = list(set([row[1]['P1'] for row in P1ForJoinN.iterrows()]))
# Repeat them N times (relation is N-M)
values = values * N
if len(values) > nrows:
values = values[:nrows]
# Sample as much as the repeated values of P1 from table 1
tbl1_sample = sample(table1, len(values))
# Update values of P1 based on the samples for repeated values on table 1
for i, j in zip(values, list(tbl1_sample.index)):
table1.loc[j, 'P1'] = i
return table1, table2
# 50 %
p = 0.5
# value of N (relation size)
N = [3, 5, 10]
# value of M (relation size)
M = [3, 5, 10]
# percentage of rows that are involved in relation from table2 to table1
NP = [0.1, 0.25, 0.5]
# percentage of rows that are involved in relation from table1 to table2
MP = [0.1, 0.25, 0.5]
# number of rows per table
nrows = [1000, 3000, 10000, 50000] #, 100000]
for nrow in nrows:
subprocess.check_call('mkdir -p ../data/relation_type/N-M/'+ str(int(nrow/1000)) + 'k_rows', shell=True)
for np, mp in zip(NP, MP):
for n in N:
for m in M:
table1 = generate_table(nrow)
table2 = generate_table(nrow, P1start=nrow+1)
table1, table2 = update_joinable_n_m_relation_rows(table1, table2, nrow, p, n, m, np, mp)
table1.to_csv('../data/relation_type/N-M/'+ str(int(nrow/1000)) + 'k_rows/table1_' + \
str(int(100*p)) + "_" + str(n)+ "_" + str(m) + "_" + str(int(100*np)) + '_percent.csv', index=False )
table2.to_csv('../data/relation_type/N-M/'+ str(int(nrow/1000)) + 'k_rows/table2_' + \
str(int(100*p)) + "_" + str(n)+ "_" + str(m) + "_" + str(int(100*mp)) + '_percent.csv', index=False )
```
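To see the N-to-M multiplicity in one of the generated pairs, the join values can be counted on each side (a hypothetical check; the file names follow the pattern used in the loop above):
```
t1 = pd.read_csv('../data/relation_type/N-M/1k_rows/table1_50_3_3_10_percent.csv')
t2 = pd.read_csv('../data/relation_type/N-M/1k_rows/table2_50_3_3_10_percent.csv')

# Join values repeated N times in table1 and M times in table2 reflect the N-M relation
print(t1['P1'].value_counts().head())
print(t2['P1'].value_counts().head())
```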
```
import pandas as pd
from pandas.plotting import scatter_matrix
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
data_wca = pd.read_csv("Wholesale Customer.csv")
data_wca.head()
data_wca.columns
```
### 1.1 Use methods of descriptive statistics to summarize data. Which Region and which Channel seems to spend more? Which Region and which Channel seems to spend less?
```
data_wca.shape
data_wca.info()
data_wca.describe().T
pd.isnull(data_wca).sum()
sns.distplot(data_wca['Fresh']);
sns.distplot(data_wca['Milk']);
sns.distplot(data_wca['Grocery']);
sns.distplot(data_wca['Frozen']);
sns.distplot(data_wca['Detergents_Paper']);
sns.distplot(data_wca['Delicatessen']);
sns.boxplot(x="Buyer/Spender", y="Channel", data=data_wca);
sns.boxplot(x="Buyer/Spender", y="Region", data=data_wca);
sns.boxplot(x="Region", y="Fresh", data=data_wca);
sns.boxplot(x="Region", y="Milk", data=data_wca);
sns.boxplot(x="Region", y="Grocery", data=data_wca);
sns.boxplot(x="Region", y="Frozen", data=data_wca);
sns.boxplot(x="Region", y="Detergents_Paper", data=data_wca);
sns.boxplot(x="Region", y="Delicatessen", data=data_wca);
data_wca.hist(figsize = (17,12));
```
### 1.1 Which Region and which Channel seems to spend more? Which Region and which Channel seems to spend less?
```
data_wca.groupby(['Region','Channel'])['Region'].sum().sort_values(ascending=False).head(1)
data_wca.groupby(['Region','Channel'])['Channel'].sum().sort_values(ascending=False).head(1)
data_wca.groupby(['Region','Channel'])['Region'].sum().sort_values(ascending=False).tail(1)
data_wca.groupby(['Region','Channel'])['Channel'].sum().sort_values(ascending=False).tail(1)
```
### 1.2 There are 6 different varieties of items are considered. Do all varieties show similar behaviour across Region and Channel?
```
data_wca.columns
import seaborn as sns
corr = data_wca.corr()
mask = np.zeros_like(corr)
mask[np.triu_indices_from(mask, 1)] = True
plt.figure(figsize=(17,12))
with sns.axes_style("white"):
ax = sns.heatmap(corr, mask=mask, square=True, annot=True, cmap='RdBu', fmt='+.3f');
```
### 1.3 based on a descriptive measure of variability, which item shows the most inconsistent behaviour? Which items show the least inconsistent behaviour?
```
pd.plotting.scatter_matrix(data_wca, alpha = 0.3, figsize = (17,12), diagonal = 'kde');
sns.pairplot(data_wca)
```
### 1.4 Are there any outliers in the data?
```
plt.figure(figsize=(15,12))
sns.boxplot(data=data_wca, orient="h");
print('Below items hold outliers \n1.Fresh\n2.Milk\n3.Grocery\n4.Frozen\n5.Detergents_Paper\n6.Delicatessen')
```
### 1.5 based on this report, what are the recommendations?
```
print('Below are recommendations \n1.Detergents_Paper is highly correlated with Grocery\n2.Grocery is highly correlated with Detergents_Paper')
```
## Problem 2
```
data = pd.read_csv('Survey-1.csv')
data.shape
```
#### 2.1.1. Gender and Major
```
data_211 = pd.crosstab(data['Gender'],data['Major'],margins = False)
print(data_211)
```
#### 2.1.2. Gender and Grad Intention
```
data_212 = pd.crosstab(data['Gender'],data['Grad Intention'],margins = False)
print(data_212)
```
#### 2.1.3. Gender and Employment
```
data_213 = pd.crosstab(data['Gender'],data['Employment'],margins = False)
print(data_213)
```
#### 2.1.4. Gender and Computer
```
data_214 = pd.crosstab(data['Gender'],data['Computer'],margins = False)
print(data_214)
```
#### 2.2.1. What is the probability that a randomly selected CMSU student will be male?
```
prob221 = round(data.groupby('Gender').count()['ID']['Male']/data.shape[0]*100,2)
print('Probability that a randomly selected CMSU student will be male is ',prob221,'%')
```
#### 2.2.2. What is the probability that a randomly selected CMSU student will be female?
```
prob222 = round(data.groupby('Gender').count()['ID']['Female']/data.shape[0]*100,2)
print('Probability that a randomly selected CMSU student will be Female is ',prob222,'%')
```
#### 2.3.1. Find the conditional probability of different majors among the male students in CMSU.
```
# P(Major | Male): counts of each major among male students divided by the total number of male students
major_given_male = data[data['Gender'] == 'Male']['Major'].value_counts()
print('The conditional probability of different majors among the male students in CMSU (%):')
print(round(major_given_male / major_given_male.sum() * 100, 2))
```
#### 2.3.2 Find the conditional probability of different majors among the female students of CMSU.
```
# P(Major | Female): counts of each major among female students divided by the total number of female students
major_given_female = data[data['Gender'] == 'Female']['Major'].value_counts()
print('The conditional probability of different majors among the female students in CMSU (%):')
print(round(major_given_female / major_given_female.sum() * 100, 2))
```
#### 2.4.1. Find the probability That a randomly chosen student is a male and intends to graduate.
```
total_students = data.shape[0]
males_intending_graduation = data.groupby(['Gender','Grad Intention'])['Gender'].count()['Male']['Yes']
# Joint probability: P(Male and intends to graduate) = count(Male, Yes) / total number of students
prob241 = males_intending_graduation/total_students*100
print("Probability that a randomly chosen student is a male and intends to graduate is", round(prob241, 2))
```
#### 2.4.2 Find the probability that a randomly selected student is a female and does NOT have a laptop.
```
total_females = data.groupby('Gender').count()['ID']['Female']
total_females_withlaptop = data.groupby(['Gender','Computer'])['Gender'].count()['Female']['Laptop']
# Joint probability: P(Female and no laptop) = count(Female without a laptop) / total number of students
prob242 = round((total_females - total_females_withlaptop)/data.shape[0]*100, 2)
print("The probability that a randomly selected student is a female and does NOT have a laptop is", prob242)
```
#### 2.5.1. Find the probability that a randomly chosen student is either a male or has full-time employment?
```
total_students = data.shape[0]
total_males = data.groupby('Gender').count()['ID']['Male']
total_fulltime = data[data['Employment'] == 'Full-Time'].shape[0]
total_males_fulltime = data.groupby(['Gender','Employment'])['Gender'].count()['Male']['Full-Time']
# P(Male or Full-Time) = P(Male) + P(Full-Time) - P(Male and Full-Time)
prob251 = (total_males + total_fulltime - total_males_fulltime)/total_students*100
print('The probability that a randomly chosen student is either a male or has full-time employment is', round(prob251, 2))
```
#### 2.5.2. Find the conditional probability that given a female student is randomly chosen, she is majoring in international business or management.
```
total_females = data.groupby('Gender').count()['ID']['Female']
total_females_interBiz = data.groupby(['Gender','Major'])['Gender'].count()['Female']['International Business']
total_females_manage = data.groupby(['Gender','Major'])['Gender'].count()['Female']['Management']
print('The conditional probability that given a female student is randomly chosen, she is majoring in international business or management is', round((total_females_interBiz + total_females_manage)/total_females*100, 2))
```
#### 2.6.1. If a student is chosen randomly, what is the probability that his/her GPA is less than 3?
```
total_student = data.shape[0]
tatal_students_less3gpa = data[data['GPA'] < 3]['ID'].count()
print('If a student is chosen randomly, the probability that his/her GPA is less than 3 is',round((tatal_students_less3gpa / total_student),2)*100)
```
#### 2.6.2. Find the conditional probability that a randomly selected male earns 50 or more. Find the conditional probability that a randomly selected female earns 50 or more.
```
total_males = data.groupby('Gender').count()['ID']['Male']
tatal_students_salarygt50 = data[(data['Salary'] > 49.99) & (data['Gender'] == 'Male')].count()[0]
print("The conditional probability that a randomly selected male earns 50 or more is",round((tatal_students_salarygt50/total_males)*100,2))
total_males = data.groupby('Gender').count()['ID']['Female']
tatal_students_salarygt50 = data[(data['Salary'] > 49.99) & (data['Gender'] == 'Female')].count()[0]
print("The conditional probability that a randomly selected female earns 50 or more is",round((tatal_students_salarygt50/total_males)*100,2))
```
#### 2.8. Note that there are four numerical (continuous) variables in the data set, GPA, Salary, Spending, and Text Messages. For each of them comment whether they follow a normal distribution. Write a note summarizing your conclusions.
```
data[['GPA','Salary','Spending','Text Messages']].hist(figsize=(15,12));
```
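The histograms above give only a visual impression. A quantitative complement (an illustrative sketch, not part of the original submission) is the Shapiro-Wilk test on each variable, where a small p-value indicates a departure from normality:
```
from scipy import stats

for col in ['GPA', 'Salary', 'Spending', 'Text Messages']:
    stat, p = stats.shapiro(data[col].dropna())
    print(f"{col}: W={stat:.3f}, p-value={p:.4f}")
```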
## Problem 3
```
import scipy.stats as sp
mydata = pd.read_csv('A & B shingles-1.csv')
```
#### 3.1 Do you think there is evidence that means moisture contents in both types of shingles are within the permissible limits? State your conclusions clearly showing all steps
### Step 1: Define null and alternative hypotheses
#### H0 = the mean moisture content is equal to (or greater than) 0.35 pound per 100 square feet
#### H1 = the mean moisture content is less than 0.35 pound per 100 square feet
### Step 2: Decide the significance level
#### Here we select 𝛼 = 0.05 and the population standard deviation is not known
### Step 3: Identify the test statistic
* We have two samples and we do not know the population standard deviation.
* Sample sizes for both samples are not same. n1=36 n1=31
* We use two sample t-test.
### Step 4: Calculate the p - value and test statistic
```
t_statistic, p_value = sp.ttest_rel(mydata['A'], mydata['B'], nan_policy='omit')
print('tstat %1.3f' % t_statistic)
print("p-value for one-tail:", p_value/2)
```
### Step 5: Decide to reject or accept null hypothesis
```
# p_value < 0.05 => alternative hypothesis:
# they don't have the same mean at the 5% significance level
print ("Paired two-sample t-test p-value=", p_value/2)
alpha_level = 0.05
if (p_value/2) < alpha_level:
print('We have enough evidence to reject the null hypothesis in favour of alternative hypothesis')
else:
print('We do not have enough evidence to reject the null hypothesis in favour of alternative hypothesis')
print('We need to accept alternate hypothesis "mean moisture content is less than 0.35 pound per 100 square feet" ')
```
#### 3.2 Do you think that the population mean for shingles A and B are equal? Form the hypothesis and conduct the test of the hypothesis. What assumption do you need to check before the test for equality of means is performed?
### Step 1: Define null and alternative hypotheses
* $H_0$: $\mu_{A}$ - $\mu_{B}$ = 0
* $H_A$: $\mu_{A}$ - $\mu_{B}$ $\neq$ 0
### Step 2: Decide the significance level
#### Here we select 𝛼 = 0.05 and the population standard deviation is not known
### Step 3: Identify the test statistic
* We have two samples and we do not know the population standard deviation.
* Sample sizes for both samples are not same. n1=36 n1=31
* We use Two-Sample t-Test for Equal Means
### Step 4: Calculate the p - value and test statistic
```
t_statistic, p_value = sp.ttest_ind(mydata['A'], mydata['B'], nan_policy='omit')
print('tstat',t_statistic)
print('P Value',p_value)
```
### Step 5: Decide to reject or accept null hypothesis
```
alpha_level = 0.05
if (p_value/2) < alpha_level:
print('We have enough evidence to reject the null hypothesis in favour of alternative hypothesis')
else:
print('We do not have enough evidence to reject the null hypothesis in favour of alternative hypothesis')
```
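Question 3.2 also asks which assumption should be checked before testing equality of means: for the pooled two-sample t-test it is equality of the two population variances (together with approximate normality and independent samples). A sketch of how that could be checked with Levene's test (illustrative, not part of the original notebook):
```
# Levene's test: H0 = the two population variances are equal
stat, p = sp.levene(mydata['A'].dropna(), mydata['B'].dropna())
print('Levene statistic:', round(stat, 3), ' p-value:', round(p, 4))
# A large p-value means equal variances cannot be rejected, supporting the pooled t-test above
```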
**Data Analysis**: Linear Regression using normal equation vs. using gradient descent
```
import pandas as pd
import numpy as np
np.set_printoptions(suppress=True)
from numpy.linalg import svd, norm
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
import seaborn as sns
pd.set_option("display.max_rows", None, "display.max_columns", None)
pd.set_option('display.float_format', lambda x: '%.2f' % x)
ames = pd.read_csv('https://raw.githubusercontent.com/catabia/cs391_spring21/main/AmesHousing.csv')
ames = ames[['Lot Frontage','Lot Area', 'Overall Cond', 'Year Built', 'Year Remod/Add', 'Gr Liv Area', 'Full Bath',
'Half Bath', 'Bedroom AbvGr', 'TotRms AbvGrd', 'BsmtFin SF 1', 'Bsmt Unf SF', 'Bsmt Full Bath',
'Bsmt Half Bath', '1st Flr SF', '2nd Flr SF', 'Fireplaces', 'Garage Cars', 'Wood Deck SF',
'Open Porch SF','SalePrice']]
ames.columns = ['lot_frontage','lot_area', 'overall_cond', 'year_built', 'year_remod', 'sqft', 'full_bath',
'half_bath', 'bedroom', 'total_rooms', 'bsmt_fin_sqft', 'bsmt_unfin_sqft', 'bsmt_full_bath',
'bsmt_half_bath', 'first_fl_sqft', 'second_fl_sqft', 'fireplaces', 'garage_cars', 'wood_deck_sqft',
'open_porch_sqft','sale_price']
ames=ames.dropna()
ames.describe()
import time
start = time.time()
print('Hello world!')
end = time.time()
print('It took', end-start, ' seconds to print "Hello world!" once. Pretty speedy, eh?')
# Answer:
from sklearn.preprocessing import MinMaxScaler
import numpy as np
#normalize
scaler = MinMaxScaler(feature_range=(0,1))
X = scaler.fit_transform(ames.drop(['sale_price'], axis=1))
y = scaler.fit_transform(ames.sale_price.values.reshape(ames.shape[0], 1))
# add a column of 1s so we can estimate the intercept
#X = np.c_[np.ones(X.shape[0]), X]
#print(X)
# Answer:
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
start = time.time()
model = LinearRegression()
model.fit(X, y)
model.coef_
model.intercept_
model.predict(X)
MSE =np.square(np.subtract(y, model.predict(X))).mean()
print(MSE)
end = time.time()
print('It took', end-start)
# Answer:
from sklearn.linear_model import SGDRegressor
start = time.time()
# add a column of 1s so we can estimate the intercept
#print(X)
# Stochastic Gradient Descent for linear regression using SciKit Learn
from sklearn.linear_model import SGDRegressor
model2 = SGDRegressor(alpha = 0.0001, max_iter = 100000)
y=y.reshape(y.shape[0],)
model2.fit(X, y)
mse = ((model2.predict(X) - y)**2).mean()
print(mse)
end = time.time()
print('It took', end-start)
# Answer:
from sklearn.preprocessing import PolynomialFeatures
poly_feat_fit = PolynomialFeatures(degree=3)
Xpoly = poly_feat_fit.fit_transform(X)
print("the number of columns are after degree 3", Xpoly.shape[1])
print("the past numebr of columns before making it degree 3", X.shape[1])
# Answer:
start = time.time()
pol_reg = LinearRegression()
pol_reg.fit(Xpoly, y)
pol_reg.predict(Xpoly)
MSE =np.square(np.subtract(y, pol_reg.predict(Xpoly))).mean()
print(MSE)
end = time.time()
print('the normal equation took', end-start)
# Stochastic Gradient Descent for linear regression using SciKit Learn
from sklearn.linear_model import SGDRegressor
start = time.time()
model1 = SGDRegressor(alpha = 0.0001, max_iter = 100000)
y=y.reshape(y.shape[0],)
model1.fit(Xpoly, y)
mse = ((model1.predict(Xpoly) - y)**2).mean()
print(mse)
end = time.time()
print('the gradient descent one took', end-start)
```
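The notebook title mentions the normal equation, but `LinearRegression` solves the least-squares problem internally rather than spelling the formula out. A minimal sketch of the closed-form solution itself, assuming the scaled `X` and `y` from above (the intercept column is added here explicitly):
```
# Closed-form least squares: theta = (X^T X)^(-1) X^T y
Xb = np.c_[np.ones(X.shape[0]), X]             # prepend intercept column
theta = np.linalg.solve(Xb.T @ Xb, Xb.T @ y)   # solve instead of explicitly inverting X^T X
predictions = Xb @ theta
print(np.mean((predictions - y) ** 2))         # MSE, comparable to the values printed above
```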
## Problem Statement
Q.1 Write a Python Program to implement your own myreduce() function which works exactly like
Python's built-in function reduce()
```
def myreduce(function, sequence):
temp_result = sequence[0]
for ele in sequence[1:]:
temp_result = function(temp_result, ele)
return temp_result
from functools import reduce
reduce(lambda x, y : x + y, [1,2,3,4])
myreduce(lambda x, y : x + y, [1,2,3,4])
```
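Python's built-in `reduce()` also accepts an optional initializer as a third argument. A minimal sketch, not part of the original answer, that mirrors that part of the signature too:
```
def myreduce(function, sequence, initializer=None):
    it = iter(sequence)
    # start from the initializer if one is given, otherwise from the first element
    result = initializer if initializer is not None else next(it)
    for ele in it:
        result = function(result, ele)
    return result

myreduce(lambda x, y : x + y, [1,2,3,4], 10)   # 20, same as reduce(lambda x, y : x + y, [1,2,3,4], 10)
```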
Q.2 Write a Python program to implement your own myfilter() function which works exactly like
Python's built-in function filter()
```
def myfilter(function, sequence):
return [x for x in sequence if function(x)]
filter_ = filter(lambda x : True if x % 2 == 0 else False, [1,2,3,4])
list(filter_)
myfilter(lambda x : True if x % 2 == 0 else False, [1,2,3,4])
```
Q.3 Implement List comprehensions to produce the following lists.
Write List comprehensions to produce the following Lists
['A', 'C', 'A', 'D', 'G', 'I', 'L', 'D']
['x', 'xx', 'xxx', 'xxxx', 'y', 'yy', 'yyy', 'yyyy', 'z', 'zz', 'zzz', 'zzzz']
['x', 'y', 'z', 'xx', 'yy', 'zz', 'xx', 'yy', 'zz', 'xxxx', 'yyyy', 'zzzz']
[[2], [3], [4], [3], [4], [5], [4], [5], [6]]
[[2, 3, 4, 5], [3, 4, 5, 6], [4, 5, 6, 7], [5, 6, 7, 8]]
[(1, 1), (2, 1), (3, 1), (1, 2), (2, 2), (3, 2), (1, 3), (2, 3), (3, 3)]
```
a = [x for x in "ACADGILD"]
b = [char * times for char in "xyz" for times in range(1,5)]
c = [char * times for times in range(1,5) for char in "xyz"]
d = [[x + i] for x in range(2,5) for i in range(0,3)]
e = [[x+i for x in range(2,6)] for i in range(0,4) ]
f = [(x,y) for y in range(1,4) for x in range(1,4)]
g = [a,b,c,d,e,f]
for i in g:
print(i,end="\n\n")
```
Q.4 Implement a function longestWord() that takes a list of words and returns the longest one.
```
def longestWord(sequence):
result = sequence[0]
for ele in sequence[1:]:
if len(ele) > len(result):
result = ele
return result
longestWord(["A","hdjsjd","jfhferlhhef"])
```
Q.5 Write a Python Program(with class concepts) to find the area of the triangle using the below
formula.
area = (s*(s-a)*(s-b)*(s-c)) ** 0.5
Function to take the length of the sides of triangle from user should be defined in the parent
class and function to calculate the area should be defined in subclass.
```
class Triangle:
def __init__(self, a , b, c):
self.a = float(a)
self.b = float(b)
self.c = float(c)
self.s = float((a + b + c)/2)
    def area(self):
        # compute into a local variable so the result does not overwrite the method
        area = (self.s * (self.s-self.a) * (self.s-self.b) * (self.s-self.c)) ** 0.5
        print(f"Area of triangle : {area}")
t = Triangle(3,4,5)
t.area()
```
Q.6 Write a function filter_long_words() that takes a list of words and an integer n and returns the list
of words that are longer than n.
```
def filter_long_words(sequence, n):
filtered_list = [x for x in sequence if len(x)>n]
return filtered_list
filter_long_words(["Apple","Ball","Bat","Doll","ab","cd"],2)
```
Q.7 Write a Python program using function concept that maps list of words into a list of integers
representing the lengths of the corresponding words.
Hint: If the list [ab, cde, erty] is passed to the function, the output should be [2, 3, 4].
Here 2,3 and 4 are the lengths of the words in the list.
```
list(map(lambda x : len(x), ["ab", "cde", "erty"]))
```
Q.8 Write a Python function which takes a character (i.e. a string of length 1) and returns True if it is
a vowel, False otherwise.
```
def vowel_fun(char):
if char.lower() in "aeiou":
return True
else:
return False
vowel_fun("a")
vowel_fun("c")
```
## Great job!
---
# Cloud Object Store Creation and Configuration.
This notebook walks through the configuration of Streams to access Cloud Object Storage (COS). It assumes that you have already configured access to Streams by completing [HealthcareSetup.ipynb](HealthcareSetup.ipynb).
It walks through:
* Creating a 'Cloud Object Store'(COS) Resource
* Creating SQL Query service in order that you may query the data written to COS.
* Creating a 'bucket' in COS where the data will be written.
* Creating a COS credential
* Setting the COS credentials in a Streams instance.
<a id="setup"></a>
## Create a COS resource.
We'll use Object Storage to store data. Object Storage data is stored in buckets that can be accessed using a SQL interface: [Object Store](https://console.bluemix.net/catalog/services/cloud-object-storage). This walkthrough will...
- Create the COS resource.
- Create a bucket in COS where data is written and retrieved.
### Create Cloud Object Storage
[Create Object Storage](https://console.bluemix.net/catalog/services/cloud-object-storage) resource, by opening the page and selecting 'Create', make a note of the service name.

## Create SQL Query
The SQL Query service enables SQL queries against COS; we'll enable this service so that we can access the data written to COS by Streams.
Select 'Create' on the [Create SQL Query](https://console.bluemix.net/catalog/services/sql-query) to enable SQL Query Service

## Configuration : Buckets and Credentials
On completion you'll be brought back to the catalog; select the Cloud Object Storage resource you just created.

Data is stored in buckets on COS. To create a bucket, select the 'Buckets' tab followed by the 'Create bucket' button.

Specify the bucket name for the demo, in this case 'vitals', then select 'Create bucket'.

Select the 'Service credentials' tab followed by the 'New credential' button.

Select the 'Add' button of the 'Add new credential' popup.

You'll be brought back to the 'Service credentials' page showing the newly created credential. Select 'View credentials' to expand the credential, then the 'copy' icon to copy the credentials to your paste buffer.

## Streams 'Application Configuration'
We've created the Cloud Object Storage resource and the bucket that the application will write to. The next step exposes the configuration to the Streams application. Go to the Streams console of your Streams instance and select the 'Application Configuration' tab on the left side of the console, followed by the boxed 'Application Configuration'.

The 'New application configuration...' pop-up will appear; fill in the following fields.
- Name : cos
- Name in the properties section : cos.creds
- Value : paste the credential copied to your paste buffer in the previous step.

The 'cos' and 'cos.creds' are the default names used by the COS toolkit to locate the credential.
When complete select 'Save App Config'

# Next Step
In this page we:
- Created a bucket named 'vitals' within Cloud Object Storage to store data; the HealthcareCOS application will write to it.
- Created credentials to be used by Streams to write to Object Storage.
- Set up our Streams instance with the Object Storage credentials.
- Enabled SQL Query in order to view the data stored by Streams.
The next step is to run the [HealthcareCOS.ipynb](HealthcareCOS.ipynb) notebook, which will write data to
the bucket 'vitals' in Cloud Object Storage.
---
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Datasets/Terrain/srtm_landforms.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Datasets/Terrain/srtm_landforms.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=Datasets/Terrain/srtm_landforms.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Datasets/Terrain/srtm_landforms.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
dataset = ee.Image('CSP/ERGo/1_0/Global/SRTM_landforms')
landforms = dataset.select('constant')
landformsVis = {
'min': 11.0,
'max': 42.0,
'palette': [
'141414', '383838', '808080', 'EBEB8F', 'F7D311', 'AA0000', 'D89382',
'DDC9C9', 'DCCDCE', '1C6330', '68AA63', 'B5C98E', 'E1F0E5', 'a975ba',
'6f198c'
],
}
Map.setCenter(-105.58, 40.5498, 11)
Map.addLayer(landforms, landformsVis, 'Landforms')
```
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
---
```
# Load libraries
import numpy as np;
import pandas as pd;
import seaborn as sns;
import matplotlib.pyplot as plt;
from sklearn.preprocessing import StandardScaler, LabelEncoder;
from sklearn.linear_model import LinearRegression, Lasso, Ridge;
from sklearn.model_selection import train_test_split, cross_val_score, KFold;
from sklearn.metrics import mean_squared_error;
import warnings;
pd.set_option('display.max_columns', None);
warnings.filterwarnings('ignore');
```
## 1. Load data
```
# Load dataset
data = pd.read_csv('AirQualityUCI/AirQualityUCI.csv', sep=';');
# Shape the dataset
print(data.shape);
# Peek at the dataset
data.head()
# Let's check duplicated values
data[data.duplicated(keep=False)].shape
# Delete duplicated rows
data.drop(index=data[data.duplicated(keep=False)].index.values, columns=['Unnamed: 15','Unnamed: 16'], inplace=True);
data.reset_index(drop=True);
data.shape
# Let's look at types of dataset
data.dtypes
# Quick scratch checks of the string methods used below to convert the Time column
'dad goes to the market'.replace(' ', ':')
'18.00.00'.strip('')
type(data['Time'][0])
data['Time'].str.replace('.00.00', '').astype('int8')
# Convert into correct type
data['Date'] = pd.to_datetime(data['Date']);
for col in ['Time', 'CO(GT)', 'C6H6(GT)', 'T', 'RH','AH']:
if col == 'Time':
data[col] = data[col].str.replace('.00.00', '').astype('int8');
else:
data[col] = data[col].str.replace(',', '.').astype(('float16'));
data.head()
data.info()
```
## 2. EDA
```
# Check if there are any missing values in the train set
ax = data.isna().sum().sort_values().plot(kind = 'barh', figsize = (10, 7))
plt.title('Percentage of Missing Values Per Column in Train Set', fontdict={'size':15})
for p in ax.patches:
percentage ='{:,.0f}%'.format((p.get_width()/data.shape[0])*100)
width, height =p.get_width(),p.get_height()
x=p.get_x()+width+0.02
y=p.get_y()+height/2
ax.annotate(percentage,(x,y))
data.columns.difference(['Date', 'Time']).values
# Histogram
cols = data.columns.difference(['Date', 'Time']).values;
data[cols].hist(figsize=(15, 11));
plt.show()
# Density
data[cols].plot(kind='density', subplots=True, layout=(5,5), sharex=False, legend=True, fontsize=1, figsize=(17,11));
plt.show()
# Box and whisker plots
data[cols].plot(kind='box', subplots=True, layout=(3, 5), sharex=False, sharey=False, fontsize=8, figsize=(16,10));
plt.show()
# Correlation matrix
fig = plt.figure(figsize=(15, 10));
ax = fig.add_subplot(111);
cax = ax.matshow(data[cols].corr(method='pearson'), vmin=-1, vmax=1);
fig.colorbar(cax);
ticks = np.arange(0, 13, 1);
ax.set_xticks(ticks);
ax.set_yticks(ticks);
ax.set_xticklabels(cols, rotation=25);
ax.set_yticklabels(cols);
plt.show()
```
## 3. Processing
```
# Extract date features from the date columns
for date_feature in ['year', 'month', 'day']:
data[date_feature] = getattr(data['Date'].dt, date_feature)
```
## 4. Transformation
## 5. Split data & Fitting models
```
# Select main columns to be used in training
main_cols = data.columns.difference(['C6H6(GT)', 'Date']);
X = data[main_cols];
y = data['C6H6(GT)']
# Split out test and validation dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42); # test
# X_tr, X_val, y_tr, y_val = train_test_split(X_train, y_train, test_size=0.3, random_state=42); # validation
# Cross validation
kfold = KFold(n_splits=3, shuffle=True, random_state=21);
# Fitting
linears = [];
linears.append(('LR', LinearRegression()));
linears.append(('RIDGE', Ridge()));
linears.append(('LASSO', Lasso()));
# Evaluate
for name, model in linears:
scores = cross_val_score(model, X_train, y_train, scoring='neg_mean_squared_error', cv=kfold);
print('%s : %.4f(%.4f)' % (name, -1 * scores.mean(), scores.std()));
```
## 6. Evaluate on test data
```
# Fit the best model
mod = LinearRegression();
mod = mod.fit(X_train, y_train);
y_pred = mod.predict(X_test);
# Evaluate the fitted model on the held-out test set
print('Score :', mean_squared_error(y_test, y_pred));
# Plotting predicted and true values
plt.figure(figsize=(17, 5))
plt.plot(np.arange(0, len(y_pred)), y_pred, 'o', linestyle='dashed', linewidth=1, markersize=5, label='Prediction')
plt.plot(np.arange(0, len(y_test)), y_test, 'o', linestyle='dashed', linewidth=1, markersize=5, label='True')
plt.legend()
plt.show()
```
## 7. Persist the preprocessed data
```
data.to_csv('pre_air.csv', index=False);
```
---
```
## Importing the necessary Libraries
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import psycopg2 as pg2
import datetime as dt
# package used for converting the data into datetime format
from sklearn.preprocessing import LabelEncoder
from sklearn.feature_selection import RFE, f_regression
from sklearn.linear_model import LinearRegression, Ridge, Lasso  # RandomizedLasso is unused here and was removed from newer scikit-learn
from sklearn.preprocessing import MinMaxScaler
from sklearn.ensemble import RandomForestRegressor
import warnings
warnings.filterwarnings("ignore")
##Establish connection to the postgres database
conn= pg2.connect('dbname = Amazon user= postgres password =data host= 127.0.0.1')
cur=conn.cursor()
df_raw = pd.read_sql_query('select * from public.keepa', conn)
#Check the dimension of the raw data to see if its properly imported
print('Starting size of our Dataset ')
df_raw.shape
# Print out count of each datatype in the dataframe
df_raw.dtypes.value_counts()
```
### Price Aggregator
The price information is spread across three different columns depending on the availability and condition of the books carried by Amazon.
Following Keepa's convention, the price is chosen based on which value is available, in the order below. The aggregator function adds
a new column called 'price' to the dataset, assigns the first available value from the following list, and finally
deletes the three original price columns.
* amazon_price
* marketplace_new_price
* marketplace_used_price
```
def PriceAggregator(original_df):
df=original_df
# create a copy of the three columns to choose amazon price from
df_copy=df[['amazon_price','marketplace_new_price','marketplace_used_price']]
# Replace missing price denoted by -1 to Null in all three price columns
for item in df_copy:
df_copy[item].replace('-1',np.nan, inplace=True)
# Add a new column to store the aggregated price with default value of 'amazon_price'
df.insert(79,'price',df_copy['amazon_price'].astype('float'))
    #Loop through all three columns to assign a non-null value to the newly created price column.
#Keep amazon_price as is if not null, otherwise assign marketplace_new_price as the new price.
#Where both 'amazon_price' and 'marketplace_new_price' are null, price will be set to
#'marketplace_used_price' regardless of its value.
for i in range(df['price'].size):
if pd.isnull(df['price'][i]):
if pd.isnull(df_copy['marketplace_new_price'][i]):
if pd.isnull(df_copy['marketplace_used_price'][i]):
pass
else:
df['price'][i]=df_copy['marketplace_used_price'][i]
else:
df['price'][i]=df_copy['marketplace_new_price'][i]
else:
pass
#Delete records where price record is missing since there is no value to cross check
#the accuracy of the model in the test set.
df.dropna(subset=['price'], axis=0, inplace=True)
#Reset index after dropping rows with missing price
df.reset_index(drop= True, inplace=True)
#Delete old price columns after assigning aggregated price to a brand new column
df.drop(['amazon_price','marketplace_new_price','marketplace_used_price'], axis=1 , inplace=True)
#Return the a dataframe with a new price column added to the original dataframe
return df
df=PriceAggregator(df_raw)
df.shape
```
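The row-by-row loop in `PriceAggregator` is easy to follow but slow on a dataset of this size. Below is a minimal vectorized sketch of the same priority rule, assuming the same three Keepa price columns; it is an illustration and is not used in the rest of this notebook.
```
def PriceAggregatorVectorized(original_df):
    df = original_df.copy()
    # Coerce the three price columns to numbers and treat Keepa's -1 ("no price") as missing
    prices = df[['amazon_price','marketplace_new_price','marketplace_used_price']].apply(
        pd.to_numeric, errors='coerce').replace(-1, np.nan)
    # First non-null value in priority order: amazon > marketplace new > marketplace used
    df['price'] = prices['amazon_price'] \
        .combine_first(prices['marketplace_new_price']) \
        .combine_first(prices['marketplace_used_price'])
    df = df.dropna(subset=['price']).reset_index(drop=True)
    return df.drop(['amazon_price','marketplace_new_price','marketplace_used_price'], axis=1)
```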
## Delete duplicate records if there are any
```
df.drop_duplicates(inplace = True)
df.shape
## categorical valued features
cat=['author', 'binding','brand','categoryTree_0','categoryTree_1','categoryTree_2','categoryTree_3','categoryTree_4',
'color','edition','features','format','genre','label','languages_0', 'manufacturer','productGroup','publisher','studio',
'title','type']
df[cat].head(5)
```
## Replace every missing value with Null Values for further analysis.
```
df.replace('', np.NaN, inplace=True)
#df.fillna(np.NaN)
df.isna().sum().sort_values(ascending=False).to_frame(name='Count of Null Values')
```
We can delete the columns that contain null values for the majority of the records, since those features are not shared by most of our instances.
```
Null_features=['coupon','offers','liveOffersOrder','promotions','buyBoxSellerIdHistory','features','upcList','variations',
'hazardousMaterialType','genre','platform','variationCSV','parentAsin','department','size','model','color'
,'partNumber','mpn','brand','edition','format']
df[Null_features].isna().sum()
```
We can delete these features without losing useful information, since each of the features listed above is null in more than 50% of the records.
```
df.drop(Null_features, axis=1, inplace=True)
df.shape
```
For the remaining null values, where the total count is relatively small, we will replace them with statistically representative values such as the mean or the mode.
* Mode for categorical columns where there is a clear majority; otherwise filled with 'Unknown'
* Mean for numerical columns
```
with_Nulls=df.loc[:, df.isna().sum()!=0].columns.tolist()
df[with_Nulls].isna().sum().sort_values(ascending=False)
## For our records sets mainly comprised of string or categorival data
Nulls2Unknown=['categoryTree_4','categoryTree_3','categoryTree_2','author','studio','publisher','manufacturer',
'label']
df[with_Nulls].head(3)
for item in with_Nulls:
print (f'{item}\t\t{df[item].value_counts().max()}')
```
Given that our data contains roughly 100,000 records, the mode is clearly dominant for some of the features, so we use it to replace their nulls.
```
Nulls2Mode=['languages_0','categoryTree_0','categoryTree_1']
mode = df.filter(['languages_0','categoryTree_0','categoryTree_1']).mode()
df[Nulls2Mode]=df[Nulls2Mode].fillna(df.mode().iloc[0])
```
For the following features, since no single category has a dominant frequency (mode), we fill the missing (null) values with 'Unknown'.
```
NullswithNoMode=df.loc[:, df.isna().sum()!=0].columns.tolist()
for item in NullswithNoMode:
print(item)
print(df[item].value_counts().nlargest(3))
df[NullswithNoMode]=df[NullswithNoMode].fillna('Unknown')
# Check if there are still missing or null values in the dataset
df[df.loc[:, df.isna().sum()!=0].columns].isna().sum()
```
We have entirely replaced the null and missing values in the dataset by statistically representative values.
## Data Type Conversion
```
df.dtypes.value_counts()
```
Let's identify the columns that hold numeric values but are stored as strings (object) and convert them to numeric types.
```
#df[strings] = df[strings].apply(pd.to_numeric, errors='coerce', axis=1)
df.dtypes.value_counts()
#Convert columns that contain numerical values to numeric data type using pandsas to_numeric
numeric=['availabilityAmazon',
'ean','hasReviews', 'isEligibleForSuperSaverShipping', 'isEligibleForTradeIn',
'isRedirectASIN', 'isSNS', 'lastPriceChange','lastRatingUpdate', 'lastUpdate', 'listedSince',
'newPriceIsMAP', 'numberOfItems','numberOfPages', 'offersSuccessful', 'packageHeight',
'packageLength', 'packageQuantity', 'packageWeight', 'packageWidth',
'publicationDate', 'releaseDate', 'rootCategory','stats_atIntervalStart', 'stats_avg', 'stats_avg30', 'stats_avg90',
'stats_avg180', 'stats_current', 'stats_outOfStockPercentage30',
'stats_outOfStockPercentage90', 'stats_outOfStockPercentageInInterval',
'trackingSince','sales_rank', 'price']
#cols = ['productType','rootCategory','stats_atIntervalStart','availabilityAmazon','hasReviews','isRedirectASIN','isSNS','isEligibleForTradeIn','isEligibleForSuperSaverShipping', 'ean','hasReviews', 'availabilityAmazon','isEligibleForTradeIn','lastPriceChange','lastRatingUpdate','lastUpdate','lastRatingUpdate','lastUpdate','listedSince',"newPriceIsMAP", "numberOfItems", "numberOfPages","packageHeight", "packageLength","packageQuantity", "packageWeight", "packageWidth",'stats_avg', 'stats_avg30', 'stats_avg90', 'stats_avg180', 'stats_current',"stats_outOfStockPercentage30", "stats_outOfStockPercentage90","stats_outOfStockPercentageInInterval","trackingSince",'upc','price','amazon_price', 'marketplace_new_price', 'marketplace_used_price', 'sales_rank']
df[numeric] = df[numeric].apply(pd.to_numeric, errors='coerce', axis=1)
df.dtypes.value_counts()
strings=df.loc[:, df.dtypes == np.object].columns.tolist()
print('\n'+ 'Sample of the dataset with only categorical information'+'\n')
df[strings].head(3)
```
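Because `errors='coerce'` silently turns any unparseable strings into NaN, a quick count of nulls in the converted columns is a useful guard. This is a small check added here, not in the original notebook.
```
# Count NaNs present in the coerced numeric columns (unparseable strings become NaN)
print(df[numeric].isna().sum().sort_values(ascending=False).head(10))
```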
We can delete the asin, ean, upc and imagesCSV columns since the information they contain is not a characteristic description of the books.
```
df.drop(['asin','imagesCSV','ean', 'upc'], axis=1, inplace=True)
#upc might break code watch for it
df.shape
df.dtypes.value_counts()
df.loc[:, df.dtypes == np.object].columns
```
The languages_0 column contains comma-separated aggregated information; we are going to split it into two parts.
```
df['languages_0'].head(5)
new = df['languages_0'].str.split(",", n = 1, expand = True)
df['language_1']=new[0]
df['language_2']=new[1]
# reduced categories froom 9 to 6 grouping related categories together
#df['language_1'].value_counts().to_frame()
#group English, english and Middle English to one categry
df['language_1'].replace(('English', 'english','Middle English'),'English', inplace = True)
#grouping Spanish,Portuguese and Latin under "Spanish"
df['language_1'].replace(('Spanish', 'Portuguese','Latin'),'Spanish', inplace = True)
#grouping Chinese, mandarin Chinese and simplified chinese to Chinese
df['language_1'].replace(('Simplified Chinese', 'Mandarin Chinese','Chinese'),'Chinese', inplace = True)
#grouping Arabic,Hebrew and Turkish under Middle Eastern
df['language_1'].replace(('Arabic', 'Hebrew','Turkish'),'Middle Eastern', inplace = True)
# group languages with single entry record in to one group called 'Others'
df['language_1'].replace(('Hindi', 'Scots','Filipino','Malay','Dutch','Greek','Korean','Romanian','Czech'),'Others', inplace = True)
#grouping Danish and Norwegian into one group of 'Scandinavian'
df['language_1'].replace(('Danish', 'Norwegian'),'Scandinavian', inplace=True)
#replaced ('published','Published,Dolby Digital 1.0','Published,DTS-HD 5.1') by Published
df['language_2'].replace(('published','Published,Dolby Digital 1.0','Published,DTS-HD 5.1'),'Published', inplace=True)
df[['language_1','language_2']].head(5)
#Since we have copied the information into new columns we can delete the languages_0 column
df.drop(['languages_0'], axis=1 , inplace=True)
df.columns
df.shape
df.binding.value_counts()
```
The binding column contains 73 different categories, many of which are closely related and some of which contain very few items. We will aggregate closely related categories to reduce the dimensionality of this variable and avoid the curse of dimensionality.
```
df.binding.nunique()
dict={'Unknown':['Printed Access Code', 'Unknown','Health and Beauty', 'Lawn & Patio', 'Workbook', 'Kitchen', 'Automotive', 'Jewelry'],
'spiral':[ 'Spiral-bound', 'Staple Bound', 'Ring-bound', 'Plastic Comb', 'Loose Leaf', 'Thread Bound'],
'magazines':[ 'Journal', 'Single Issue Magazine', 'Print Magazine'],
'audios':[ 'Audible Audiobook', 'Audio CD', 'DVD', 'Album', 'MP3 CD', 'Audio CD Library Binding'],
'digital_prints':[ 'CD-ROM', 'Blu-ray', 'DVD-ROM', 'Kindle Edition', 'Video Game', 'Sheet music', 'Software Download',
'Personal Computers', 'Electronics', 'Game', 'Wireless Phone Accessory'],
'hardcovers':['Hardcover', 'Hardcover-spiral', 'Turtleback', 'Roughcut'],
'others':[ 'Cards', 'Pamphlet', 'Calendar', 'Map', 'Stationery', 'Accessory', 'Misc. Supplies', 'Office Product', 'Poster',
'Wall Chart', 'Bookmark', 'JP Oversized'],
'paperbacks':[ 'Paperback', 'Perfect Paperback', 'Mass Market Paperback', 'Flexibound', 'Print on Demand (Paperback)',
'Comic', 'Puzzle', 'Paperback Bunko'],
'leather_bonded':[ 'Bonded Leather', 'Leather Bound', 'Imitation Leather', 'Vinyl Bound'],
'board_book':[ 'Board book', 'Baby Product', 'Toy', 'Rag Book', 'Card Book', 'Bath Book', 'Pocket Book'],
'schoolLibrary_binding':[ 'School & Library Binding', 'Library Binding', 'Textbook Binding']}
for key,val in dict.items():
df.binding.replace(val,key, inplace=True)
df.binding.value_counts()
df.head()
def groupUnder10(x):
    # Rename categories with 10 or fewer items to 'Others' to limit dimensionality
    cond = df[x].value_counts()
    threshold = 10
    df[x] = np.where(df[x].isin(cond.index[cond > threshold]), df[x], 'Others')
    return('All categories with 10 or fewer items in the %s column are renamed to Others in order to avoid the curse of dimensionality' %x)
df[['categoryTree_1','categoryTree_2','categoryTree_3','categoryTree_4']].nunique()
groupUnder10('categoryTree_2')
#group under 10 counts in to one for categoryTree_3 column
groupUnder10('categoryTree_3')
groupUnder10('categoryTree_4')
df[['categoryTree_0','categoryTree_1','categoryTree_2','categoryTree_3','categoryTree_4']].nunique()
## Some features are duplicated within the dataset; let's identify and delete the duplicated columns
duplicates=df[['label', 'manufacturer', 'publisher', 'studio']]
df['label'].equals(df['manufacturer'])
df['label'].equals(duplicates['publisher'])
df['label'].equals(duplicates['studio'])
duplicates.describe(include='all')
df.duplicated(subset=['label', 'manufacturer', 'publisher', 'studio'],keep='first').value_counts()
```
Since the above 4 columns contain duplicated information in 89,493 out of 99,600 total records, we can keep one of them and drop the remaining ones without losing useful information.
```
# Keep publisher and drop the rest
df.drop(['label', 'manufacturer','studio'], axis =1, inplace=True)
df.shape
df.describe(include='all').transpose()
```
## Encoding categorical columns
```
cat_cols=['author','language_1','language_2','binding','categoryTree_0', 'categoryTree_1', 'categoryTree_2', 'categoryTree_3',
          'categoryTree_4','productGroup','publisher','title','type']
df[cat_cols].head()
#might not be necessary
df['author']=df['author'].astype(str)
df['language_2']=df['language_2'].astype(str)
df['categoryTree_1']=df['categoryTree_1'].astype(str)
df['categoryTree_2']=df['categoryTree_2'].astype(str)
df['categoryTree_3']=df['categoryTree_3'].astype(str)
df['categoryTree_4']=df['categoryTree_4'].astype(str)
```
## Outlier detection and transformation
Before we decide whether to use the standard deviation or the interquartile range to identify outliers, let's plot the data points using a distribution plot.
```
def distWithBox(data):
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(style="ticks")
x = df[data]
f, (ax_box, ax_hist) = plt.subplots(2, sharex=True,
gridspec_kw={"height_ratios": (.15, .85)})
sns.boxplot(x, ax=ax_box)
sns.distplot(x, ax=ax_hist)
ax_box.set(yticks=[])
sns.despine(ax=ax_hist)
sns.despine(ax=ax_box, left=True)
## Distribution and box plot of the raw data with outliers
distWithBox('price')
```
We can see from the graph that the distribution is not normal, so we use the interquartile range to cut off outliers.
```
from numpy import percentile
data=df['price']
q25, q75 = percentile(data, 25), percentile(data, 75)
iqr = q75 - q25
print('Percentiles: 25th=%.3f, 75th=%.3f, IQR=%.3f' % (q25, q75, iqr))
# calculate the outlier cutoff
cut_off = iqr * 1.5
lower, upper = q25 - cut_off, q75 + cut_off
# identify outliers
outliers = [x for x in data if x < lower or x > upper]
print('Identified outliers: %d' % len(outliers))
outliers_removed = [x for x in data if x >= lower and x <= upper]
print('Non-outlier observations: %d' % len(outliers_removed))
outliers=[]
data_1=df['price']
for item in data_1:
if item <lower or item>upper:
outliers.append(item)
x=df['price']
outlier_indices=list(data_1.index[(x<lower) | (x> upper)])
len(outlier_indices)
df.drop(axis=0,index=outlier_indices, inplace=True)
df.shape
## lets plot distribution with and box plot to see the change after we trim down the outliers
distWithBox('price')
```
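The index-collection loop above can also be expressed as a single boolean mask. A minimal alternative sketch using the same `lower` and `upper` bounds computed above (equivalent to the drop already performed, so it flags nothing new if run afterwards):
```
# Vectorized equivalent of the outlier-removal loop: keep rows whose price lies inside the IQR fence
mask = df['price'].between(lower, upper)
print('Rows flagged as outliers:', (~mask).sum())
df = df[mask].reset_index(drop=True)
```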
### Label Encoding
```
df[cat_cols]= df[cat_cols].apply(LabelEncoder().fit_transform)
```
### Feature Selection
VarianceThreshold is a simple baseline approach to feature selection. It removes all features whose variance doesn’t meet some threshold. By default, it removes all zero-variance features, i.e. features that have the same value in all samples.
Here we select using the threshold 0.8 * (1 - 0.8), as the quick sanity check below illustrates, and then apply it to our features:
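The 0.8 * (1 - 0.8) value is the variance of a Bernoulli feature: a boolean column that takes the same value in 80% of the samples has variance 0.8 * 0.2 = 0.16, so the threshold removes near-constant features. A quick check on a hypothetical toy matrix (the values are illustrative only, not from this dataset):
```
import numpy as np
from sklearn.feature_selection import VarianceThreshold
# Toy matrix: first column is almost constant (one 1 in ten rows), second is a balanced 50/50 split
toy = np.array([[0, 1], [0, 0], [0, 1], [0, 0], [0, 1],
                [0, 0], [0, 1], [0, 0], [0, 1], [1, 0]])
print(toy.var(axis=0))                                 # [0.09, 0.25]
sel = VarianceThreshold(threshold=(.8 * (1 - .8)))
print(sel.fit_transform(toy).shape)                    # (10, 1): only the balanced column survives
```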
```
df_X=df.loc[:, df.columns != 'price']
df_y=df['price']
from sklearn.feature_selection import VarianceThreshold
print('%s Number of features before VarianceThreshhold'%len(df_X.columns))
selector=VarianceThreshold(threshold=(.8*(1-.8)))
FeaturesTransformed=selector.fit_transform(df_X)
## print the support and shape of the transformed features
print(selector.get_support())
data=df_X[df_X.columns[selector.get_support(indices=True)]]
cols=data.columns
df_reduced=pd.DataFrame(FeaturesTransformed, columns=cols)
df_reduced.shape
from sklearn.model_selection import train_test_split as split
X=df[['language_1', 'sales_rank', 'type', 'trackingSince', 'title',
'stats_outOfStockPercentageInInterval', 'stats_outOfStockPercentage90',
'stats_outOfStockPercentage30', 'stats_current', 'stats_avg180',
'stats_avg90', 'stats_avg30', 'stats_avg', 'stats_atIntervalStart',
'rootCategory', 'releaseDate', 'publisher', 'publicationDate',
'productGroup', 'packageWidth', 'packageWeight', 'packageQuantity',
'packageLength', 'packageHeight', 'numberOfPages', 'numberOfItems',
'listedSince', 'lastUpdate', 'lastRatingUpdate', 'lastPriceChange',
'isEligibleForSuperSaverShipping', 'categoryTree_4', 'categoryTree_3',
'categoryTree_2', 'binding', 'author', 'productType']]
Y=df['price']
```
Standardization rescales data to have a mean (μ) of 0 and a standard deviation (σ) of 1 (unit variance):
X_scaled = (X − μ) / σ
Standardization is recommended for most applications, especially when reducing the dimensionality of the data with algorithms such as principal component analysis (PCA); the general procedure is to standardize the input data first.
```
# The standardization is only applied for independent variables i.e. the variables on which the output value is predicted (price)
from sklearn.preprocessing import StandardScaler
names = X.columns
scale = StandardScaler()
X_df = scale.fit_transform(X)
X_df = pd.DataFrame(X_df, columns=names)
Y=df['price']
X_df.head(5)
```
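A quick sanity check of the rescaling: after `StandardScaler`, every column should have a mean of approximately 0 and a (population) standard deviation of approximately 1.
```
# Column means should be ~0 and population standard deviations ~1 (up to floating-point noise)
print(X_df.mean().abs().max())
print(X_df.std(ddof=0).round(3).head())
```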
Different supervised regression based machine learning models are implemented on the standardized data.
```
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_error
import math
def ModelScores(data,target):
X = data
Y=target
from sklearn.model_selection import train_test_split as split
X_train, X_test, Y_train, Y_test= split(X_df,Y,test_size=0.25, random_state=40)
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import RidgeCV
from sklearn.linear_model import LassoLars
from sklearn.linear_model import Lasso
from sklearn.linear_model import ElasticNet
from sklearn.svm import SVR
from sklearn.linear_model import BayesianRidge
from sklearn.linear_model import RANSACRegressor
models={'Gradient Boost': GradientBoostingRegressor(),
'Random Forest': RandomForestRegressor(),
'Decision Tree': DecisionTreeRegressor(),
'Linear Regression': LinearRegression(),
'MLP': MLPRegressor(),
'Ridge CV': RidgeCV(),
'LassoLars':LassoLars(),
'Lasso':Lasso(),
'Elastic Search': ElasticNet(),
'Bayesian Ridge':BayesianRidge(),
'Ransac':RANSACRegressor()
}
for name,model in models.items():
mdl=model
mdl.fit(X_train, Y_train)
prediction = mdl.predict(X_test)
print(name)
print("Accuracy Score", r2_score(Y_test, prediction))
mse3 = mean_squared_error(Y_test, prediction)
print("The root mean square value", math.sqrt(mse3))
ModelScores(X_df,Y)
```
After running the algorithms, the top two are selected based on their Accuracy Score and RMSE values. For these algorithms, hyperparameter tuning with GridSearchCV is applied to find the best parameters, which can improve the model metrics.
```
# The GridSearchCV for Gradient Boosting Regressor model
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import GradientBoostingRegressor
GradientBoosting = GradientBoostingRegressor(random_state = 40)
alphas = [0.001, 0.01, 0.1, 0.5, 0.9]
sample_split = [2,3,4,5,6,7,8]
max_depth = [4,5,6,7,8,9]
learning_rate = [0.1, 0.3, 0.5, 0.7]
tuned_params = [{'alpha': alphas}, {'min_samples_split': sample_split}, {'max_depth': max_depth}, {'learning_rate':learning_rate}]
n_folds = 5
grid = GridSearchCV(
GradientBoosting, tuned_params, cv=n_folds
)
grid.fit(X_df, Y)
print(grid.best_estimator_)
# The GridSearchCV for the RandomForest Regressor model
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestRegressor
RandomForest = RandomForestRegressor(random_state = 40)
estimators = [10,50,100]
sample_split = [2,3,4,5,6,7,8]
sample_leaf = [1,2,3,4,5]
max_depth = [4,5,6,7,8,9]
tuned_params = [{'n_estimators': estimators}, {'min_samples_split': sample_split}, {'min_samples_leaf': sample_leaf},{'max_leaf_nodes': max_depth}]
n_folds = 5
grid = GridSearchCV(
RandomForest, tuned_params, cv=n_folds
)
grid.fit(X_df, Y)
print(grid.best_estimator_)
```
Now the parameters found by GridSearchCV are plugged into the model definitions, and the metric values are displayed below for both the Random Forest and Gradient Boosting algorithms.
```
Y = df['price']
Y.head()
from sklearn.model_selection import train_test_split as split
X_train, X_test, Y_train, Y_test = split(X_df, Y, test_size = 0.25, random_state = 40)
from sklearn.ensemble import GradientBoostingRegressor
gbr = GradientBoostingRegressor(alpha=0.001, criterion='friedman_mse', init=None,
learning_rate=0.1, loss='ls', max_depth=3, max_features=None,
max_leaf_nodes=None, min_impurity_decrease=0.0,
min_impurity_split=None, min_samples_leaf=1,
min_samples_split=2, min_weight_fraction_leaf=0.0,
n_estimators=100, n_iter_no_change=None, presort='auto',
random_state=40, subsample=1.0, tol=0.0001,
validation_fraction=0.1, verbose=0, warm_start=False)
gbr.fit(X_train, Y_train)
pred = gbr.predict(X_test)
print("Accuracy Score", r2_score(Y_test, pred))
mse5 = mean_squared_error(Y_test, pred)
print("The root mean square value", math.sqrt(mse5))
from sklearn.ensemble import RandomForestRegressor
rfr = RandomForestRegressor(bootstrap=True, criterion='mse', max_depth=None,
max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=100, n_jobs=None,
oob_score=False, random_state=40, verbose=0, warm_start=False)
rfr.fit(X_train, Y_train)
prediction = rfr.predict(X_test)
print("Accuracy Score", r2_score(Y_test, prediction))
mse6 = mean_squared_error(Y_test, prediction)
print("The root mean square value", math.sqrt(mse6))
```
Applying principal component analysis to the standardized data to reduce its dimensionality.
```
from sklearn.decomposition import PCA
pca = PCA()
X = pca.fit_transform(X_df)
exp_variance = pca.explained_variance_ratio_
print(exp_variance)
# Repeating the step of applying the different algorithms on the PCA applied data
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_error
import math
def StandardModelScores(data,target):
X = data
Y=target
from sklearn.model_selection import train_test_split as split
X_train, X_test, Y_train, Y_test= split(X,Y,test_size=0.25, random_state=40)
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import RidgeCV
from sklearn.linear_model import LassoLars
from sklearn.linear_model import Lasso
from sklearn.linear_model import ElasticNet
from sklearn.svm import SVR
from sklearn.linear_model import BayesianRidge
from sklearn.linear_model import RANSACRegressor
models={'Gradient Boost': GradientBoostingRegressor(),
'Random Forest': RandomForestRegressor(),
'Decision Tree': DecisionTreeRegressor(),
'Linear Regression': LinearRegression(),
'MLP': MLPRegressor(),
'Ridge CV': RidgeCV(),
'LassoLars':LassoLars(),
'Lasso':Lasso(),
'Elastic Search': ElasticNet(),
'Bayesian Ridge':BayesianRidge(),
'Ransac':RANSACRegressor()
}
for name,model in models.items():
mdl=model
mdl.fit(X_train, Y_train)
prediction = mdl.predict(X_test)
print(name)
print("Accuracy Score", r2_score(Y_test, prediction))
mse3 = mean_squared_error(Y_test, prediction)
print("The root mean square value", math.sqrt(mse3))
StandardModelScores(X,Y)
```
Now applying the GridSearchCV parameters to the top 2 performing algorithms, i.e. the Random Forest and Gradient Boosting regressors.
```
from sklearn.model_selection import train_test_split as split
X_train, X_test, Y_train, Y_test= split(X,Y,test_size=0.25, random_state=40)
from sklearn.ensemble import RandomForestRegressor
rfg = RandomForestRegressor(bootstrap=True, criterion='mse', max_depth=None,
max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=100, n_jobs=None,
oob_score=False, random_state=40, verbose=0, warm_start=False)
rfg.fit(X_train, Y_train)
prediction14 = rfg.predict(X_test)
print("Accuracy Score", r2_score(Y_test, prediction14))
mse2 = mean_squared_error(Y_test, prediction14)
print("The root mean square value", math.sqrt(mse2))
from sklearn.model_selection import train_test_split as split
X_train, X_test, Y_train, Y_test= split(X,Y,test_size=0.25, random_state=40)
from sklearn.ensemble import GradientBoostingRegressor
modelS = GradientBoostingRegressor(alpha=0.9, criterion='friedman_mse', init=None,
learning_rate=0.1, loss='ls', max_depth=6, max_features=None,
max_leaf_nodes=None, min_impurity_decrease=0.0,
min_impurity_split=None, min_samples_leaf=1,
min_samples_split=2, min_weight_fraction_leaf=0.0,
n_estimators=100, n_iter_no_change=None, presort='auto',
random_state=40, subsample=1.0, tol=0.0001,
validation_fraction=0.1, verbose=0, warm_start=False)
modelS.fit(X_train,Y_train)
predictionS1 = modelS.predict(X_test)
print("Accuracy Score", r2_score(Y_test, predictionS1))
mse = mean_squared_error(Y_test, predictionS1)
print("The root mean square value", math.sqrt(mse))
```
#cols = ['productType','rootCategory','stats_atIntervalStart','availabilityAmazon','hasReviews','isRedirectASIN','isSNS','isEligibleForTradeIn','isEligibleForSuperSaverShipping', 'ean','hasReviews', 'availabilityAmazon','isEligibleForTradeIn','lastPriceChange','lastRatingUpdate','lastUpdate','lastRatingUpdate','lastUpdate','listedSince',"newPriceIsMAP", "numberOfItems", "numberOfPages","packageHeight", "packageLength","packageQuantity", "packageWeight", "packageWidth",'stats_avg', 'stats_avg30', 'stats_avg90', 'stats_avg180', 'stats_current',"stats_outOfStockPercentage30", "stats_outOfStockPercentage90","stats_outOfStockPercentageInInterval","trackingSince",'upc','price','amazon_price', 'marketplace_new_price', 'marketplace_used_price', 'sales_rank']
df[numeric] = df[numeric].apply(pd.to_numeric, errors='coerce', axis=1)
df.dtypes.value_counts()
strings=df.loc[:, df.dtypes == np.object].columns.tolist()
print('\n'+ 'Sample of the dataset with only categorical information'+'\n')
df[strings].head(3)
df.drop(['asin','imagesCSV','ean', 'upc'], axis=1, inplace=True)
#upc might break code watch for it
df.shape
df.dtypes.value_counts()
df.loc[:, df.dtypes == np.object].columns
df['languages_0'].head(5)
new = df['languages_0'].str.split(",", n = 1, expand = True)
df['language_1']=new[0]
df['language_2']=new[1]
# reduced categories froom 9 to 6 grouping related categories together
#df['language_1'].value_counts().to_frame()
#group English, english and Middle English to one categry
df['language_1'].replace(('English', 'english','Middle English'),'English', inplace = True)
#grouping Spanish,Portuguese and Latin under "Spanish"
df['language_1'].replace(('Spanish', 'Portuguese','Latin'),'Spanish', inplace = True)
#grouping Chinese, mandarin Chinese and simplified chinese to Chinese
df['language_1'].replace(('Simplified Chinese', 'Mandarin Chinese','Chinese'),'Chinese', inplace = True)
#grouping Arabic,Hebrew and Turkish under Middle Eastern
df['language_1'].replace(('Arabic', 'Hebrew','Turkish'),'Middle Eastern', inplace = True)
# group languages with single entry record in to one group called 'Others'
df['language_1'].replace(('Hindi', 'Scots','Filipino','Malay','Dutch','Greek','Korean','Romanian','Czech'),'Others', inplace = True)
#grouping Danish and Norwegian into one group of 'Scandinavian'
df['language_1'].replace(('Danish', 'Norwegian'),'Scandinavian', inplace=True)
#replaced ('published','Published,Dolby Digital 1.0','Published,DTS-HD 5.1') by Published
df['language_2'].replace(('published','Published,Dolby Digital 1.0','Published,DTS-HD 5.1'),'Published', inplace=True)
df[['language_1','language_2']].head(5)
#Since we have copied the information into new columns we can delete the languages_0 column
df.drop(['languages_0'], axis=1 , inplace=True)
df.columns
df.shape
df.binding.value_counts()
df.binding.nunique()
dict={'Unknown':['Printed Access Code', 'Unknown','Health and Beauty', 'Lawn & Patio', 'Workbook', 'Kitchen', 'Automotive', 'Jewelry'],
'spiral':[ 'Spiral-bound', 'Staple Bound', 'Ring-bound', 'Plastic Comb', 'Loose Leaf', 'Thread Bound'],
'magazines':[ 'Journal', 'Single Issue Magazine', 'Print Magazine'],
'audios':[ 'Audible Audiobook', 'Audio CD', 'DVD', 'Album', 'MP3 CD', 'Audio CD Library Binding'],
'digital_prints':[ 'CD-ROM', 'Blu-ray', 'DVD-ROM', 'Kindle Edition', 'Video Game', 'Sheet music', 'Software Download',
'Personal Computers', 'Electronics', 'Game', 'Wireless Phone Accessory'],
'hardcovers':['Hardcover', 'Hardcover-spiral', 'Turtleback', 'Roughcut'],
'others':[ 'Cards', 'Pamphlet', 'Calendar', 'Map', 'Stationery', 'Accessory', 'Misc. Supplies', 'Office Product', 'Poster',
'Wall Chart', 'Bookmark', 'JP Oversized'],
'paperbacks':[ 'Paperback', 'Perfect Paperback', 'Mass Market Paperback', 'Flexibound', 'Print on Demand (Paperback)',
'Comic', 'Puzzle', 'Paperback Bunko'],
'leather_bonded':[ 'Bonded Leather', 'Leather Bound', 'Imitation Leather', 'Vinyl Bound'],
'board_book':[ 'Board book', 'Baby Product', 'Toy', 'Rag Book', 'Card Book', 'Bath Book', 'Pocket Book'],
'schoolLibrary_binding':[ 'School & Library Binding', 'Library Binding', 'Textbook Binding']}
for key,val in dict.items():
df.binding.replace(val,key, inplace=True)
df.binding.value_counts()
df.head()
#catTree_under10.categoryTree_2.values= 'Other'
def groupUnder10(x):
cond = df[x].value_counts()
threshold = 10
df[x] = np.where(df[x].isin(cond.index[cond > threshold ]), df[x], 'Others')
return('All the different categories that contain less than 10 items in the %s column are renamed to Others \n inorder to avoid curse of dimensionality' %x)
df[['categoryTree_1','categoryTree_2','categoryTree_3','categoryTree_4']].nunique()
groupUnder10('categoryTree_2')
#group under 10 counts in to one for categoryTree_3 column
groupUnder10('categoryTree_3')
groupUnder10('categoryTree_4')
df[['categoryTree_0','categoryTree_1','categoryTree_2','categoryTree_3','categoryTree_4']].nunique()
## Some features are duplicated within the dataset, lets delete those duplicated columns
## Delete duplicated features
duplicates=df[['label', 'manufacturer', 'publisher', 'studio']]
df['label'].equals(df['manufacturer'])
df['label'].equals(duplicates['publisher'])
df['label'].equals(duplicates['studio'])
duplicates.describe(include='all')
df.duplicated(subset=['label', 'manufacturer', 'publisher', 'studio'],keep='first').value_counts()
# Keep publisher and drop the rest
df.drop(['label', 'manufacturer','studio'], axis =1, inplace=True)
df.shape
df.describe(include='all').transpose()
cat_cols=['author','language_1','language_2','binding','categoryTree_0', 'categoryTree_1', 'categoryTree_2', 'categoryTree_3',
'categoryTree_4','productGroup','publisher','title','type','language_1','language_2']
df[cat_cols].head()
#might not be necessary
df['author']=df['author'].astype(str)
df['language_2']=df['language_2'].astype(str)
df['categoryTree_1']=df['categoryTree_1'].astype(str)
df['categoryTree_2']=df['categoryTree_2'].astype(str)
df['categoryTree_3']=df['categoryTree_3'].astype(str)
df['categoryTree_4']=df['categoryTree_4'].astype(str)
def distWithBox(data):
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(style="ticks")
x = df[data]
f, (ax_box, ax_hist) = plt.subplots(2, sharex=True,
gridspec_kw={"height_ratios": (.15, .85)})
sns.boxplot(x, ax=ax_box)
sns.distplot(x, ax=ax_hist)
ax_box.set(yticks=[])
sns.despine(ax=ax_hist)
sns.despine(ax=ax_box, left=True)
## Distribution and box plot of the raw data with outliers
distWithBox('price')
from numpy import percentile
data=df['price']
q25, q75 = percentile(data, 25), percentile(data, 75)
iqr = q75 - q25
print('Percentiles: 25th=%.3f, 75th=%.3f, IQR=%.3f' % (q25, q75, iqr))
# calculate the outlier cutoff
cut_off = iqr * 1.5
lower, upper = q25 - cut_off, q75 + cut_off
# identify outliers
outliers = [x for x in data if x < lower or x > upper]
print('Identified outliers: %d' % len(outliers))
outliers_removed = [x for x in data if x >= lower and x <= upper]
print('Non-outlier observations: %d' % len(outliers_removed))
outliers=[]
data_1=df['price']
for item in data_1:
if item <lower or item>upper:
outliers.append(item)
x=df['price']
outlier_indices=list(data_1.index[(x<lower) | (x> upper)])
len(outlier_indices)
df.drop(axis=0,index=outlier_indices, inplace=True)
df.shape
## lets plot distribution with and box plot to see the change after we trim down the outliers
distWithBox('price')
df[cat_cols]= df[cat_cols].apply(LabelEncoder().fit_transform)
df_X=df.loc[:, df.columns != 'price']
df_y=df['price']
from sklearn.feature_selection import VarianceThreshold
print('%s Number of features before VarianceThreshhold'%len(df_X.columns))
selector=VarianceThreshold(threshold=(.8*(1-.8)))
FeaturesTransformed=selector.fit_transform(df_X)
## print the support and shape of the transformed features
print(selector.get_support())
data=df_X[df_X.columns[selector.get_support(indices=True)]]
cols=data.columns
df_reduced=pd.DataFrame(FeaturesTransformed, columns=cols)
df_reduced.shape
from sklearn.model_selection import train_test_split as split
X=df[['language_1', 'sales_rank', 'type', 'trackingSince', 'title',
'stats_outOfStockPercentageInInterval', 'stats_outOfStockPercentage90',
'stats_outOfStockPercentage30', 'stats_current', 'stats_avg180',
'stats_avg90', 'stats_avg30', 'stats_avg', 'stats_atIntervalStart',
'rootCategory', 'releaseDate', 'publisher', 'publicationDate',
'productGroup', 'packageWidth', 'packageWeight', 'packageQuantity',
'packageLength', 'packageHeight', 'numberOfPages', 'numberOfItems',
'listedSince', 'lastUpdate', 'lastRatingUpdate', 'lastPriceChange',
'isEligibleForSuperSaverShipping', 'categoryTree_4', 'categoryTree_3',
'categoryTree_2', 'binding', 'author', 'productType']]
Y=df['price']
# The standardization is only applied for independent variables i.e. the variables on which the output value is predicted (price)
from sklearn.preprocessing import StandardScaler
names = X.columns
scale = StandardScaler()
X_df = scale.fit_transform(X)
X_df = pd.DataFrame(X_df, columns=names)
Y=df['price']
X_df.head(5)
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_error
import math
def ModelScores(data,target):
X = data
Y=target
from sklearn.model_selection import train_test_split as split
X_train, X_test, Y_train, Y_test= split(X_df,Y,test_size=0.25, random_state=40)
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import RidgeCV
from sklearn.linear_model import LassoLars
from sklearn.linear_model import Lasso
from sklearn.linear_model import ElasticNet
from sklearn.svm import SVR
from sklearn.linear_model import BayesianRidge
from sklearn.linear_model import RANSACRegressor
models={'Gradient Boost': GradientBoostingRegressor(),
'Random Forest': RandomForestRegressor(),
'Decision Tree': DecisionTreeRegressor(),
'Linear Regression': LinearRegression(),
'MLP': MLPRegressor(),
'Ridge CV': RidgeCV(),
'LassoLars':LassoLars(),
'Lasso':Lasso(),
'Elastic Search': ElasticNet(),
'Bayesian Ridge':BayesianRidge(),
'Ransac':RANSACRegressor()
}
for name,model in models.items():
mdl=model
mdl.fit(X_train, Y_train)
prediction = mdl.predict(X_test)
print(name)
print("Accuracy Score", r2_score(Y_test, prediction))
mse3 = mean_squared_error(Y_test, prediction)
print("The root mean square value", math.sqrt(mse3))
ModelScores(X_df,Y)
# The GridSearchCV for Gradient Boosting Regressor model
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import GradientBoostingRegressor
GradientBoosting = GradientBoostingRegressor(random_state = 40)
alphas = [0.001, 0.01, 0.1, 0.5, 0.9]
sample_split = [2,3,4,5,6,7,8]
max_depth = [4,5,6,7,8,9]
learning_rate = [0.1, 0.3, 0.5, 0.7]
tuned_params = [{'alpha': alphas}, {'min_samples_split': sample_split}, {'max_depth': max_depth}, {'learning_rate':learning_rate}]
n_folds = 5
grid = GridSearchCV(
GradientBoosting, tuned_params, cv=n_folds
)
grid.fit(X_df, Y)
print(grid.best_estimator_)
# The GridSearchCV for the RandomForest Regressor model
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestRegressor
RandomForest = RandomForestRegressor(random_state = 40)
estimators = [10,50,100]
sample_split = [2,3,4,5,6,7,8]
sample_leaf = [1,2,3,4,5]
max_depth = [4,5,6,7,8,9]
tuned_params = [{'n_estimators': estimators}, {'min_samples_split': sample_split}, {'min_samples_leaf': sample_leaf},{'max_leaf_nodes': max_depth}]
n_folds = 5
grid = GridSearchCV(
RandomForest, tuned_params, cv=n_folds
)
grid.fit(X_df, Y)
print(grid.best_estimator_)
Y = df['price']
Y.head()
from sklearn.model_selection import train_test_split as split
X_train, X_test, Y_train, Y_test = split(X_df, Y, test_size = 0.25, random_state = 40)
from sklearn.ensemble import GradientBoostingRegressor
gbr = GradientBoostingRegressor(alpha=0.001, criterion='friedman_mse', init=None,
learning_rate=0.1, loss='ls', max_depth=3, max_features=None,
max_leaf_nodes=None, min_impurity_decrease=0.0,
min_impurity_split=None, min_samples_leaf=1,
min_samples_split=2, min_weight_fraction_leaf=0.0,
n_estimators=100, n_iter_no_change=None, presort='auto',
random_state=40, subsample=1.0, tol=0.0001,
validation_fraction=0.1, verbose=0, warm_start=False)
gbr.fit(X_train, Y_train)
pred = gbr.predict(X_test)
print("Accuracy Score", r2_score(Y_test, pred))
mse5 = mean_squared_error(Y_test, pred)
print("The root mean square value", math.sqrt(mse5))
from sklearn.ensemble import RandomForestRegressor
rfr = RandomForestRegressor(bootstrap=True, criterion='mse', max_depth=None,
max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=100, n_jobs=None,
oob_score=False, random_state=40, verbose=0, warm_start=False)
rfr.fit(X_train, Y_train)
prediction = rfr.predict(X_test)
print("Accuracy Score", r2_score(Y_test, prediction))
mse6 = mean_squared_error(Y_test, prediction)
print("The root mean square value", math.sqrt(mse6))
from sklearn.decomposition import PCA
pca = PCA()
X = pca.fit_transform(X_df)
exp_variance = pca.explained_variance_ratio_
print(exp_variance)
# Repeating the step of applying the different algorithms on the PCA applied data
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_error
import math
def StandardModelScores(data,target):
X = data
Y=target
from sklearn.model_selection import train_test_split as split
X_train, X_test, Y_train, Y_test= split(X,Y,test_size=0.25, random_state=40)
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import RidgeCV
from sklearn.linear_model import LassoLars
from sklearn.linear_model import Lasso
from sklearn.linear_model import ElasticNet
from sklearn.svm import SVR
from sklearn.linear_model import BayesianRidge
from sklearn.linear_model import RANSACRegressor
models={'Gradient Boost': GradientBoostingRegressor(),
'Random Forest': RandomForestRegressor(),
'Decision Tree': DecisionTreeRegressor(),
'Linear Regression': LinearRegression(),
'MLP': MLPRegressor(),
'Ridge CV': RidgeCV(),
'LassoLars':LassoLars(),
'Lasso':Lasso(),
'Elastic Search': ElasticNet(),
'Bayesian Ridge':BayesianRidge(),
'Ransac':RANSACRegressor()
}
for name,model in models.items():
mdl=model
mdl.fit(X_train, Y_train)
prediction = mdl.predict(X_test)
print(name)
print("Accuracy Score", r2_score(Y_test, prediction))
mse3 = mean_squared_error(Y_test, prediction)
print("The root mean square value", math.sqrt(mse3))
StandardModelScores(X,Y)
from sklearn.model_selection import train_test_split as split
X_train, X_test, Y_train, Y_test= split(X,Y,test_size=0.25, random_state=40)
from sklearn.ensemble import RandomForestRegressor
rfg = RandomForestRegressor(bootstrap=True, criterion='mse', max_depth=None,
max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=100, n_jobs=None,
oob_score=False, random_state=40, verbose=0, warm_start=False)
rfg.fit(X_train, Y_train)
prediction14 = rfg.predict(X_test)
print("Accuracy Score", r2_score(Y_test, prediction14))
mse2 = mean_squared_error(Y_test, prediction14)
print("The root mean square value", math.sqrt(mse2))
from sklearn.model_selection import train_test_split as split
X_train, X_test, Y_train, Y_test= split(X,Y,test_size=0.25, random_state=40)
from sklearn.ensemble import GradientBoostingRegressor
modelS = GradientBoostingRegressor(alpha=0.9, criterion='friedman_mse', init=None,
learning_rate=0.1, loss='ls', max_depth=6, max_features=None,
max_leaf_nodes=None, min_impurity_decrease=0.0,
min_impurity_split=None, min_samples_leaf=1,
min_samples_split=2, min_weight_fraction_leaf=0.0,
n_estimators=100, n_iter_no_change=None, presort='auto',
random_state=40, subsample=1.0, tol=0.0001,
validation_fraction=0.1, verbose=0, warm_start=False)
modelS.fit(X_train,Y_train)
predictionS1 = modelS.predict(X_test)
print("Accuracy Score", r2_score(Y_test, predictionS1))
mse = mean_squared_error(Y_test, predictionS1)
print("The root mean square value", math.sqrt(mse))
| 0.436862 | 0.837354 |
# Adversarial Training
```
import os
import torch
from torch import nn, optim
from tqdm import tqdm
import torchvision
import utils
white_1_model_dict_path = "./white_model_1.pt"
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(device)
```
## Load the previously trained white-box model
```
net1 = utils.white_model()
net1.to(device)
net1.load_state_dict(torch.load(white_1_model_dict_path))
batch_size = 256
train_iter, test_iter = utils.load_data_fashion_mnist(batch_size=batch_size)
print("原白盒模型测试集准确率: ", utils.evaluate_accuracy(test_iter, net1))
```
## Select 1000 correctly classified samples from the training set
```
net1.eval()
X_train, y_train = utils.select_right_sample(net1, train_iter, 1000)
assert (net1(X_train.to(device)).argmax(dim=1) == y_train.to(device)).float().sum().cpu().item() == 1000.0
```
## Run adversarial attacks on the 1000 selected training samples
```
attack_lr, max_attack_step = 0.01, 50
before_atk_y, after_atk_X, after_atk_y = [], [], []
for i in tqdm(range(1000)):
X_, y_, success = utils.white_box_attack(net1, X_train[i:i+1], y_train[i:i+1], attack_lr, max_attack_step)
if success:
before_atk_y.append(y_train[i])
after_atk_X.append(X_[0])
after_atk_y.append(y_[0])
print("学习率%.3f,最大步长%d的白盒攻击成功率: %.2f%%" % (attack_lr, max_attack_step, len(after_atk_y) / 10))
```
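The attack itself lives in the external `utils` module and is not shown in this notebook. As a rough illustration only, a single step of a gradient-based (FGSM-style) white-box attack on one sample might look like the sketch below; the function name and details are assumptions, not the actual `utils.white_box_attack` implementation.
```
import torch.nn.functional as F

def gradient_attack_step(net, X, y, lr):
    # hypothetical stand-in for one inner step of utils.white_box_attack
    X_adv = X.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(net(X_adv), y)   # loss w.r.t. the true label
    loss.backward()
    # move the input in the direction that increases the loss, keep pixels in [0, 1]
    X_adv = (X_adv + lr * X_adv.grad.sign()).clamp(0, 1).detach()
    # the attack succeeds if the prediction no longer matches the true label
    success = net(X_adv).argmax(dim=1).item() != y.item()
    return X_adv, success
```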
## Mix the adversarial examples into the training set
```
def new_train_iter(batch_size, img_list, label_list, root='~/Datasets/FashionMNIST'):
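    """Build a DataLoader over the FashionMNIST training set concatenated with the adversarial examples (img_list, label_list)."""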
transform = torchvision.transforms.ToTensor()
mnist_train = torchvision.datasets.FashionMNIST(root=root, train=True, download=True, transform=transform)
extra_dataset = torch.utils.data.TensorDataset(torch.stack(img_list), torch.stack(label_list))
dataset = torch.utils.data.ConcatDataset([mnist_train, extra_dataset])
print("添加对抗样本后的训练集大小:", len(dataset))
num_workers = 4
train_iter = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=True, num_workers=num_workers)
return train_iter
new_train_iter = new_train_iter(batch_size, after_atk_X, before_atk_y)
```
## Retrain a white-box model on the new training set
```
net2 = utils.white_model()
net2.to(device)
lr, num_epochs = 0.001, 10
optimizer = torch.optim.Adam(net2.parameters(), lr=lr)
utils.train(net2, new_train_iter, test_iter, batch_size, optimizer, device, num_epochs)
```
## White-box adversarial attack on the new white-box model
```
net2.eval()
X_test, y_test = utils.select_right_sample(net2, test_iter)
assert (net2(X_test.to(device)).argmax(dim=1) == y_test.to(device)).float().sum().cpu().item() == 1000.0
attack_lr, max_attack_step = 0.01, 50
before_atk_X, before_atk_y = [], []
after_atk_X, after_atk_y = [], []
for i in tqdm(range(1000)):
X_, y_, success = utils.white_box_attack(net2, X_test[i:i+1], y_test[i:i+1], attack_lr, max_attack_step)
if success:
before_atk_X.append(X_test[i])
before_atk_y.append(y_test[i])
after_atk_X.append(X_)
after_atk_y.append(y_)
print("学习率%.3f,最大步数%d的白盒攻击成功率: %.2f%%" % (attack_lr, max_attack_step, len(after_atk_y) / 10))
print("成功样本举例:\n")
print("攻击前:")
utils.show_fashion_mnist(before_atk_X[:10], utils.get_fashion_mnist_labels(before_atk_y[:10]))
print("攻击后:")
utils.show_fashion_mnist(after_atk_X[:10], utils.get_fashion_mnist_labels(after_atk_y[:10]))
```
## Black-box adversarial attack on the new white-box model
```
def apply_black_attack(net, right_X, right_y, attack_sigma = 0.01, max_attack_step = 100):
before_atk_X, before_atk_y = [], []
after_atk_X, after_atk_y = [], []
for i in tqdm(range(len(right_y))):
X_, y_, success = utils.black_box_attack(net, right_X[i:i+1], right_y[i:i+1], attack_sigma, max_attack_step)
if success:
before_atk_X.append(right_X[i])
before_atk_y.append(right_y[i])
after_atk_X.append(X_)
after_atk_y.append(y_)
print("标准差%.3f,最大步数%d的MCMC黑盒攻击成功率: %.2f%%" % (attack_sigma, max_attack_step, len(after_atk_y) / (len(right_y) / 100) ))
show_num = min(10, len(after_atk_y))
print("成功样本举例:\n")
print("攻击前:")
utils.show_fashion_mnist(before_atk_X[:show_num], utils.get_fashion_mnist_labels(before_atk_y[:show_num]))
print("攻击后:")
utils.show_fashion_mnist(after_atk_X[:show_num], utils.get_fashion_mnist_labels(after_atk_y[:show_num]))
net2.eval()
X_test, y_test = utils.select_right_sample(net2, test_iter)
assert (net2(X_test.to(device)).argmax(dim=1) == y_test.to(device)).float().sum().cpu().item() == 1000.0
apply_black_attack(net2, X_test, y_test, attack_sigma = 0.05, max_attack_step = 100)
```
<img src="../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
# _*Quantum Tic-Tac-Toe*_
The latest version of this notebook is available on https://github.com/qiskit/qiskit-tutorial.
***
### Contributors
[Maor Ben-Shahar](https://github.com/MA0R)
***
An example game of quantum Tic-Tac-Toe is provided below, with explanations of the game's workings following after. Despite the ability to superimpose moves, a winning strategy still exists for both players (meaning the game will be a draw if both implement it). See if you can work it out.
```
#Import the game!
import sys
sys.path.append('game_engines')
from q_tic_tac_toe import Board
#inputs are (X,Y,print_info).
#X,Y are the dimensions of the board. print_info boolean controls if to print instructions at game launch.
B = Board(3,3,True)
B.run()
```
When playing the game, the two players are asked in turn whether to make a classical move (1 cell) or a quantum move (at most 2 cells, for now). When making any move there are several scenarios that can happen; they are explained below. The terminology used:
- Each turn a "move" is made
- Each move consists of one or two "cells", the location(s) where the move is made. It is a superposition of classical moves.
Quantum moves are restricted to two cells only because each additional cell requires more qubits, which are slow to simulate.
## One move on an empty cell
This is the simplest move: a "classical" move. The game registers this move as a set of coordinates, and the player who made the move. No qubits are used here.
It is registered as such:
`Play in one or two cells?1
x index: 0
y index: 0`
And the board registers it as
`
[['O1','',''],
['','',''],
['','','']]
`
This move is *always* present at the end of the game.
## Two-cell moves in empty cells
This is a quantum move: the game stores a move that is in a superposition of being played at *two* cells. Ordered coordinates for the two cells to be occupied need to be provided. A row in the board with a superposition move would look like so
`[X1,X1,'']`
Two qubits were used in order to register this move. They are in a state $|10>+|01>$, if the first qubit is measured to be 1 then the board becomes `[X1,'','']` and vice versa. Why can we not use just one qubit to record this? We can, and the qubit would have to be put into a state $|0>+|1>$ but I did not implement this yet since this is easier (you will soon see why!).
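As an aside, the anti-correlated two-qubit state used for a quantum move can be prepared with a very short circuit. The sketch below uses the current Qiskit API rather than the version this notebook was written for, and omits normalisation factors just as the text does.
```
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)      # qubit 0 -> |0> + |1>
qc.cx(0, 1)  # entangle -> |00> + |11>
qc.x(1)      # flip qubit 1 -> |10> + |01>

print(Statevector.from_instruction(qc))  # only |01> and |10> carry amplitude
```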
```
B = Board(3,3)
B.run()
```
The game outcome is almost 50% in each cell, as we would expect. There is a redundant bit at the end of the bit code (to be removed soon!). Also note that the bit strings are in the reverse order of what we write here; this is because the quantum register in Qiskit is ordered $|q_n,...,q_0>$.
## One-cell move played in a maybe-occupied cell
It is possible that after the game is in state `[X1,X1,'']` a player would definitely want to make a move at position (0,0). This is perhaps most sensible when the board is otherwise full, since it is not a very good strategy. Such a move can be erased! Let us see how it is recorded. The first row of the board is now
`[X1 O2,X1,'']`
and the state of the game qubits is
$$ |100>+|011> $$
with the first qubit recording the success of the first move at cell (0,0), the second qubit the success of the first move at cell (0,1), and the third qubit the move by player O, which is anti-correlated with the move by X at cell (0,0).
Notice that this move can be completely erased!
```
B = Board(3,3)
B.add_move([[0,0],[0,1]],0) #Directly adding moves, ([indx1,indx2],player) 0=X, 1=O.
B.add_move([[0,0]],1)
B.compute_winner()
```
Once again note that the move could be erased completely! In fact this happens 50% of the time. Notice how the bit string output from QISKIT is translated into a board state.
## Two-cell moves in maybe-occupied cells
Instead of the above, player O might like to choose a better strategy. Perhaps O is interested in a quantum move on cells (0,0) and (0,2). In such a case the game records the two moves in the order they are entered.
- In order (0,0) then (0,2): The state of the game is first made into $ |100>+|011> $ as above, with the third qubit recording the success of player O getting position (0,0). Then the (0,2) position is registered, anti-correlated with succeeding in position (0,0): $|1001>+|0110>$. Now, unlike before, player O succeeds in registering a move regardless of the outcome.
- In order (0,2) then (0,0): Now playing at (0,2) is not dependent on anything, and so the game state is $(|10>+|01>)\otimes (|1>+|0>) = |101>+|100>+|011>+|010>$. And when the move in position (0,0) is added too, it is anti-correlated with BOTH the move in (0,2) AND the pre-existing move in (0,0). So the qubit state becomes $|1010>+|1000>+|0110>+|0101>$. Notice how now the move could be erased, so order does matter!
```
B = Board(3,3)
B.add_move([[0,0],[0,1]],0) #Directly adding moves, ([[y1,x1],[x2,y2]],player) with player=0->X, 1->O.
B.add_move([[0,0],[0,2]],1)
B.compute_winner()
```
### Exercise: what if player O chose coordinates (x=0,y=0) and (x=1,y=0) instead?
### Exercise: Can player X ensure that no matter what O plays, both (x=0,y=0) and (x=1,y=0) are occupied by X?
# Elastic net Regression
Elastic Net first emerged as a result of criticism of the lasso, whose variable selection can be too dependent on the data and thus unstable. The solution is to combine the penalties of ridge regression and the lasso to get the best of both worlds.
The method linearly combines the L1 and L2 penalties of the LASSO and Ridge Regression. Including the Elastic Net, these methods are especially powerful when applied to very large data where the number of variables might be in the thousands or even millions.
In mathematics, sparse and dense often refer to the number of zero vs. non-zero elements in an array (e.g. a vector or matrix). A sparse array is one that contains mostly zeros and few non-zero entries, and a dense array contains mostly non-zeros. LASSO and Ridge encourage sparse and dense models, respectively, but since it is rarely clear what the true model looks like, it is typical to apply both methods and determine the best model.
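Concretely, scikit-learn's `ElasticNet` minimises an objective of the form
$$\min_w \; \frac{1}{2n}\lVert y - Xw\rVert_2^2 + \alpha\,\rho\,\lVert w\rVert_1 + \frac{\alpha(1-\rho)}{2}\lVert w\rVert_2^2$$
where $n$ is the number of samples and $\rho$ is the `l1_ratio` used below: $\rho=1$ gives the pure LASSO penalty and $\rho=0$ the pure ridge penalty.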
Import libraries
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import train_test_split
import pandas as pd
from sklearn import metrics
from sklearn.metrics import r2_score
```
Import dataset
```
dataset = pd.read_csv('/home/webtunix/Desktop/Regression/random.csv')
print(len(dataset))
```
Split data into x and y
```
x = dataset.iloc[:,1:4].values
y = dataset.iloc[:,4].values
```
Split x and y into training and testing sets
```
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size = 0.3)
```
Apply Elastic net Regression
```
ENreg = ElasticNet(alpha=1, l1_ratio=0.5, normalize=False)
ENreg.fit(X_train,y_train)
pred = ENreg.predict(X_test)
print(pred)
```
Model performance (R² score)
```
print("Accuracy:",r2_score(y_test,pred))
```
Plotting a scatter graph of the actual and predicted values
```
colors = np.random.rand(72)
#plot target and predicted values
plt.scatter(colors,y_test, c='orange',label='target')
plt.scatter(colors,pred, c='green',label='predicted')
#plot x and y labels
plt.xlabel('x')
plt.ylabel('y')
#plot title
plt.title('Elastic net Regression')
plt.legend()
plt.show()
```
# Research Infinite Solutions LLP
by Research Infinite Solutions (https://www.ris-ai.com//)
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE
# Greater than the sum of its parts?
*How does the aggregate model compare to the best individual classification?*
In this notebook we will optimize both the aggregate model for a galaxy and its best individual classification. We'll then compare the residuals and mean squared errors of the two, and see how they stack up!
**Warning:** The fitting step here takes a long time (~15 minutes) to complete. Which sucks.
First, define some useful magic commands and import needed modules
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
import json
from copy import deepcopy
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import splprep, splev
import lib.galaxy_utilities as gu
import lib.python_model_renderer.parse_annotation as pa
import lib.python_model_renderer.render_galaxy as rg
from model_fitting import Model, ModelFitter
from sklearn.metrics import mean_squared_error
import warnings
from astropy.utils.exceptions import AstropyWarning
warnings.simplefilter('ignore', category=AstropyWarning)
```
Define the subject id of the galaxy we'll be working on
```
subject_id = 20902040
```
Load all the required metadata for plotting etc...
```
gal, angle = gu.get_galaxy_and_angle(subject_id)
pic_array, deprojected_image = gu.get_image(gal, subject_id, angle)
psf = gu.get_psf(subject_id)
diff_data = gu.get_image_data(subject_id)
pixel_mask = 1 - np.array(diff_data['mask'])[::-1]
galaxy_data = np.array(diff_data['imageData'])[::-1]
size_diff = diff_data['width'] / diff_data['imageWidth']
# arcseconds per pixel for zooniverse image
pix_size = pic_array.shape[0] / (gal['PETRO_THETA'].iloc[0] * 4)
# arcseconds per pixel for galaxy data
pix_size2 = galaxy_data.shape[0] / (gal['PETRO_THETA'].iloc[0] * 4)
imshow_kwargs = {
'cmap': 'gray_r', 'origin': 'lower',
'extent': (
# left of image in arcseconds from centre
-pic_array.shape[0]/2 / pix_size,
pic_array.shape[0]/2 / pix_size, # right...
-pic_array.shape[1]/2 / pix_size, # bottom...
pic_array.shape[1]/2 / pix_size # top...
),
}
plt.imshow(pic_array, **imshow_kwargs)
```
Grab the aggregate model
```
with open(
'../component-clustering/cluster-output/{}.json'.format(subject_id)
) as f:
aggregate_model = json.load(f)
agg_model = pa.parse_aggregate_model(aggregate_model, size_diff=size_diff)
```
And the best individual classification
```
with open('lib/best-classifications.json') as f:
all_best_cls = json.load(f)
best_cls = gu.classifications[
gu.classifications.classification_id == all_best_cls.get(str(subject_id))
].iloc[0]
best_model = pa.parse_annotation(json.loads(best_cls['annotations']), size_diff)
```
Define a helper function that will perform the model optimization
```
def fit_model(model, n=100):
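    """Fit a copy of the model (with the spiral arms removed) to the galaxy image, capping the optimiser at n iterations."""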
m = deepcopy(model)
m['spiral'] = []
mf = ModelFitter(m, galaxy_data, psf, pixel_mask)
new_model, res = mf.fit(options={'maxiter': n})
print('{}, {}, N steps: {}'.format(res['success'], str(res['message']), res['nit']))
return new_model
```
Perform the optimization, warning: this takes a while.
```
%time fitted_best_model = fit_model(best_model)
%time fitted_agg_model = fit_model(agg_model)
```
Define a helper function that will do the post-processing of the models for plotting
```
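# convolve a rendered model with the point spread function (PSF) before comparing it to the observed image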
conv = lambda arr: rg.convolve2d(arr, psf, mode='same', boundary='symm')
```
Calculate the rendered models and residuals to be plotted
```
fitted_best_rendered = rg.calculate_model(fitted_best_model, diff_data['width'])
fitted_agg_rendered = rg.calculate_model(fitted_agg_model, diff_data['width'])
fitted_best_comparison = rg.compare_to_galaxy(fitted_best_rendered, psf, galaxy_data, pixel_mask=pixel_mask, stretch=False)
fitted_agg_comparison = rg.compare_to_galaxy(fitted_agg_rendered, psf, galaxy_data, pixel_mask=pixel_mask, stretch=False)
```
Grab a value to use for limits on the residuals plot
```
l = max(fitted_best_comparison.max(), fitted_agg_comparison.max())
from sklearn.metrics import mean_squared_error
def make_suptitle(arr, pre=None):
s = mean_squared_error(0.8 * galaxy_data, arr)
plt.suptitle((pre + ' ' if pre else '') + 'Mean Squared Error: {:.8f}'.format(s))
fig, ax = plt.subplots(ncols=3, sharey=True, figsize=(15, 6))
ax[0].imshow(0.8 * galaxy_data, **imshow_kwargs, vmin=(0.8 * galaxy_data).min(), vmax=(0.8 * galaxy_data).max())
ax[1].imshow(conv(fitted_best_rendered), **imshow_kwargs, vmin=(0.8 * galaxy_data).min(), vmax=(0.8 * galaxy_data).max())
ax[2].imshow(
fitted_best_comparison,
**{**imshow_kwargs, 'cmap': 'RdGy'},
vmin=-l, vmax=l
)
make_suptitle(fitted_best_comparison, 'Best individual model:')
plt.tight_layout()
fig, ax = plt.subplots(ncols=3, sharey=True, figsize=(15, 6))
ax[0].imshow(0.8 * galaxy_data, **imshow_kwargs, vmin=(0.8 * galaxy_data).min(), vmax=(0.8 * galaxy_data).max())
ax[1].imshow(conv(fitted_agg_rendered), **imshow_kwargs, vmin=(0.8 * galaxy_data).min(), vmax=(0.8 * galaxy_data).max())
ax[2].imshow(
fitted_agg_comparison,
**{**imshow_kwargs, 'cmap': 'RdGy'},
vmin=-l, vmax=l
)
make_suptitle(fitted_agg_comparison, 'Aggregate model:')
plt.tight_layout();
fig, ax = plt.subplots(ncols=3, sharey=True, figsize=(15, 6))
ax[0].imshow(0.8 * galaxy_data, **imshow_kwargs, vmin=(0.8 * galaxy_data).min(), vmax=(0.8 * galaxy_data).max())
ax[1].imshow(conv(fitted_agg_rendered), **imshow_kwargs, vmin=(0.8 * galaxy_data).min(), vmax=(0.8 * galaxy_data).max())
ax[2].imshow(
fitted_agg_comparison,
**{**imshow_kwargs, 'cmap': 'RdGy'},
vmin=-l, vmax=l
)
make_suptitle(fitted_agg_comparison)
plt.tight_layout();
Model(fitted_best_model, galaxy_data, psf, pixel_mask)
Model(fitted_agg_model, galaxy_data, psf, pixel_mask)
```
# Molprobity analysis
```
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import seaborn as sns
from scipy import stats
def get_ref_molprob():
"""returns a DataFrame containing all the reference structure values"""
ref_df = pd.read_csv("../RAW_DATA/reference-molprob.csv", index_col = 0)
return ref_df
def make_molprob_df(run):
    # run = "analysis_{}".format(run)
    pdb_ids = ["3J95", "3J96", "5GRS", "5HNY", "5WCB", "6ACG", "6AHF",
               "6IRF", "6N1Q", "6N7G", "6N8Z", "6R7I", "6UBY", "6UC0"]
    df_list = [
        pd.read_csv("../RAW_DATA/{}/{}_molprob.csv".format(run, pdb_id),
                    index_col="structure")
        for pdb_id in pdb_ids
    ]
    return df_list
def combine_dfs(analyzer):
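    """For each run, collect the chosen Molprobity metric for every structure as a difference from the reference value (one column per run)."""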
final_df = pd.DataFrame()
ref_df = get_ref_molprob()
for run in runs:
df_list = make_molprob_df(run)
data_frame = pd.DataFrame()
for df in iter(df_list):
if df.shape[0] != 0:
structure = df.index[0][:4]
df = df.sort_values("HADDOCK-score").reset_index().drop("structure", axis=1)
df = df.sub(ref_df.loc[structure, analyzer])
data_frame = pd.concat([data_frame, df[analyzer]], ignore_index=True)
final_df = pd.concat([final_df, data_frame], axis=1, ignore_index=True)
return final_df
runs =[
"SA", "SA_CA",
"SA_SR","SA_CM", "SA_CTRD", "MD",
"CG", "CG_MD"
]
rota_df = combine_dfs("rotameroutliers")
rama_df = combine_dfs("ramaoutliers")
mol_df = combine_dfs("molprobety-score")
fig, ax = plt.subplots(nrows=3, figsize=(15,21))
sns.set(font_scale=1.5, style="whitegrid")
# fill the plots
rama_plot = sns.boxplot(data=rama_df, ax=ax[0], palette="Blues")
rota_plot = sns.boxplot(data = rota_df, ax=ax[1], palette="Blues")
mol_plot = sns.boxplot(data=mol_df, ax=ax[2], palette="Blues")
rama_plot.set_xticks([])
rota_plot.set_xticks([])
mol_plot.set_xticklabels(runs, fontsize=15)
rama_plot.set_ylabel("Δ Ramachandran outliers (%)", fontsize=20)
rota_plot.set_ylabel("Δ Rotamer outliers (%)", fontsize=20)
mol_plot.set_ylabel("Δ Molprobity-score", fontsize=20)
mol_plot.set_xlabel("Protocol", fontsize=20)
rama_plot.text(1.9,5.8,"Difference ramachandran outliers", fontsize=25, weight="semibold")
rama_plot.text(-1.13,5.8,"A", fontsize=25, weight="semibold")
rota_plot.text(2.1,21.3,"Difference rotamer outliers", fontsize=25, weight="semibold")
rota_plot.text(-1.13,21.3,"B", fontsize=25, weight="semibold")
mol_plot.text(2.1,1.5,"Difference molprobity score", fontsize=25, weight="semibold")
mol_plot.text(-1.13,1.5,"C", fontsize=25, weight="semibold")
fig.align_ylabels(ax[:])
plt.tight_layout()
plt.savefig("supplemental_figure_2.pdf", dpi=300, bbox_inches='tight')
mol_df.describe()
```
# Introduction
- Edited from nb40
- Generate features with reference to `distance-is-all-you-need-lb-1-481.ipynb`
# Import everything I need :)
```
import time
import multiprocessing
import glob
import gc
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
from plotly.offline import init_notebook_mode, iplot
import plotly.graph_objs as go
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import KFold
from sklearn.metrics import mean_absolute_error
import lightgbm as lgb
from fastprogress import progress_bar
```
# Preparation
```
nb = 44
isSmallSet = False
length = 2000
pd.set_option('display.max_columns', 100)
# use atomic numbers to recode atomic names
ATOMIC_NUMBERS = {
'H': 1,
'C': 6,
'N': 7,
'O': 8,
'F': 9
}
file_path = '../input/champs-scalar-coupling/'
glob.glob(file_path + '*')
# train
path = file_path + 'train.csv'
if isSmallSet:
train = pd.read_csv(path) [:length]
else:
train = pd.read_csv(path)
# test
path = file_path + 'test.csv'
if isSmallSet:
test = pd.read_csv(path)[:length]
else:
test = pd.read_csv(path)
# structure
path = file_path + 'structures.csv'
structures = pd.read_csv(path)
if isSmallSet:
print('using SmallSet !!')
print('-------------------')
print(f'There are {train.shape[0]} rows in train data.')
print(f'There are {test.shape[0]} rows in test data.')
print(f"There are {train['molecule_name'].nunique()} distinct molecules in train data.")
print(f"There are {test['molecule_name'].nunique()} distinct molecules in test data.")
print(f"There are {train['atom_index_0'].nunique()} unique atoms.")
print(f"There are {train['type'].nunique()} unique types.")
```
---
## myFunc
**metrics**
```
def kaggle_metric(df, preds):
df["prediction"] = preds
maes = []
for t in df.type.unique():
y_true = df[df.type==t].scalar_coupling_constant.values
y_pred = df[df.type==t].prediction.values
mae = np.log(mean_absolute_error(y_true, y_pred))
maes.append(mae)
return np.mean(maes)
```
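For reference (my own restatement, not part of the original notebook), `kaggle_metric` above implements the competition's group mean log MAE: the MAE is computed separately for each coupling type $t$, and the logarithms of the per-type MAEs are averaged:

$$\text{score} = \frac{1}{|T|}\sum_{t \in T}\log\left(\frac{1}{n_t}\sum_{i:\,\text{type}_i = t}\bigl|y_i - \hat{y}_i\bigr|\right)$$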
---
**memory**
```
def reduce_mem_usage(df, verbose=True):
numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
start_mem = df.memory_usage().sum() / 1024**2
for col in df.columns:
col_type = df[col].dtypes
if col_type in numerics:
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == 'int':
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
c_prec = df[col].apply(lambda x: np.finfo(x).precision).max()
if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max and c_prec == np.finfo(np.float16).precision:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max and c_prec == np.finfo(np.float32).precision:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
end_mem = df.memory_usage().sum() / 1024**2
if verbose: print('Mem. usage decreased to {:5.2f} Mb ({:.1f}% reduction)'.format(end_mem, 100 * (start_mem - end_mem) / start_mem))
return df
```
# Feature Engineering
Build Distance Dataset
```
def build_type_dataframes(base, structures, coupling_type):
base = base[base['type'] == coupling_type].drop('type', axis=1).copy()
base = base.reset_index()
base['id'] = base['id'].astype('int32')
structures = structures[structures['molecule_name'].isin(base['molecule_name'])]
return base, structures
# a,b = build_type_dataframes(train, structures, '1JHN')
def add_coordinates(base, structures, index):
df = pd.merge(base, structures, how='inner',
left_on=['molecule_name', f'atom_index_{index}'],
right_on=['molecule_name', 'atom_index']).drop(['atom_index'], axis=1)
df = df.rename(columns={
'atom': f'atom_{index}',
'x': f'x_{index}',
'y': f'y_{index}',
'z': f'z_{index}'
})
return df
def add_atoms(base, atoms):
df = pd.merge(base, atoms, how='inner',
on=['molecule_name', 'atom_index_0', 'atom_index_1'])
return df
def merge_all_atoms(base, structures):
df = pd.merge(base, structures, how='left',
left_on=['molecule_name'],
right_on=['molecule_name'])
df = df[(df.atom_index_0 != df.atom_index) & (df.atom_index_1 != df.atom_index)]
return df
def add_center(df):
df['x_c'] = ((df['x_1'] + df['x_0']) * np.float32(0.5))
df['y_c'] = ((df['y_1'] + df['y_0']) * np.float32(0.5))
df['z_c'] = ((df['z_1'] + df['z_0']) * np.float32(0.5))
def add_distance_to_center(df):
df['d_c'] = ((
(df['x_c'] - df['x'])**np.float32(2) +
(df['y_c'] - df['y'])**np.float32(2) +
(df['z_c'] - df['z'])**np.float32(2)
)**np.float32(0.5))
def add_distance_between(df, suffix1, suffix2):
df[f'd_{suffix1}_{suffix2}'] = ((
(df[f'x_{suffix1}'] - df[f'x_{suffix2}'])**np.float32(2) +
(df[f'y_{suffix1}'] - df[f'y_{suffix2}'])**np.float32(2) +
(df[f'z_{suffix1}'] - df[f'z_{suffix2}'])**np.float32(2)
)**np.float32(0.5))
def add_distances(df):
n_atoms = 1 + max([int(c.split('_')[1]) for c in df.columns if c.startswith('x_')])
for i in range(1, n_atoms):
for vi in range(min(4, i)):
add_distance_between(df, i, vi)
def add_n_atoms(base, structures):
dfs = structures['molecule_name'].value_counts().rename('n_atoms').to_frame()
return pd.merge(base, dfs, left_on='molecule_name', right_index=True)
def build_couple_dataframe(some_csv, structures_csv, coupling_type, n_atoms=10):
base, structures = build_type_dataframes(some_csv, structures_csv, coupling_type)
base = add_coordinates(base, structures, 0)
base = add_coordinates(base, structures, 1)
base = base.drop(['atom_0', 'atom_1'], axis=1)
atoms = base.drop('id', axis=1).copy()
if 'scalar_coupling_constant' in some_csv:
atoms = atoms.drop(['scalar_coupling_constant'], axis=1)
add_center(atoms)
atoms = atoms.drop(['x_0', 'y_0', 'z_0', 'x_1', 'y_1', 'z_1'], axis=1)
atoms = merge_all_atoms(atoms, structures)
add_distance_to_center(atoms)
atoms = atoms.drop(['x_c', 'y_c', 'z_c', 'atom_index'], axis=1)
atoms.sort_values(['molecule_name', 'atom_index_0', 'atom_index_1', 'd_c'], inplace=True)
atom_groups = atoms.groupby(['molecule_name', 'atom_index_0', 'atom_index_1'])
atoms['num'] = atom_groups.cumcount() + 2
atoms = atoms.drop(['d_c'], axis=1)
atoms = atoms[atoms['num'] < n_atoms]
atoms = atoms.set_index(['molecule_name', 'atom_index_0', 'atom_index_1', 'num']).unstack()
atoms.columns = [f'{col[0]}_{col[1]}' for col in atoms.columns]
atoms = atoms.reset_index()
# # downcast back to int8
for col in atoms.columns:
if col.startswith('atom_'):
atoms[col] = atoms[col].fillna(0).astype('int8')
# atoms['molecule_name'] = atoms['molecule_name'].astype('int32')
full = add_atoms(base, atoms)
add_distances(full)
full.sort_values('id', inplace=True)
return full
def take_n_atoms(df, n_atoms, four_start=4):
labels = ['id', 'molecule_name', 'atom_index_1', 'atom_index_0']
for i in range(2, n_atoms):
label = f'atom_{i}'
labels.append(label)
for i in range(n_atoms):
num = min(i, 4) if i < four_start else 4
for j in range(num):
labels.append(f'd_{i}_{j}')
if 'scalar_coupling_constant' in df:
labels.append('scalar_coupling_constant')
return df[labels]
atoms = structures['atom'].values
types_train = train['type'].values
types_test = test['type'].values
structures['atom'] = structures['atom'].replace(ATOMIC_NUMBERS).astype('int8')
fulls_train = []
fulls_test = []
for type_ in progress_bar(train['type'].unique()):
full_train = build_couple_dataframe(train, structures, type_, n_atoms=10)
full_test = build_couple_dataframe(test, structures, type_, n_atoms=10)
full_train = take_n_atoms(full_train, 10)
full_test = take_n_atoms(full_test, 10)
fulls_train.append(full_train)
fulls_test.append(full_test)
structures['atom'] = atoms
train = pd.concat(fulls_train).sort_values(by=['id']) #, axis=0)
test = pd.concat(fulls_test).sort_values(by=['id']) #, axis=0)
train['type'] = types_train
test['type'] = types_test
train = train.fillna(0)
test = test.fillna(0)
```
<br>
<br>
basic
```
def map_atom_info(df_1,df_2, atom_idx):
df = pd.merge(df_1, df_2, how = 'left',
left_on = ['molecule_name', f'atom_index_{atom_idx}'],
right_on = ['molecule_name', 'atom_index'])
df = df.drop('atom_index', axis=1)
return df
for atom_idx in [0,1]:
train = map_atom_info(train, structures, atom_idx)
test = map_atom_info(test, structures, atom_idx)
train = train.rename(columns={'atom': f'atom_{atom_idx}',
'x': f'x_{atom_idx}',
'y': f'y_{atom_idx}',
'z': f'z_{atom_idx}'})
test = test.rename(columns={'atom': f'atom_{atom_idx}',
'x': f'x_{atom_idx}',
'y': f'y_{atom_idx}',
'z': f'z_{atom_idx}'})
train.head()
```
<br>
<br>
type0
```
def create_type0(df):
df['type_0'] = df['type'].apply(lambda x : x[0])
return df
# train['type_0'] = train['type'].apply(lambda x: x[0])
# test['type_0'] = test['type'].apply(lambda x: x[0])
```
<br>
<br>
distances
```
def distances(df):
df_p_0 = df[['x_0', 'y_0', 'z_0']].values
df_p_1 = df[['x_1', 'y_1', 'z_1']].values
df['dist'] = np.linalg.norm(df_p_0 - df_p_1, axis=1)
df['dist_x'] = (df['x_0'] - df['x_1']) ** 2
df['dist_y'] = (df['y_0'] - df['y_1']) ** 2
df['dist_z'] = (df['z_0'] - df['z_1']) ** 2
return df
# train = distances(train)
# test = distances(test)
%%time
train = create_type0(train)
test = create_type0(test)
train = distances(train)
test = distances(test)
```
---
LabelEncode
- `atom_1` = {H, C, N}
- `type_0` = {1, 2, 3}
- `type` = {2JHC, ...}
```
for f in ['atom_1', 'type_0', 'type']:
if f in train.columns:
lbl = LabelEncoder()
lbl.fit(list(train[f].values) + list(test[f].values))
train[f] = lbl.transform(list(train[f].values))
test[f] = lbl.transform(list(test[f].values))
```
---
**show features**
```
print(train.columns)
```
# create train, test data
```
train = reduce_mem_usage(train)
test = reduce_mem_usage(test)
y = train['scalar_coupling_constant']
train = train.drop(['id', 'molecule_name', 'atom_0', 'scalar_coupling_constant'], axis=1)
test = test.drop(['id', 'molecule_name', 'atom_0'], axis=1)
X = train.copy()
X_test = test.copy()
assert len(X.columns) == len(X_test.columns), f'X and X_test have different numbers of columns. X: {len(X.columns)}, X_test: {len(X_test.columns)}'
del train, test, full_train, full_test
gc.collect()
```
# Training model
**params**
```
# Configuration
TARGET = 'scalar_coupling_constant'
CAT_FEATS = ['type']
N_ESTIMATORS = 1500
VERBOSE = 300
EARLY_STOPPING_ROUNDS = 200
RANDOM_STATE = 529
METRIC = mean_absolute_error
N_JOBS = multiprocessing.cpu_count() -4
# lightgbm params
lgb_params = {'num_leaves': 128,
'min_child_samples': 79,
'objective': 'regression',
'max_depth': 9,
'learning_rate': 0.2,
"boosting_type": "gbdt",
"subsample_freq": 1,
"subsample": 0.9,
"bagging_seed": 11,
"metric": 'mae',
"verbosity": -1,
'reg_alpha': 0.1,
'reg_lambda': 0.3,
'colsample_bytree': 1.0
}
n_folds = 4
folds = KFold(n_splits=n_folds, shuffle=True)
# init
def train_lgb(X, X_test, y, lgb_params, folds,
verbose, early_stopping_rounds, n_estimators):
result_dict = {}
oof = np.zeros(len(X))
prediction = np.zeros(len(X_test))
scores = []
models = []
feature_importance = pd.DataFrame()
for fold_n, (train_idx, valid_idx) in enumerate(folds.split(X)):
print('------------------')
print(f'- fold{fold_n + 1}' )
print(f'Fold {fold_n + 1} started at {time.ctime()}')
X_train, X_valid = X.iloc[train_idx], X.iloc[valid_idx]
y_train, y_valid = y[train_idx], y[valid_idx]
# from IPython.core.debugger import Pdb; Pdb().set_trace()
# Train the model
model = lgb.LGBMRegressor(**lgb_params, n_estimators=n_estimators, n_jobs=N_JOBS)
model.fit(X_train, y_train,
eval_set=[(X_train, y_train), (X_valid, y_valid)],
verbose=verbose,
early_stopping_rounds=early_stopping_rounds,
categorical_feature=CAT_FEATS)
# predict
y_valid_pred = model.predict(X_valid, num_iteration=model.best_iteration_)
y_test_pred = model.predict(X_test)
oof[valid_idx] = y_valid_pred.reshape(-1,) # oof: out of folds
scores.append(mean_absolute_error(y_valid, y_valid_pred))
prediction += y_test_pred
# feature_importance
fold_importance = pd.DataFrame()
fold_importance['feature'] = X.columns
fold_importance['importance'] = model.feature_importances_
fold_importance['fold'] = fold_n + 1
feature_importance = pd.concat([feature_importance, fold_importance], axis=0)
# result
prediction /= folds.n_splits
feature_importance["importance"] /= folds.n_splits
result_dict['oof'] = oof
result_dict['prediction'] = prediction
result_dict['scores'] = scores
result_dict['feature_importance'] = feature_importance
print('------------------')
print('====== finish ======')
print('score list:', scores)
X['scalar_coupling_constant'] = y
cv_score = kaggle_metric(X, oof)
# X = X.drop(['scalar_coupling_constant', 'prediction'], axis=1)
print('CV mean score(group log mae): {0:.4f}'.format(cv_score))
return result_dict, cv_score
%%time
# train
result_dict, cv_score = train_lgb(X=X, X_test=X_test, y=y, lgb_params=lgb_params, folds=folds,
verbose=VERBOSE, early_stopping_rounds=EARLY_STOPPING_ROUNDS,
n_estimators=N_ESTIMATORS)
X = X.drop(['scalar_coupling_constant', 'prediction'], axis=1)
```
## plot feature importance
```
# top n features
top_n = 50
feature_importance = result_dict['feature_importance']
cols = feature_importance[["feature", "importance"]].groupby("feature").mean().sort_values(
by="importance", ascending=False)[:top_n].index
best_features = feature_importance.loc[feature_importance.feature.isin(cols)]
plt.figure(figsize=(16, 12));
sns.barplot(x="importance", y="feature", data=best_features.sort_values(by="importance", ascending=False));
plt.title('LGB Features (avg over folds)');
```
# Save
**submission**
```
path_submission = '../output/' + 'nb{}_submission_lgb_{}.csv'.format(nb, cv_score)
# path_submission = 'nb{}_submission_lgb_{}.csv'.format(nb, cv_score)
print(f'save path: {path_submission}')
submission = pd.read_csv('../input/champs-scalar-coupling/sample_submission.csv')
# submission = pd.read_csv('./input/champs-scalar-coupling/sample_submission.csv')[:100]
submission['scalar_coupling_constant'] = result_dict['prediction']
submission.to_csv(path_submission, index=False) if not isSmallSet else print('using small set')
```
---
**result**
```
path_oof = '../output/' + 'nb{}_oof_lgb_{}.csv'.format(nb, cv_score)
print(f'save path: {path_oof}')
oof = pd.DataFrame(result_dict['oof'])
oof.to_csv(path_oof, index=False) if not isSmallSet else print('using small set')
```
# analysis
```
plot_data = pd.DataFrame(y)
plot_data.index.name = 'id'
plot_data['yhat'] = result_dict['oof']
plot_data['type'] = lbl.inverse_transform(X['type'])
def plot_oof_preds(ctype, llim, ulim):
plt.figure(figsize=(6,6))
sns.scatterplot(x='scalar_coupling_constant',y='yhat',
data=plot_data.loc[plot_data['type']==ctype,
['scalar_coupling_constant', 'yhat']]);
plt.xlim((llim, ulim))
plt.ylim((llim, ulim))
plt.plot([llim, ulim], [llim, ulim])
plt.xlabel('scalar_coupling_constant')
plt.ylabel('predicted')
plt.title(f'{ctype}', fontsize=18)
plt.show()
plot_oof_preds('1JHC', 20, 250)
plot_oof_preds('1JHN', 10, 100)
plot_oof_preds('2JHC', -40, 50)
plot_oof_preds('2JHH', -50, 30)
plot_oof_preds('2JHN', -25, 25)
plot_oof_preds('3JHC', -40, 90)
plot_oof_preds('3JHH', -20, 20)
plot_oof_preds('3JHN', -10, 15)
```
# Forecasting II: state space models
This tutorial covers state space modeling with the [pyro.contrib.forecast](http://docs.pyro.ai/en/latest/contrib.forecast.html) module. This tutorial assumes the reader is already familiar with [SVI](http://pyro.ai/examples/svi_part_ii.html), [tensor shapes](http://pyro.ai/examples/tensor_shapes.html), and [univariate forecasting](http://pyro.ai/examples/forecasting_i.html).
See also:
- [Forecasting I: univariate, heavy tailed](http://pyro.ai/examples/forecasting_i.html)
- [Forecasting III: hierarchical models](http://pyro.ai/examples/forecasting_iii.html)
#### Summary
- Pyro's [ForecastingModel](http://docs.pyro.ai/en/latest/contrib.forecast.html#pyro.contrib.forecast.forecaster.ForecastingModel) can combine regression, variational inference, and exact inference.
- To model a linear-Gaussian dynamical system, use a [GaussianHMM](http://docs.pyro.ai/en/latest/distributions.html#gaussianhmm) `noise_dist`.
- To model a heavy-tailed linear dynamical system, use [LinearHMM](http://docs.pyro.ai/en/latest/distributions.html#linearhmm) with heavy-tailed distributions.
- To enable inference with [LinearHMM](http://docs.pyro.ai/en/latest/distributions.html#linearhmm), use a [LinearHMMReparam](http://docs.pyro.ai/en/latest/infer.reparam.html#pyro.infer.reparam.hmm.LinearHMMReparam) reparameterizer.
```
import math
import torch
import pyro
import pyro.distributions as dist
import pyro.poutine as poutine
from pyro.contrib.examples.bart import load_bart_od
from pyro.contrib.forecast import ForecastingModel, Forecaster, eval_crps
from pyro.infer.reparam import LinearHMMReparam, StableReparam, SymmetricStableReparam
from pyro.ops.tensor_utils import periodic_repeat
from pyro.ops.stats import quantile
import matplotlib.pyplot as plt
%matplotlib inline
assert pyro.__version__.startswith('1.8.1')
pyro.set_rng_seed(20200305)
```
## Intro to state space models
In the [univariate tutorial](http://pyro.ai/examples/forecasting_i.html) we saw how to model time series as regression plus a local level model, using variational inference. This tutorial covers a different way to model time series: state space models and exact inference. Pyro's forecasting module allows these two paradigms to be combined, for example modeling seasonality with regression, including a slow global trend, and using a state-space model for short-term local trend.
Pyro implements a few state space models, but the most important are the [GaussianHMM](http://docs.pyro.ai/en/latest/distributions.html#gaussianhmm) distribution and its heavy-tailed generalization the [LinearHMM](http://docs.pyro.ai/en/latest/distributions.html#linearhmm) distribution. Both of these model a linear dynamical system with hidden state; both are multivariate, and both allow learning of all process parameters. On top of these the [pyro.contrib.timeseries](http://docs.pyro.ai/en/latest/contrib.timeseries.html) module implements a variety of multivariate Gaussian Process models that compile down to `GaussianHMM`s.
Pyro's inference for `GaussianHMM` uses parallel-scan Kalman filtering, allowing fast analysis of very long time series. Similarly, Pyro's inference for `LinearHMM` uses entirely parallel auxiliary variable methods to reduce to a `GaussianHMM`, which then permits parallel-scan inference. Thus both methods allow parallelization of long time series analysis, even for a single univariate time series.
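As a minimal illustration of the `GaussianHMM` interface used below (a toy sketch of my own, with arbitrary parameter values rather than anything from this tutorial), we can build a one-dimensional linear-Gaussian HMM, draw a series, and score it exactly:

```
import torch
import pyro.distributions as dist

hidden_dim, obs_dim, duration = 1, 1, 1000
init_dist = dist.Normal(0., 10.).expand([hidden_dim]).to_event(1)
trans_matrix = 0.9 * torch.eye(hidden_dim)            # mean-reverting hidden dynamics
trans_dist = dist.Normal(0., 0.1).expand([hidden_dim]).to_event(1)
obs_matrix = torch.eye(hidden_dim, obs_dim)
obs_dist = dist.Normal(0., 0.5).expand([obs_dim]).to_event(1)

hmm = dist.GaussianHMM(init_dist, trans_matrix, trans_dist,
                       obs_matrix, obs_dist, duration=duration)
series = hmm.sample()               # shape (duration, obs_dim)
print(hmm.log_prob(series))         # exact marginal log-likelihood, hidden state integrated out
```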
Let's again look at the [BART train](https://www.bart.gov/about/reports/ridership) ridership dataset:
```
dataset = load_bart_od()
print(dataset.keys())
print(dataset["counts"].shape)
print(" ".join(dataset["stations"]))
data = dataset["counts"].sum([-1, -2]).unsqueeze(-1).log1p()
print(data.shape)
plt.figure(figsize=(9, 3))
plt.plot(data, 'b.', alpha=0.1, markeredgewidth=0)
plt.title("Total hourly ridership over nine years")
plt.ylabel("log(# rides)")
plt.xlabel("Hour after 2011-01-01")
plt.xlim(0, len(data));
plt.figure(figsize=(9, 3))
plt.plot(data)
plt.title("Total hourly ridership over one month")
plt.ylabel("log(# rides)")
plt.xlabel("Hour after 2011-01-01")
plt.xlim(len(data) - 24 * 30, len(data));
```
## GaussianHMM
Let's start by modeling hourly seasonality together with a local linear trend, where we model seasonality via regression and local linear trend via a [GaussianHMM](http://docs.pyro.ai/en/latest/distributions.html#gaussianhmm). This noise model includes a mean-reverting hidden state (an [Ornstein-Uhlenbeck process](https://en.wikipedia.org/wiki/Ornstein%E2%80%93Uhlenbeck_process)) plus Gaussian observation noise.
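As a side note (my addition, not from the original tutorial): exactly discretizing an Ornstein-Uhlenbeck process $dz = -\tfrac{1}{\tau} z\,dt + \sigma\,dW$ at unit time steps gives the recursion $z_{t+1} = e^{-1/\tau} z_t + \varepsilon_t$, which is why the model below uses a $1 \times 1$ transition matrix equal to `exp(-1 / timescale)`.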
```
T0 = 0 # beginning
T2 = data.size(-2) # end
T1 = T2 - 24 * 7 * 2 # train/test split
means = data[:T1 // (24 * 7) * 24 * 7].reshape(-1, 24 * 7).mean(0)
class Model1(ForecastingModel):
def model(self, zero_data, covariates):
duration = zero_data.size(-2)
# We'll hard-code the periodic part of this model, learning only the local model.
prediction = periodic_repeat(means, duration, dim=-1).unsqueeze(-1)
# On top of this mean prediction, we'll learn a linear dynamical system.
# This requires specifying five pieces of data, on which we will put structured priors.
init_dist = dist.Normal(0, 10).expand([1]).to_event(1)
timescale = pyro.sample("timescale", dist.LogNormal(math.log(24), 1))
# Note timescale is a scalar but we need a 1x1 transition matrix (hidden_dim=1),
# thus we unsqueeze twice using [..., None, None].
trans_matrix = torch.exp(-1 / timescale)[..., None, None]
trans_scale = pyro.sample("trans_scale", dist.LogNormal(-0.5 * math.log(24), 1))
trans_dist = dist.Normal(0, trans_scale.unsqueeze(-1)).to_event(1)
# Note the obs_matrix has shape hidden_dim x obs_dim = 1 x 1.
obs_matrix = torch.tensor([[1.]])
obs_scale = pyro.sample("obs_scale", dist.LogNormal(-2, 1))
obs_dist = dist.Normal(0, obs_scale.unsqueeze(-1)).to_event(1)
noise_dist = dist.GaussianHMM(
init_dist, trans_matrix, trans_dist, obs_matrix, obs_dist, duration=duration)
self.predict(noise_dist, prediction)
```
We can then train the model on many years of data. Note that because we are being variational about only time-global variables, and exactly integrating out time-local variables (via `GaussianHMM`), stochastic gradients are very low variance; this allows us to use a large learning rate and few steps.
```
%%time
pyro.set_rng_seed(1)
pyro.clear_param_store()
covariates = torch.zeros(len(data), 0) # empty
forecaster = Forecaster(Model1(), data[:T1], covariates[:T1], learning_rate=0.1, num_steps=400)
for name, value in forecaster.guide.median().items():
if value.numel() == 1:
print("{} = {:0.4g}".format(name, value.item()))
```
Plotting forecasts of the next two weeks of data, we see mostly reasonable forecasts, but an anomaly on Christmas when rides were overpredicted. This is to be expected, as we have not modeled yearly seasonality or holidays.
```
samples = forecaster(data[:T1], covariates, num_samples=100)
samples.clamp_(min=0) # apply domain knowledge: the samples must be positive
p10, p50, p90 = quantile(samples, (0.1, 0.5, 0.9)).squeeze(-1)
crps = eval_crps(samples, data[T1:])
print(samples.shape, p10.shape)
plt.figure(figsize=(9, 3))
plt.fill_between(torch.arange(T1, T2), p10, p90, color="red", alpha=0.3)
plt.plot(torch.arange(T1, T2), p50, 'r-', label='forecast')
plt.plot(torch.arange(T1 - 24 * 7, T2),
data[T1 - 24 * 7: T2], 'k-', label='truth')
plt.title("Total hourly ridership (CRPS = {:0.3g})".format(crps))
plt.ylabel("log(# rides)")
plt.xlabel("Hour after 2011-01-01")
plt.xlim(T1 - 24 * 7, T2)
plt.text(78732, 3.5, "Christmas", rotation=90, color="green")
plt.legend(loc="best");
```
Next let's change the model to use heteroskedastic observation noise, depending on the hour of week.
```
class Model2(ForecastingModel):
def model(self, zero_data, covariates):
duration = zero_data.size(-2)
prediction = periodic_repeat(means, duration, dim=-1).unsqueeze(-1)
init_dist = dist.Normal(0, 10).expand([1]).to_event(1)
timescale = pyro.sample("timescale", dist.LogNormal(math.log(24), 1))
trans_matrix = torch.exp(-1 / timescale)[..., None, None]
trans_scale = pyro.sample("trans_scale", dist.LogNormal(-0.5 * math.log(24), 1))
trans_dist = dist.Normal(0, trans_scale.unsqueeze(-1)).to_event(1)
obs_matrix = torch.tensor([[1.]])
# To model heteroskedastic observation noise, we'll sample obs_scale inside a plate,
# then repeat to full duration. This is the only change from Model1.
with pyro.plate("hour_of_week", 24 * 7, dim=-1):
obs_scale = pyro.sample("obs_scale", dist.LogNormal(-2, 1))
obs_scale = periodic_repeat(obs_scale, duration, dim=-1)
obs_dist = dist.Normal(0, obs_scale.unsqueeze(-1)).to_event(1)
noise_dist = dist.GaussianHMM(
init_dist, trans_matrix, trans_dist, obs_matrix, obs_dist, duration=duration)
self.predict(noise_dist, prediction)
%%time
pyro.set_rng_seed(1)
pyro.clear_param_store()
covariates = torch.zeros(len(data), 0) # empty
forecaster = Forecaster(Model2(), data[:T1], covariates[:T1], learning_rate=0.1, num_steps=400)
for name, value in forecaster.guide.median().items():
if value.numel() == 1:
print("{} = {:0.4g}".format(name, value.item()))
```
Note this gives us a much longer timescale and thereby more accurate short-term predictions:
```
samples = forecaster(data[:T1], covariates, num_samples=100)
samples.clamp_(min=0) # apply domain knowledge: the samples must be positive
p10, p50, p90 = quantile(samples, (0.1, 0.5, 0.9)).squeeze(-1)
crps = eval_crps(samples, data[T1:])
plt.figure(figsize=(9, 3))
plt.fill_between(torch.arange(T1, T2), p10, p90, color="red", alpha=0.3)
plt.plot(torch.arange(T1, T2), p50, 'r-', label='forecast')
plt.plot(torch.arange(T1 - 24 * 7, T2),
data[T1 - 24 * 7: T2], 'k-', label='truth')
plt.title("Total hourly ridership (CRPS = {:0.3g})".format(crps))
plt.ylabel("log(# rides)")
plt.xlabel("Hour after 2011-01-01")
plt.xlim(T1 - 24 * 7, T2)
plt.text(78732, 3.5, "Christmas", rotation=90, color="green")
plt.legend(loc="best");
plt.figure(figsize=(9, 3))
plt.fill_between(torch.arange(T1, T2), p10, p90, color="red", alpha=0.3)
plt.plot(torch.arange(T1, T2), p50, 'r-', label='forecast')
plt.plot(torch.arange(T1 - 24 * 7, T2),
data[T1 - 24 * 7: T2], 'k-', label='truth')
plt.title("Total hourly ridership (CRPS = {:0.3g})".format(crps))
plt.ylabel("log(# rides)")
plt.xlabel("Hour after 2011-01-01")
plt.xlim(T1 - 24 * 2, T1 + 24 * 4)
plt.legend(loc="best");
```
## Heavy-tailed modeling with LinearHMM
Next let's change our model to a linear-[Stable](http://docs.pyro.ai/en/latest/distributions.html#pyro.distributions.Stable) dynamical system, exhibiting learnable heavy tailed behavior in both the process noise and observation noise. As we've already seen in the [univariate tutorial](http://pyro.ai/examples/forecasting_i.html), this will require special handling of stable distributions by [poutine.reparam()](http://docs.pyro.ai/en/latest/poutine.html#pyro.poutine.handlers.reparam). For state space models, we combine [LinearHMMReparam](http://docs.pyro.ai/en/latest/infer.reparam.html#pyro.infer.reparam.hmm.LinearHMMReparam) with other reparameterizers like [StableReparam](http://docs.pyro.ai/en/latest/infer.reparam.html#pyro.infer.reparam.stable.StableReparam) and [SymmetricStableReparam](http://docs.pyro.ai/en/latest/infer.reparam.html#pyro.infer.reparam.stable.SymmetricStableReparam). All reparameterizers preserve behavior of the generative model, and only serve to enable inference via auxiliary variable methods.
```
class Model3(ForecastingModel):
def model(self, zero_data, covariates):
duration = zero_data.size(-2)
prediction = periodic_repeat(means, duration, dim=-1).unsqueeze(-1)
# First sample the Gaussian-like parameters as in previous models.
init_dist = dist.Normal(0, 10).expand([1]).to_event(1)
timescale = pyro.sample("timescale", dist.LogNormal(math.log(24), 1))
trans_matrix = torch.exp(-1 / timescale)[..., None, None]
trans_scale = pyro.sample("trans_scale", dist.LogNormal(-0.5 * math.log(24), 1))
obs_matrix = torch.tensor([[1.]])
with pyro.plate("hour_of_week", 24 * 7, dim=-1):
obs_scale = pyro.sample("obs_scale", dist.LogNormal(-2, 1))
obs_scale = periodic_repeat(obs_scale, duration, dim=-1)
# In addition to the Gaussian parameters, we will learn a global stability
# parameter to determine tail weights, and an observation skew parameter.
stability = pyro.sample("stability", dist.Uniform(1, 2).expand([1]).to_event(1))
skew = pyro.sample("skew", dist.Uniform(-1, 1).expand([1]).to_event(1))
# Next we construct stable distributions and a linear-stable HMM distribution.
trans_dist = dist.Stable(stability, 0, trans_scale.unsqueeze(-1)).to_event(1)
obs_dist = dist.Stable(stability, skew, obs_scale.unsqueeze(-1)).to_event(1)
noise_dist = dist.LinearHMM(
init_dist, trans_matrix, trans_dist, obs_matrix, obs_dist, duration=duration)
# Finally we use a reparameterizer to enable inference.
rep = LinearHMMReparam(None, # init_dist is already Gaussian.
SymmetricStableReparam(), # trans_dist is symmetric.
StableReparam()) # obs_dist is asymmetric.
with poutine.reparam(config={"residual": rep}):
self.predict(noise_dist, prediction)
```
Note that since this model introduces auxiliary variables that are learned by variational inference, gradients are higher variance and we need to train for longer.
```
%%time
pyro.set_rng_seed(1)
pyro.clear_param_store()
covariates = torch.zeros(len(data), 0) # empty
forecaster = Forecaster(Model3(), data[:T1], covariates[:T1], learning_rate=0.1)
for name, value in forecaster.guide.median().items():
if value.numel() == 1:
print("{} = {:0.4g}".format(name, value.item()))
samples = forecaster(data[:T1], covariates, num_samples=100)
samples.clamp_(min=0) # apply domain knowledge: the samples must be positive
p10, p50, p90 = quantile(samples, (0.1, 0.5, 0.9)).squeeze(-1)
crps = eval_crps(samples, data[T1:])
plt.figure(figsize=(9, 3))
plt.fill_between(torch.arange(T1, T2), p10, p90, color="red", alpha=0.3)
plt.plot(torch.arange(T1, T2), p50, 'r-', label='forecast')
plt.plot(torch.arange(T1 - 24 * 7, T2),
data[T1 - 24 * 7: T2], 'k-', label='truth')
plt.title("Total hourly ridership (CRPS = {:0.3g})".format(crps))
plt.ylabel("log(# rides)")
plt.xlabel("Hour after 2011-01-01")
plt.xlim(T1 - 24 * 7, T2)
plt.text(78732, 3.5, "Christmas", rotation=90, color="green")
plt.legend(loc="best");
plt.figure(figsize=(9, 3))
plt.fill_between(torch.arange(T1, T2), p10, p90, color="red", alpha=0.3)
plt.plot(torch.arange(T1, T2), p50, 'r-', label='forecast')
plt.plot(torch.arange(T1 - 24 * 7, T2),
data[T1 - 24 * 7: T2], 'k-', label='truth')
plt.title("Total hourly ridership (CRPS = {:0.3g})".format(crps))
plt.ylabel("log(# rides)")
plt.xlabel("Hour after 2011-01-01")
plt.xlim(T1 - 24 * 2, T1 + 24 * 4)
plt.legend(loc="best");
```
# Synsets, wordnet and Yelp reviews
Here we use the `en_core_web_sm` spacy language model. If you haven't already, install it by running `python -m spacy download en_core_web_sm` in a terminal.
Also, run `nltk.download('sentiwordnet')` if you have never run it.
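If you prefer, the NLTK resources can also be fetched from inside the notebook. A minimal sketch (the extra `wordnet` download is my assumption, since SentiWordNet is layered on top of the WordNet corpus):

```
import nltk
nltk.download('sentiwordnet')
nltk.download('wordnet')   # SentiWordNet lookups go through the WordNet corpus
```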
```
import sys
import json
import spacy
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import nltk
from tqdm import tqdm_notebook
from nltk.corpus import sentiwordnet as swn
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import classification_report, confusion_matrix
```
`ConfusionMatrixDisplay` requires `scikit-learn`'s version to be $>0.20$. You can check it by running `!conda list scikit-learn` in a cell below here. Otherwise, you need to update it by running `conda update scikit-learn` in a terminal. Be aware that if you also have `textacy` version $0.8$ installed in the same environment, then scikit-learn will not update.
**IF YOU DO NOT HAVE THIS MODULE AND YOU DON'T WANT TO INSTALL IT, THEN DO NOT RUN THE CELL BELOW!** You'll just see the confusion matrix in textual format instead of graphical form.
```
from sklearn.metrics import ConfusionMatrixDisplay
```
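A quick in-notebook alternative to `conda list` for checking the installed version (a small sketch of mine):

```
import sklearn
print(sklearn.__version__)   # needs to be > 0.20 for ConfusionMatrixDisplay
```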
## Pre-processing class
```
class SpacyTokenizer(object):
def __init__(self, model='en_core_web_sm', lemma=True, pos_filter=None):
self.pos = pos_filter
self.lemma = lemma
self.nlp = spacy.load(model)
def tokenize(self, text):
tokens = []
for token in self.nlp(text):
if self.lemma:
tk = token.lemma_
else:
tk = token.text
if self.pos is None or token.pos_ in self.pos:
tokens.append((tk, token.pos_))
else:
pass
return tokens
```
## Scoring class
```
class SentiWn(object):
def __init__(self, strategy='sum', use_pos=False):
self.strategy = strategy
self.pos = use_pos
self.pos_map = {
'NOUN': 'n',
'ADJ': 'a',
'VERB': 'v',
'ADV': 'r'
}
self.strategy_map = {
'sum': self._simple_sum,
'weighted_sum': self.weighted_sum,
'average_score': self.average_score,
'weighted_average': self.weighted_average}
# Simplest solution.
# Double-sum: we sum the score for each synset for each word
def _simple_sum(self, text):
s = np.zeros(3)
for token, pos in text:
if self.pos:
try:
synsets = list(swn.senti_synsets(token, self.pos_map[pos]))
except KeyError:
pass
else:
synsets = list(swn.senti_synsets(token))
for syn in synsets:
p, n, o = syn.pos_score(), syn.neg_score(), syn.obj_score()
s[0] += p
s[1] += n
s[2] += o
return s
# We weight the scores considering how many synsets each word has:
# the more syns a word has, the lower its importance.
def weighted_sum(self, text):
s = np.zeros(3)
all_s = []
if self.pos:
all_s = [list(swn.senti_synsets(token, self.pos_map[pos])) for token, pos in text]
else:
all_s = [list(swn.senti_synsets(token)) for token, pos in text]
for i, (token, pos) in enumerate(text):
try:
synsets = all_s[i]
sidf = np.log(max([len(l) for l in all_s]) / len(synsets))
for syn in synsets:
p, n, o = syn.pos_score(), syn.neg_score(), syn.obj_score()
s[0] += p * sidf
s[1] += n * sidf
s[2] += o * sidf # this is neutral
except ZeroDivisionError:
pass
return s
# We just average each score, so that we have an averaged positive, average negative
# and average neutral
def average_score(self, text):
counter = 0
s = np.zeros(3)
for token, pos in text:
if self.pos:
try:
synsets = list(swn.senti_synsets(token, self.pos_map[pos]))
except KeyError:
pass
else:
synsets = list(swn.senti_synsets(token))
for syn in synsets:
p, n, o = syn.pos_score(), syn.neg_score(), syn.obj_score()
s[0] += p
s[1] += n
s[2] += o
counter += 1
s[0] = s[0]/counter
s[1] = s[1]/counter
s[2] = s[2]/counter
return s
# We average the weighted sum
def weighted_average(self, text):
s = np.zeros(3)
all_s = []
if self.pos:
all_s = [list(swn.senti_synsets(token, self.pos_map[pos])) for token, pos in text]
else:
all_s = [list(swn.senti_synsets(token)) for token, pos in text]
counter = 0
for i, (token, pos) in enumerate(text):
try:
synsets = all_s[i]
sidf = np.log(max([len(l) for l in all_s]) / len(synsets))
for syn in synsets:
p, n, o = syn.pos_score(), syn.neg_score(), syn.obj_score()
s[0] += p * sidf
s[1] += n * sidf
s[2] += o * sidf # this is neutral
counter += sidf
except ZeroDivisionError:
pass
s[0] = s[0]/counter
s[1] = s[1]/counter
s[2] = s[2]/counter
return s
def predict(self, docs):
try:
score_function = self.strategy_map[self.strategy]
except KeyError:
raise Exception('{} strategy not yet available'.format(self.strategy))
self.doc_scores = np.array([score_function(doc) for doc in docs])
# we scale data bc the "objective" (=neutral) scores are always higher than pos and neg scores. Thus, if
# we just took the max, then every document would have been considered neutral
self.S = MinMaxScaler().fit_transform(self.doc_scores)
# returns the index of the column with the highest val for each row
# Thus: 0 = positive (first column), 1 = negative (second column), 2 = neutral
pred = self.S.argmax(axis=1)
y_pred = [1 if p == 0 else -1 if p == 1 else 0 for i, p in enumerate(pred)]
return y_pred
def custom_plots(self, y_true):
fig, ax = plt.subplots(figsize=(14, 4), nrows=2, ncols=2)
ax[0,0].boxplot(self.doc_scores)
ax[0,1].scatter(self.doc_scores[:,0], self.doc_scores[:,1], alpha=0.4, c=y_true)
ax[1,0].boxplot(self.S)
ax[1,1].scatter(self.S[:,0], self.S[:,1], alpha=0.4, c=y_true)
return plt
```
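The `sidf` factor used in `weighted_sum` and `weighted_average` is an IDF-style weight: the more synsets (senses) a word has, the less it contributes. A minimal sketch of how the weight behaves on toy synset counts (hypothetical numbers, not taken from the Yelp data):
```
import numpy as np
# toy synset counts per token: ambiguous words (many senses) get a lower weight
counts = [12, 3, 1]
weights = [np.log(max(counts) / c) for c in counts]
print(weights)  # the 12-sense word gets 0.0, the single-sense word the largest weight
```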
## Pre-processing
```
yelp = pd.read_csv('data/yelp_example_1_small.tsv', sep='\t')
tokenizer = SpacyTokenizer(lemma=True, pos_filter=['NOUN', 'ADV', 'ADJ', 'VERB'])
tokenizer.tokenize(yelp.iloc[0].content)
docs, titles, scores = [], [], []
data = tqdm_notebook(list(yelp.iterrows()))
for i, row in data:
tokens = tokenizer.tokenize(row.content)
docs.append(tokens)
titles.append(row.business)
scores.append(row.score)
with open('data/yelp_example_1.json', 'w') as out:
json.dump({'docs': docs, 'titles': titles, 'scores': scores}, out)
```
## Wordnet and synsets examples
```
synsets = list(swn.senti_synsets('happy'))
for syn in synsets:
print(syn)
for syn in synsets:
print(syn.synset.definition())
synsets = list(swn.senti_synsets('play', 'v'))
for syn in synsets:
print(syn)
for syn in synsets:
print(syn.synset.definition())
```
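Each `SentiSynset` exposes a positive, a negative and an objective score, and the three sum to 1. A quick look at the first synset of "happy" (the exact values depend on the installed SentiWordNet data):
```
from nltk.corpus import sentiwordnet as swn
syn = list(swn.senti_synsets('happy'))[0]
print(syn.pos_score(), syn.neg_score(), syn.obj_score())  # these three scores sum to 1.0
```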
## Application on the Yelp reviews
```
with open('data/yelp_example_1.json', 'r') as infile:
data = json.load(infile)
docs = data['docs']
titles = data['titles']
scores = data['scores']
''' The num argument indicates the threshold star rating of the review (e.g. 3 stars).
If the review has more than num stars, it is positive (=1); if fewer, negative (=-1); 0 for neutral.
We can also get only positive and negative labels, without neutral, by setting the use_neutral argument to False.
'''
def get_true_label_from_score(num, use_neutral = True):
if use_neutral:
return [1 if score > num else -1 if score < num else 0 for i, score in enumerate(scores)]
else:
return [1 if score >= num else -1 for i, score in enumerate(scores)]
y_true = get_true_label_from_score(3)
```
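As a quick sanity check of the labelling logic on toy star ratings (not the real dataset):
```
toy_scores = [5, 3, 1]
print([1 if s > 3 else -1 if s < 3 else 0 for s in toy_scores])  # -> [1, 0, -1]
```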
### 01. Simple sum
```
wn = SentiWn(strategy='sum', use_pos=True)
y_pred = wn.predict(docs)
wn.custom_plots(y_true).show()
def print_report_plot_cf(y_true, y_pred):
report = classification_report(y_true, y_pred)
cm = confusion_matrix(y_true, y_pred)
print(report)
if 'sklearn.metrics._plot.confusion_matrix' in sys.modules:
fig, ax = plt.subplots(figsize=(8, 8))
d = ConfusionMatrixDisplay(cm, [-1, 0, 1])
d.plot(cmap=plt.cm.Blues, ax=ax, values_format='10.0f')
plt.show()
else:
print(cm)
print_report_plot_cf(y_true, y_pred)
```
### 02. Weighted sum
```
wn_w = SentiWn(strategy='weighted_sum')
y_w_pred = wn_w.predict(docs)
wn_w.custom_plots(y_true).show()
print_report_plot_cf(y_true, y_w_pred)
```
### 03. Average score
```
wn_a = SentiWn(strategy='average_score')
y_a_pred = wn_a.predict(docs)
wn_a.custom_plots(y_true).show()
print_report_plot_cf(y_true, y_a_pred)
```
### 04. Weighted average
```
wn_wa = SentiWn(strategy='weighted_average')
y_wa_pred = wn_wa.predict(docs)
wn_wa.custom_plots(y_true).show()
print_report_plot_cf(y_true, y_wa_pred)
```
# Module 3 Practice - Python Fundamentals
## power iteration of sequences
<font size="5" color="#00A0B2" face="verdana"> <B>Student will be able to</B></font>
- Iterate through Lists using **`for`** and **`in`**
- Use **`for` *`count`* `in range()`** in looping operations
- Use list methods **`.extend()`, `+, .reverse(), .sort()`**
- convert between lists and strings using **`.split()`** and **`.join()`**
- cast strings to lists **/** direct multiple print outputs to a single line: **`print("hi", end='')`**
#
<font size="6" color="#B24C00" face="verdana"> <B>Task 1</B></font>
## list iteration: `for in`
### `for item in list:`
```
# [ ] print out the "physical states of matter" (matter_states) in 4 sentences using list iteration
# each sentence should be of the format: "Solid - is state of matter #1"
matter_states = ['solid', 'liquid', 'gas', 'plasma']
number = 1
for item in matter_states:
print(f"{item.title()} - is state of matter #{number}")
number += 1
# [ ] iterate the list (birds) to see if any bird names start with "c" and remove those items from the list
# print the birds list before and after removals
birds = ["turkey", "hawk", "chicken", "dove", "crow"]
print(f"birds in list before = {birds}")
# iterate over a copy so that removing items does not skip elements of the original list
for bird in birds[:]:
    if bird.lower().startswith("c"):
        birds.remove(bird)
print(f"birds in list after = {birds}")
# the team makes 1pt, 2pt or 3pt baskets
# [ ] print the occurrence of each type of basket (1pt, 2pt, 3pt) & total points using the list baskets
baskets = [2,2,2,1,2,1,3,3,1,2,2,2,2,1,3]
ones = 0
twos = 0
threes = 0
total = 0
for item in baskets:
total += item
if item == 1:
ones += 1
elif item == 2:
twos += 1
elif item == 3:
threes += 1
print(f"1pts = {ones}")
print(f"2pts = {twos}")
print(f"3pts = {threes}")
print(f"total = {total}")
```
#
<font size="6" color="#B24C00" face="verdana"> <B>Task 2</B></font>
## iteration with `range(start)` & `range(start,stop)`
```
# [ ] using range() print "hello" 4 times
for number in range(4):
print("hello")
# [ ] find spell_list length
# [ ] use range() to iterate each half of spell_list
# [ ] label & print the first and second halves
spell_list = ["Tuesday", "Wednesday", "February", "November", "Annual", "Calendar", "Solstice"]
list_length = len(spell_list)
half_index = int(list_length/2)
print("First half of list:")
for number in range(half_index):
print(spell_list[number])
print()
print("Second half of list:")
for number in range(half_index, list_length):
print(spell_list[number])
# [ ] build a list of numbers from 20 to 29: twenties
# append each number to twenties list using range(start,stop) iteration
# [ ] print twenties
twenties = []
for number in range(20, 30):
twenties.append(number)
print(twenties)
# [ ] iterate through the numbers populated in the list twenties and add each number to a variable: total
# [ ] print total
total = 0
for number in twenties:
total += number
print(f"total = {total}")
# check your answer above using range(start,stop)
# [ ] iterate each number from 20 to 29 using range()
# [ ] add each number to a variable (total) to calculate the sum
# should match earlier task
total = 0
for number in range(20, 30):
total += number
print(f"total = {total}")
```
#
<font size="6" color="#B24C00" face="verdana"> <B>Task 3</B></font>
## iteration with `range(start:stop:skip)`
```
# [ ] create a list of odd numbers (odd_nums) from 1 to 25 using range(start,stop,skip)
# [ ] print odd_nums
# hint: odd numbers are 2 digits apart
odd_nums = []
for number in range(1, 26, 2):
odd_nums.append(number)
print(odd_nums)
# [ ] create a Descending list of odd numbers (odd_nums) from 25 to 1 using range(start,stop,skip)
# [ ] print odd_nums, output should resemble [25, 23, ...]
odd_nums = []
for number in range(25, 0, -2):
odd_nums.append(number)
print(odd_nums)
# the list, elements, contains the names of the first 20 elements in atomic number order
# [ ] print the even number elements "2 - Helium, 4 - Beryllium,.." in the list with the atomic number
elements = ['Hydrogen', 'Helium', 'Lithium', 'Beryllium', 'Boron', 'Carbon', 'Nitrogen', 'Oxygen', 'Fluorine', \
'Neon', 'Sodium', 'Magnesium', 'Aluminum', 'Silicon', 'Phosphorus', 'Sulfur', 'Chlorine', 'Argon', \
'Potassium', 'Calcium']
for number in range(0, len(elements), 2):
print(f"{number + 2} - {elements[number + 1]}")
# [ ] # the list, elements_60, contains the names of the first 60 elements in atomic number order
# [ ] print the odd number elements "1 - Hydrogen, 3 - Lithium,.." in the list with the atomic number elements_60
elements_60 = ['Hydrogen', 'Helium', 'Lithium', 'Beryllium', 'Boron', 'Carbon', 'Nitrogen', \
'Oxygen', 'Fluorine', 'Neon', 'Sodium', 'Magnesium', 'Aluminum', 'Silicon', \
'Phosphorus', 'Sulfur', 'Chlorine', 'Argon', 'Potassium', 'Calcium', 'Hydrogen', \
'Helium', 'Lithium', 'Beryllium', 'Boron', 'Carbon', 'Nitrogen', 'Oxygen', 'Fluorine', \
'Neon', 'Sodium', 'Magnesium', 'Aluminum', 'Silicon', 'Phosphorus', 'Sulfur', 'Chlorine', \
'Argon', 'Potassium', 'Calcium', 'Scandium', 'Titanium', 'Vanadium', 'Chromium', 'Manganese', \
'Iron', 'Cobalt', 'Nickel', 'Copper', 'Zinc', 'Gallium', 'Germanium', 'Arsenic', 'Selenium', \
'Bromine', 'Krypton', 'Rubidium', 'Strontium', 'Yttrium', 'Zirconium']
for number in range(0, len(elements_60), 2):
print(f"{number + 1} - {elements_60[number]}")
```
#
<font size="6" color="#B24C00" face="verdana"> <B>Task 4</B></font>
## combine lists with `+` and `.extend()`
```
# [ ] print the combined lists (numbers_1 & numbers_2) using "+" operator
numbers_1 = [20, 21, 22, 23, 24, 25, 26, 27, 28, 29]
# pythonic casting of a range into a list
numbers_2 = list(range(30,50,2))
print("numbers_1:",numbers_1)
print("numbers_2",numbers_2)
print("All numbers:", numbers_1 + numbers_2)
# [ ] print the combined element lists (first_row & second_row) using ".extend()" method
first_row = ['Hydrogen', 'Helium']
second_row = ['Lithium', 'Beryllium', 'Boron', 'Carbon', 'Nitrogen', 'Oxygen', 'Fluorine', 'Neon']
print("1st Row:", first_row)
print("2nd Row:", second_row)
first_row.extend(second_row)
print("All rows:", first_row)
```
## Project: Combine 3 element rows
Choose to use **"+"** or **".extend()"** to build output similar to
```
The 1st three rows of the Period Table of Elements contain:
['Hydrogen', 'Helium', 'Lithium', 'Beryllium', 'Boron', 'Carbon', 'Nitrogen', 'Oxygen', 'Fluorine', 'Neon', 'Sodium', 'Magnesium', 'Aluminum', 'Silicon', 'Phosphorus', 'Sulfur', 'Chlorine', 'Argon']
The row breakdown is
Row 1: Hydrogen, Helium
Row 2: Lithium, Beryllium, Boron, Carbon, Nitrogen, Oxygen, Fluorine, Neon
Row 3: Sodium, Magnesium, Aluminum, Silicon, Phosphorus, Sulfur, Chlorine, Argon
```
```
# [ ] create the program: combined 3 element rows
elem_1 = ['Hydrogen', 'Helium']
elem_2 = ['Lithium', 'Beryllium', 'Boron', 'Carbon', 'Nitrogen', 'Oxygen', 'Fluorine', 'Neon']
elem_3 = ['Sodium', 'Magnesium', 'Aluminum', 'Silicon', 'Phosphorus', 'Sulfur', 'Chlorine', 'Argon']
print("The first three rows of the period Table contain:")
print(elem_1 + elem_2 + elem_3)
# [ ] .extend() jack_jill with "next_line" string - print the result
jack_jill = ['Jack', 'and', 'Jill', 'went', 'up', 'the', 'hill']
next_line = ['To', 'fetch', 'a', 'pail', 'of', 'water']
jack_jill.extend(next_line)
print(jack_jill)
```
#
<font size="6" color="#B24C00" face="verdana"> <B>Task 5</B></font>
## .reverse() : reverse a list in place
```
# [ ] use .reverse() to print elements starting with "Calcium", "Chlorine",... in reverse order
elements = ['Hydrogen', 'Helium', 'Lithium', 'Beryllium', 'Boron', 'Carbon', 'Nitrogen', 'Oxygen', 'Fluorine', \
'Neon', 'Sodium', 'Magnesium', 'Aluminum', 'Silicon', 'Phosphorus', 'Sulfur', 'Chlorine', 'Argon', \
'Potassium', 'Calcium']
elements.reverse()
for element in elements:
print(element)
# [ ] reverse order of the list... Then print only words that are 8 characters or longer from the now reversed order
spell_list = ["Tuesday", "Wednesday", "February", "November", "Annual", "Calendar", "Solstice"]
spell_list.reverse()
for spell in spell_list:
if len(spell) >= 8:
print(spell)
```
#
<font size="6" color="#B24C00" face="verdana"> <B>Task 6</B></font>
## .sort() and sorted()
```
# [ ] sort the list element, so names are in alphabetical order and print elements
elements = ['Hydrogen', 'Helium', 'Lithium', 'Beryllium', 'Boron', 'Carbon', 'Nitrogen', 'Oxygen', 'Fluorine', \
'Neon', 'Sodium', 'Magnesium', 'Aluminum', 'Silicon', 'Phosphorus', 'Sulfur', 'Chlorine', 'Argon', \
'Potassium', 'Calcium']
elements.sort()
print(elements)
# [ ] print the list, numbers, sorted and then below print the original numbers list
numbers = [2,2,2,1,2,1,3,3,1,2,2,2,2,1,3]
# sorted() returns a new sorted list and leaves the original list unchanged
print(f"numbers sorted: {sorted(numbers)}")
print(f"original numbers: {numbers}")
```
#
<font size="6" color="#B24C00" face="verdana"> <B>Task 7</B></font>
## Converting a string to a list with `.split()`
```
# [ ] split the string, daily_fact, into a list of word strings: fact_words
# [ ] print each string in fact_words in upper case on its own line
daily_fact = "Did you know that there are 1.4 billion students in the world?"
fact_words = daily_fact.split()
for item in fact_words:
print(item.upper())
# [ ] convert the string, code_tip, into a list made from splitting on the letter "o"
code_tip = "always save while working in your notebook!"
code_list = code_tip.split("o")
for item in code_list:
print(item)
# [ ] split poem on "b" to create a list: poem_words
# [ ] print poem_words by iterating the list
poem = "The bright brain, has bran!"
poem_words = poem.split("b")
for item in poem_words:
print(item)
```
#
<font size="6" color="#B24C00" face="verdana"> <B>Task 8</B></font>
## `.join()`
### build a string from a list
```
# [ ] print a comma separated string output from the list of Halogen elements using ".join()"
halogens = ['Chlorine', 'Florine', 'Bromine', 'Iodine']
print(", ".join(halogens))
# [ ] split the sentence, code_tip, into a words list
# [ ] print the joined words in the list with no spaces in-between
# [ ] Bonus: capitalize each word in the list before .join()
code_tip ="Read code aloud or explain the code step by step to a peer"
code_list = code_tip.split()
for number in range(len(code_list)):
code_list[number] = code_list[number].title()
print("".join(code_list))
```
#
<font size="6" color="#B24C00" face="verdana"> <B>Task 9</B></font>
## `list(string)` & `print("hello",end=' ')`
- **Cast a string to a list**
- **print to the same line**
```
# [ ] cast the long_word into individual letters list
# [ ] print each letter on a line
long_word = 'decelerating'
long_list = list(long_word)
for item in long_list:
print(item)
# [ ] use end= in print to output each string in questions with a "?" and on new lines
questions = ["What's the closest planet to the Sun", "How deep do Dolphins swim", "What time is it"]
for item in questions:
print(item, end = "?\n")
# [ ] print each item in foot bones
# - capitalized, both words if two word name
# - separated by a comma and space
# - and keeping on a single print line
foot_bones = ["calcaneus", "talus", "cuboid", "navicular", "lateral cuneiform",
"intermediate cuneiform", "medial cuneiform"]
for item in foot_bones:
print(item.title(), end = ", ")
```
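As a small follow-up that ties this back to `.join()` from the earlier task, here is a variant that avoids the trailing ", " after the last bone name (same data, different approach):
```
# join() places the separator only between items, so there is no trailing ", "
print(", ".join(bone.title() for bone in foot_bones))
```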
```
# -*- coding: utf-8 -*-
"""
Created on Fri Dec 4 19:34:27 2020
@author: Marc
"""
import re
import pandas as pd
#open input
with open(r"./data/day_4_data.txt","r") as f:
passports = f.readlines()
#Part 1
#dump all the lines into a single string
passports2 = "".join(passports)
#make list with each passport as an item
passports3 = passports2.split("\n\n")
#create regex patterns to check for each item
byrRegex = re.compile(r"byr")
iyrRegex = re.compile(r"iyr")
eyrRegex = re.compile(r"eyr")
hgtRegex = re.compile(r"hgt")
hclRegex = re.compile(r"hcl")
pidRegex = re.compile(r"pid")
eclRegex = re.compile(r"ecl")
#create empty lists where we will check whether the pattern exists
byrs = []
iyrs = []
eyrs = []
hgts = []
hcls = []
pids = []
ecls = []
#loop through passports and check whether patterns exist
for passport in passports3:
if byrRegex.search(passport) == None:
byr = "no"
else:
byr = "yes"
byrs.append(byr)
if iyrRegex.search(passport) == None:
iyr = "no"
else:
iyr = "yes"
iyrs.append(iyr)
if eyrRegex.search(passport) == None:
eyr = "no"
else:
eyr = "yes"
eyrs.append(eyr)
if hgtRegex.search(passport) == None:
hgt = "no"
else:
hgt = "yes"
hgts.append(hgt)
if hclRegex.search(passport) == None:
hcl = "no"
else:
hcl = "yes"
hcls.append(hcl)
if pidRegex.search(passport) == None:
pid = "no"
else:
pid = "yes"
pids.append(pid)
if eclRegex.search(passport) == None:
ecl = "no"
else:
ecl = "yes"
ecls.append(ecl)
#group passports with results in a df and remove invalid ones
results = {"passport": passports3, "byr": byrs, "iyr": iyrs, "eyr": eyrs, "hgt": hgts, "hcl": hcls, "pid": pids, "ecl": ecls}
results = pd.DataFrame.from_dict(results)
resultsyes = results[(results["byr"] == "yes") & (results["iyr"]== "yes") & (results["eyr"] == "yes") & (results["hgt"] == "yes") & (results["hcl"]== "yes") & (results["pid"] == "yes") & (results["ecl"] == "yes")]
#Part two
#separate valid passports
validpassports = resultsyes["passport"]
#create new regex objects
byrRegex = re.compile(r"(byr:)(\d\d\d\d)(\b)")
iyrRegex = re.compile(r"(iyr:)(\d\d\d\d)(\b)")
eyrRegex = re.compile(r"(eyr:)(\d\d\d\d)(\b)")
hgtRegex = re.compile(r"(hgt:)(\d){2,3}(\w\w)")
hclRegex = re.compile(r"(hcl:)(#)([0123456789abcdef]){6}(\b)")
pidRegex = re.compile(r"(pid:)([0123456789]){9}(\b)")
eclRegex = re.compile(r"(ecl:)(\w\w\w)(\b)")
byrs = []
iyrs = []
eyrs = []
hgts = []
hcls = []
pids = []
ecls = []
for passport in validpassports:
if byrRegex.search(passport) == None:
byr = "no"
else:
byr = byrRegex.search(passport).group()
byr = int(byr[4:8])
byrs.append(byr)
if iyrRegex.search(passport) == None:
iyr = "no"
else:
iyr = iyrRegex.search(passport).group()
iyr = int(iyr[4:8])
iyrs.append(iyr)
if eyrRegex.search(passport) == None:
eyr = "no"
else:
eyr = eyrRegex.search(passport).group()
eyr = int(eyr[4:8])
eyrs.append(eyr)
if hgtRegex.search(passport) == None:
hgt = "no"
else:
hgt = hgtRegex.search(passport).group()
if hgt[-1] == "\n":
hgt = hgt[4:].strip("\n")
else:
hgt = hgt[4:]
hgts.append(hgt)
if hclRegex.search(passport) == None:
hcl = "no"
else:
hcl = hclRegex.search(passport).group()
hcl = hcl[4:11]
hcls.append(hcl)
if pidRegex.search(passport) == None:
pid = "no"
else:
pid = pidRegex.search(passport).group()
pid = pid[4:13]
pids.append(pid)
if eclRegex.search(passport) == None:
ecl = "no"
else:
ecl = eclRegex.search(passport).group()
ecl = ecl[4:7]
ecls.append(ecl)
#group passports with results in a df and remove invalid ones
results2 = {"passport": validpassports, "byr": byrs, "iyr": iyrs, "eyr": eyrs, "hgt": hgts, "hcl": hcls, "pid": pids, "ecl": ecls}
results2 = pd.DataFrame.from_dict(results2)
#filter for byr
results2 = results2[(results2["byr"] >= 1920) & (results2["byr"] <= 2002)]
#filter for iyr
results2 = results2[(results2["iyr"] >= 2010) & (results2["iyr"] <= 2020)]
#filter for eyr
results2 = results2[(results2["eyr"] >= 2020) & (results2["eyr"] <= 2030)]
#filter for hcl
results2 = results2[results2["hcl"] !="no"]
#filter for pid
results2 = results2[results2["pid"] !="no"]
#filter for ecl
results2 = results2[results2["ecl"].isin(["amb", "blu", "brn", "gry", "grn", "hzl", "oth"])]
#filter for hgt
results2 = results2[results2["hgt"] !="no"]
hgtfilter = []
for hgt in results2["hgt"]:
if str(hgt)[-2:] == "in":
hgt = int(str(hgt)[:-2])
if 59 <= hgt <= 76:
hgtf = "yes"
else:
hgtf = "no"
else:
hgt = int(str(hgt)[:-2])
if 150 <= hgt <= 193:
hgtf = "yes"
else:
hgtf = "no"
hgtfilter.append(hgtf)
results2["hgtfilter"] = hgtfilter
results2 = results2[results2["hgtfilter"] !="no"]
results2
```
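A quick check of the Part 2 regexes on a toy passport string (hypothetical data, not taken from the puzzle input), just to see what `.group()` returns before the slicing steps above:
```
import re
sample = "byr:1980 iyr:2015 eyr:2025 hgt:180cm hcl:#aabb22 ecl:brn pid:012345678"
print(re.search(r"(byr:)(\d\d\d\d)(\b)", sample).group())                   # byr:1980
print(re.search(r"(hgt:)(\d){2,3}(\w\w)", sample).group())                  # hgt:180cm
print(re.search(r"(hcl:)(#)([0123456789abcdef]){6}(\b)", sample).group())   # hcl:#aabb22
```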
```
%matplotlib inline
import os
import sys
print sys.version
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from xgboost import XGBRegressor
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.decomposition import KernelPCA
from sklearn.preprocessing import StandardScaler
from sklearn.dummy import DummyRegressor
from sklearn.metrics.scorer import make_scorer
from sklearn.model_selection import cross_val_score
from utils import rmsle, rmsle1m
from dataload import load_features
```
### Make RMSLE scorers
```
rmsle_scorer = make_scorer(rmsle, greater_is_better=False)
rmsle_scorer_1m = make_scorer(rmsle1m, greater_is_better=False)
```
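`rmsle` and `rmsle1m` come from the local `utils` module, which is not shown here. For reference, a standard RMSLE has the form sketched below; this is an assumption about what `utils.rmsle` computes, not its actual implementation (`rmsle1m` presumably accounts for the E0 target already being log1p-transformed):
```
import numpy as np

def rmsle_sketch(y_true, y_pred):
    # root mean squared logarithmic error; assumes non-negative targets
    return np.sqrt(np.mean((np.log1p(y_pred) - np.log1p(y_true)) ** 2))
```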
### Dictionary of RMSLE results
```
rmsle_results_bg = {}
rmsle_results_e0 = {}
```
### Load basic data
```
DATA_DIR = './data'
train, test = load_features(DATA_DIR, with_ext=False, with_geo=False)
```
### Build basic train and test data sets
```
X_train = train.drop(['id', 'natoms', 'spacegroup',
'alpha', 'beta', 'gamma',
'ga', 'cellvol',
'bandgap', 'E0'], axis=1)
X_test = test.drop(['id', 'natoms', 'spacegroup',
'alpha', 'beta', 'gamma',
'ga', 'cellvol'], axis=1)
# Use log1p of the formation energy E0 to correct for skew (the bandgap target is left untransformed)
y_bg_train = train['bandgap']
y_e0_train = np.log1p(train['E0'])
# One-hot encode spacegroup_natoms
X_train = pd.concat([X_train.drop('spacegroup_natoms', axis=1),
pd.get_dummies(X_train['spacegroup_natoms'])], axis=1)
X_test = pd.concat([X_test.drop('spacegroup_natoms', axis=1),
pd.get_dummies(X_test['spacegroup_natoms'])], axis=1)
```
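One caveat when one-hot encoding train and test separately: `pd.get_dummies` can produce different column sets if some `spacegroup_natoms` value appears in only one of the two frames. A defensive alignment step (optional, and only needed if such a mismatch can actually occur in this data):
```
# Align the dummy columns so X_test has the same layout as X_train; missing columns are filled with 0
X_train, X_test = X_train.align(X_test, join='left', axis=1, fill_value=0)
```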
### Test different models
#### Output predictions (Mean model)
```
est_bg = DummyRegressor(strategy='mean')
est_e0 = DummyRegressor(strategy='mean')
est_bg.fit(X_train, y_bg_train)
est_e0.fit(X_train, y_e0_train)
err_bg = -cross_val_score(est_bg, X_train, y_bg_train, scoring=rmsle_scorer, cv=5).mean()
err_e0 = -cross_val_score(est_e0, X_train, y_e0_train, scoring=rmsle_scorer_1m, cv=5).mean()
rmsle_results_bg['mean_model'] = err_bg
rmsle_results_e0['mean_model'] = err_e0
# 5-fold CV Scores
print "RMSLE BG: {}, RMSLE E0: {}, RMSLE AVG: {}".format(err_bg, err_e0,
0.5 * (err_bg + err_e0))
```
#### Output predictions (Basic model)
```
pa_bg = {'learning_rate': 0.05, # Step size shrinkage used in update (Learning rate)
'reg_alpha': 0.10, # L1 regularization term on weights
'n_estimators': 120,
'max_depth': 5,
'subsample': 1,
'colsample_bytree': 0.90,
'colsample_bylevel': 0.90,
'silent': True,
'random_state': 42,
'objective': 'reg:linear'}
pa_e0 = {'learning_rate': 0.05, # Step size shrinkage used in update (Learning rate)
'reg_alpha': 0.10, # L1 regularization term on weights
'n_estimators': 108,
'max_depth': 5,
'subsample': 1,
'colsample_bytree': 0.90,
'colsample_bylevel': 0.90,
'silent': True,
'random_state': 42,
'objective': 'reg:linear'}
# Estimator pipeline
est_bg = Pipeline([
('xgbreg', XGBRegressor(**pa_bg)),
])
est_e0 = Pipeline([
('xgbreg', XGBRegressor(**pa_e0)),
])
est_bg.fit(X_train, y_bg_train)
est_e0.fit(X_train, y_e0_train)
err_bg = -cross_val_score(est_bg, X_train, y_bg_train, scoring=rmsle_scorer, cv=5).mean()
err_e0 = -cross_val_score(est_e0, X_train, y_e0_train, scoring=rmsle_scorer_1m, cv=5).mean()
rmsle_results_bg['basic_model'] = err_bg
rmsle_results_e0['basic_model'] = err_e0
# 5-fold CV Scores
print "RMSLE BG: {}, RMSLE E0: {}, RMSLE AVG: {}".format(err_bg, err_e0,
0.5 * (err_bg + err_e0))
```
### Load extra data
```
train, test = load_features(DATA_DIR, with_ext=True, with_geo=False)
```
### Build extra train and test data sets
```
X_train = train.drop(['id', 'natoms', 'spacegroup',
'alpha', 'beta', 'gamma',
'ga', 'o_cnt', 'cellvol', 'o_fraction', 'avg_mass',
'bandgap', 'E0'], axis=1)
X_test = test.drop(['id', 'natoms', 'spacegroup',
'alpha', 'beta', 'gamma',
'ga', 'o_cnt', 'cellvol', 'o_fraction', 'avg_mass'], axis=1)
# Use log1p of the formation energy E0 to correct for skew (the bandgap target is left untransformed)
y_bg_train = train['bandgap']
y_e0_train = np.log1p(train['E0'])
# One-hot encode spacegroup_natoms
X_train = pd.concat([X_train.drop('spacegroup_natoms', axis=1),
pd.get_dummies(X_train['spacegroup_natoms'])], axis=1)
X_test = pd.concat([X_test.drop('spacegroup_natoms', axis=1),
pd.get_dummies(X_test['spacegroup_natoms'])], axis=1)
```
### Test different models
#### Output predictions (With ext model)
```
pa_bg = {'learning_rate': 0.05, # Step size shrinkage used in update (Learning rate)
'reg_alpha': 0.01, # L1 regularization term on weights
'n_estimators': 290,
'max_depth': 3,
'subsample': 1,
'colsample_bytree': 0.90,
'colsample_bylevel': 0.90,
'silent': True,
'random_state': 42,
'objective': 'reg:linear'}
pa_e0 = {'learning_rate': 0.05, # Step size shrinkage used in update (Learning rate)
'reg_alpha': 0.10, # L1 regularization term on weights
'n_estimators': 150,
'max_depth': 4,
'subsample': 1,
'colsample_bytree': 0.90,
'colsample_bylevel': 0.90,
'silent': True,
'random_state': 42,
'objective': 'reg:linear'}
# Estimator pipeline
est_bg = Pipeline([
('scaler', StandardScaler()),
('xgbreg', XGBRegressor(**pa_bg)),
])
est_e0 = Pipeline([
('scaler', StandardScaler()),
('xgbreg', XGBRegressor(**pa_e0)),
])
est_bg.fit(X_train, y_bg_train)
est_e0.fit(X_train, y_e0_train)
err_bg = -cross_val_score(est_bg, X_train, y_bg_train, scoring=rmsle_scorer, cv=5).mean()
err_e0 = -cross_val_score(est_e0, X_train, y_e0_train, scoring=rmsle_scorer_1m, cv=5).mean()
rmsle_results_bg['with_ext_model'] = err_bg
rmsle_results_e0['with_ext_model'] = err_e0
# 5-fold CV Scores
print "RMSLE BG: {}, RMSLE E0: {}, RMSLE AVG: {}".format(err_bg, err_e0,
0.5 * (err_bg + err_e0))
```
### Load extra data and geo data
```
train, test = load_features(DATA_DIR, with_ext=True, with_geo=True)
```
### Build extra and geo train and test data sets
```
X_train = train.drop(['id', 'natoms', 'spacegroup',
'alpha', 'beta', 'gamma',
'ga', 'o_cnt', 'cellvol', 'o_fraction', 'avg_mass',
'bandgap', 'E0'], axis=1)
X_test = test.drop(['id', 'natoms', 'spacegroup',
'alpha', 'beta', 'gamma',
'ga', 'o_cnt', 'cellvol', 'o_fraction', 'avg_mass'], axis=1)
# Use log1p of the formation energy E0 to correct for skew (the bandgap target is left untransformed)
y_bg_train = train['bandgap']
y_e0_train = np.log1p(train['E0'])
# One-hot encode spacegroup_natoms
X_train = pd.concat([X_train.drop('spacegroup_natoms', axis=1),
pd.get_dummies(X_train['spacegroup_natoms'])], axis=1)
X_test = pd.concat([X_test.drop('spacegroup_natoms', axis=1),
pd.get_dummies(X_test['spacegroup_natoms'])], axis=1)
```
### Test different models
#### Output predictions (With ext and geo model, no PCA)
```
pa_bg = {'learning_rate': 0.02, # Step size shrinkage used in update (Learning rate)
'reg_alpha': 0.05, # L1 regularization term on weights
'n_estimators': 600,
'max_depth': 3,
'subsample': 1,
'colsample_bytree': 0.90,
'colsample_bylevel': 0.90,
'silent': True,
'random_state': 42,
'objective': 'reg:linear'}
pa_e0 = {'learning_rate': 0.05, # Step size shrinkage used in update (Learning rate)
'reg_alpha': 0.10, # L1 regularization term on weights
'n_estimators': 500,
'max_depth': 4,
'subsample': 1,
'colsample_bytree': 0.90,
'colsample_bylevel': 0.90,
'silent': True,
'random_state': 42,
'objective': 'reg:linear'}
# Estimator pipeline
est_bg = Pipeline([
('scaler', StandardScaler()),
('xgbreg', XGBRegressor(**pa_bg)),
])
est_e0 = Pipeline([
('scaler', StandardScaler()),
('xgbreg', XGBRegressor(**pa_e0)),
])
est_bg.fit(X_train, y_bg_train)
est_e0.fit(X_train, y_e0_train)
err_bg = -cross_val_score(est_bg, X_train, y_bg_train, scoring=rmsle_scorer, cv=5).mean()
err_e0 = -cross_val_score(est_e0, X_train, y_e0_train, scoring=rmsle_scorer_1m, cv=5).mean()
rmsle_results_bg['with_ext_geo_model'] = err_bg
rmsle_results_e0['with_ext_geo_model'] = err_e0
# 5-fold CV Scores
print "RMSLE BG: {}, RMSLE E0: {}, RMSLE AVG: {}".format(err_bg, err_e0,
0.5 * (err_bg + err_e0))
```
#### Output predictions (With ext and geo model, with PCA)
```
pa_bg = {'learning_rate': 0.05, # Step size shrinkage used in update (Learning rate)
'reg_alpha': 0.05, # L1 regularization term on weights
'n_estimators': 700,
'max_depth': 3,
'subsample': 1,
'colsample_bytree': 0.90,
'colsample_bylevel': 0.90,
'silent': True,
'random_state': 42,
'objective': 'reg:linear'}
pa_e0 = {'learning_rate': 0.05, # Step size shrinkage used in update (Learning rate)
'reg_alpha': 0.10, # L1 regularization term on weights
'n_estimators': 800,
'max_depth': 4,
'subsample': 1,
'colsample_bytree': 0.90,
'colsample_bylevel': 0.90,
'silent': True,
'random_state': 42,
'objective': 'reg:linear'}
# Estimator pipeline
est_bg = Pipeline([
('scaler', StandardScaler()),
('pca', PCA(n_components=32, random_state=42)),
('xgbreg', XGBRegressor(**pa_bg)),
])
est_e0 = Pipeline([
('scaler', StandardScaler()),
('pca', PCA(n_components=58, random_state=42)),
('xgbreg', XGBRegressor(**pa_e0)),
])
est_bg.fit(X_train, y_bg_train)
est_e0.fit(X_train, y_e0_train)
err_bg = -cross_val_score(est_bg, X_train, y_bg_train, scoring=rmsle_scorer, cv=5).mean()
err_e0 = -cross_val_score(est_e0, X_train, y_e0_train, scoring=rmsle_scorer_1m, cv=5).mean()
rmsle_results_bg['with_ext_geo_pca_model'] = err_bg
rmsle_results_e0['with_ext_geo_pca_model'] = err_e0
# 5-fold CV Scores
print "RMSLE BG: {}, RMSLE E0: {}, RMSLE AVG: {}".format(err_bg, err_e0,
0.5 * (err_bg + err_e0))
```
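The `n_components` values (32 and 58) look hand-tuned. A common way to choose them is to inspect the cumulative explained variance of a PCA fitted on the scaled training features; a minimal sketch reusing `X_train` from above:
```
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

probe = PCA().fit(StandardScaler().fit_transform(X_train))
cumvar = probe.explained_variance_ratio_.cumsum()
print((cumvar < 0.99).sum() + 1)  # number of components needed to reach ~99% explained variance
```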
#### Output predictions (With ext and geo model, with Kernel PCA)
```
pa_bg = {'learning_rate': 0.05, # Step size shrinkage used in update (Learning rate)
'reg_alpha': 0.05, # L1 regularization term on weights
'n_estimators': 700,
'max_depth': 3,
'subsample': 1,
'colsample_bytree': 0.90,
'colsample_bylevel': 0.90,
'silent': True,
'random_state': 42,
'objective': 'reg:linear'}
pa_e0 = {'learning_rate': 0.05, # Step size shrinkage used in update (Learning rate)
'reg_alpha': 0.10, # L1 regularization term on weights
'n_estimators': 800,
'max_depth': 4,
'subsample': 1,
'colsample_bytree': 0.90,
'colsample_bylevel': 0.90,
'silent': True,
'random_state': 42,
'objective': 'reg:linear'}
# Estimator pipeline
est_bg = Pipeline([
('scaler', StandardScaler()),
('kpca', KernelPCA(kernel='poly', fit_inverse_transform=True, n_components=32, random_state=42)),
('xgbreg', XGBRegressor(**pa_bg)),
])
est_e0 = Pipeline([
('scaler', StandardScaler()),
('kpca', KernelPCA(kernel='poly', fit_inverse_transform=True, n_components=58, random_state=42)),
('xgbreg', XGBRegressor(**pa_e0)),
])
est_bg.fit(X_train, y_bg_train)
est_e0.fit(X_train, y_e0_train)
err_bg = -cross_val_score(est_bg, X_train, y_bg_train, scoring=rmsle_scorer, cv=5).mean()
err_e0 = -cross_val_score(est_e0, X_train, y_e0_train, scoring=rmsle_scorer_1m, cv=5).mean()
# 5-fold CV Scores
print "RMSLE BG: {}, RMSLE E0: {}, RMSLE AVG: {}".format(err_bg, err_e0,
0.5 * (err_bg + err_e0))
```
### Plotting results
```
models = ['with_ext_geo_pca_model', 'with_ext_geo_model', 'with_ext_model', 'basic_model', 'mean_model']
rmsle_bg = [rmsle_results_bg[i] for i in models]
rmsle_e0 = [rmsle_results_e0[i] for i in models]
df = pd.DataFrame(map(list, zip(*[models, rmsle_bg, rmsle_e0])), columns=['model', 'bandgap', 'E0'])
ax = df.plot.barh(x='model',
figsize=(8, 7),
xlim=[0.017, 0.103],
legend=False)
plt.xlabel('RMSLE', fontsize=14)
plt.ylabel('Model', fontsize=14)
for i in range(len(df)):
ax.text(df.E0.iloc[i] - 0.01,
i + 0.075, str(round(df.E0.iloc[i], 5)),
color='white', fontweight='bold')
if df.bandgap.iloc[i] - 0.01 < 0.105:
ax.text(df.bandgap.iloc[i] - 0.01,
i-0.175, str(round(df.bandgap.iloc[i], 5)),
color='white', fontweight='bold')
else:
ax.text(0.105,
i-0.175, str(round(df.bandgap.iloc[i], 5)),
color='black', fontweight='bold')
```
# Convert predictions to the format used by the ERASER evaluation script.
```
import numpy as np
import random, os, collections, json, codecs
from sklearn.metrics import classification_report
from itertools import groupby
import torch
from torch.nn import functional as F
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import shutil
import altair as alt
import models as m
import analyze_util
multirc_single_u = [m.MULTIRC_TEST_SINGLE_U_S1, m.MULTIRC_TEST_SINGLE_U_S2, m.MULTIRC_TEST_SINGLE_U_S3]
multirc_single_s = [m.MULTIRC_TEST_SINGLE_S_S1, m.MULTIRC_TEST_SINGLE_S_S2, m.MULTIRC_TEST_SINGLE_S_S3]
multirc_multi_u = [m.MULTIRC_TEST_MULTI_U_S1, m.MULTIRC_TEST_MULTI_U_S2, m.MULTIRC_TEST_MULTI_U_S3]
multirc_multi_s = [m.MULTIRC_TEST_MULTI_S_S1, m.MULTIRC_TEST_MULTI_S_S2, m.MULTIRC_TEST_MULTI_S_S3]
fever_single_u = [m.FEVER_TEST_SINGLE_U_S1, m.FEVER_TEST_SINGLE_U_S2, m.FEVER_TEST_SINGLE_U_S3]
fever_single_s = [m.FEVER_TEST_SINGLE_S_S1, m.FEVER_TEST_SINGLE_S_S2, m.FEVER_TEST_SINGLE_S_S3]
fever_multi_u = [m.FEVER_TEST_MULTI_U_S1, m.FEVER_TEST_MULTI_U_S2, m.FEVER_TEST_MULTI_U_S3]
fever_multi_s = [m.FEVER_TEST_MULTI_S_S1, m.FEVER_TEST_MULTI_S_S2, m.FEVER_TEST_MULTI_S_S3]
movies_u = [m.MOVIES_TEST_SINGLE_U_S1, m.MOVIES_TEST_SINGLE_U_S2, m.MOVIES_TEST_SINGLE_U_S3]
movies_s = [m.MOVIES_TEST_SINGLE_S_S1, m.MOVIES_TEST_SINGLE_S_S2, m.MOVIES_TEST_SINGLE_S_S3]
def get_doc(sample, inst_to_doc=None):
if inst_to_doc is not None:
orig = inst_to_doc[sample['annotation_id']]
windows_readable = orig
if windows_readable[-1] == '.':
windows_readable = windows_readable[:-1] + '___.txt'
return orig, windows_readable
if ':' in sample['annotation_id']:
return sample['annotation_id'].split(':')[0], sample['annotation_id'].split(':')[0]
else:
return sample['annotation_id'], sample['annotation_id']
def get_selected_sents(sample):
selected_idx = sample['rationale_probabilities'].argmax()
if sample['selected_sentences_type'] == 'ordered':
if selected_idx == len(sample['rationale_probabilities']) - 1:
return []
else:
return [selected_idx]
else:
return sorted([int(i) for i in sample['selected_sentences'][selected_idx]])
def convert(pred_in, out, docdir, labels=['False', 'True'], inst_to_doc=None):
docs = dict()
with codecs.open(out, 'w', encoding='utf-8') as f_out:
for sample in pred_in:
assert sample['predicted_target'] == labels[sample['logits'].argmax()]
eraser_sample = {
'annotation_id': sample['annotation_id'],
'classification': sample['predicted_target'],
'classification_scores': dict([(lbl, float(sample['logits'][i])) for i, lbl in enumerate(labels)])
}
docname, win_docname = get_doc(sample, inst_to_doc)
if docname not in docs:
with codecs.open(os.path.join(docdir, win_docname), encoding='utf-8') as f_in:
lines = [line.strip() for line in f_in.readlines()]
docs[docname] = [line.split(' ') for line in lines]
doc = docs[docname]
selected_sents = get_selected_sents(sample)
rationales = []
sentence_lengths = [len(s) for s in doc]
for sent in selected_sents:
start_token = np.sum(sentence_lengths[:sent])
end_token = start_token + sentence_lengths[sent]
rationales.append({
'start_token': int(start_token),
'end_token': int(end_token)
})
if len(rationales) == 0:
rationales = [{
'start_token': -1,
'end_token': 0
}]
eraser_sample['rationales'] = [{
'docid': docname,
'hard_rationale_predictions': rationales
}]
f_out.write(json.dumps(eraser_sample) + '\n')
def convert_multiple(src_dir, preds, dest_dir, doc_dir, labels, inst_to_doc=None):
for name in preds:
data = analyze_util.load_jsonl_sentselecting(os.path.join(src_dir, name))
dest = os.path.join(dest_dir, name)
convert(data, dest, doc_dir, labels, inst_to_doc)
def create_sample_to_doc(src):
with codecs.open(src, encoding='utf-8') as f_in:
data = [json.loads(line.strip()) for line in f_in.readlines()]
sample_to_doc = dict()
for sample in data:
sample_to_doc[sample['annotation_id']] = sample['evidences'][0][0]['docid']
return sample_to_doc
TRANSFORMED_OUT = '<Add output directory>'
DOCS_MOVIES = '<Path to docs>'
DOCS_FEVER = '<Path to docs>'
DOCS_MULTIRC = '<Path to docs>'
fever_sample_to_doc = create_sample_to_doc('<Hacky for windows because of filenames: Path to test.jsonl>')
convert_multiple(m.PRED_DIR, movies_u, TRANSFORMED_OUT, DOCS_MOVIES , ['NEG', 'POS'])
convert_multiple(m.PRED_DIR, movies_s, TRANSFORMED_OUT, DOCS_MOVIES , ['NEG', 'POS'])
convert_multiple(m.PRED_DIR, fever_single_u, TRANSFORMED_OUT, DOCS_FEVER , ['SUPPORTS', 'REFUTES'], fever_sample_to_doc)
convert_multiple(m.PRED_DIR, fever_single_s, TRANSFORMED_OUT, DOCS_FEVER , ['SUPPORTS', 'REFUTES'], fever_sample_to_doc)
convert_multiple(m.PRED_DIR, fever_multi_u, TRANSFORMED_OUT, DOCS_FEVER , ['SUPPORTS', 'REFUTES'], fever_sample_to_doc)
convert_multiple(m.PRED_DIR, fever_multi_s, TRANSFORMED_OUT, DOCS_FEVER , ['SUPPORTS', 'REFUTES'], fever_sample_to_doc)
convert_multiple(m.PRED_DIR, multirc_single_u, TRANSFORMED_OUT, DOCS_MULTIRC , ['False', 'True'])
convert_multiple(m.PRED_DIR, multirc_single_s, TRANSFORMED_OUT, DOCS_MULTIRC , ['False', 'True'])
convert_multiple(m.PRED_DIR, multirc_multi_u, TRANSFORMED_OUT, DOCS_MULTIRC , ['False', 'True'])
convert_multiple(m.PRED_DIR, multirc_multi_s, TRANSFORMED_OUT, DOCS_MULTIRC , ['False', 'True'])
```
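For reference, each line that `convert` writes is a standalone JSON object with exactly the fields assembled above. A purely illustrative, pretty-printed example of one such line (the id, docid, scores, and token offsets are hypothetical):

```
{
  "annotation_id": "example_id",
  "classification": "True",
  "classification_scores": {"False": -1.3, "True": 2.7},
  "rationales": [
    {
      "docid": "example_doc",
      "hard_rationale_predictions": [
        {"start_token": 42, "end_token": 57}
      ]
    }
  ]
}
```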
# Getting Started: Exploring NeMo Fundamentals
NeMo is a toolkit for creating [Conversational AI](https://developer.nvidia.com/conversational-ai#started) applications.
NeMo toolkit makes it possible for researchers to easily compose complex neural network architectures for conversational AI using reusable components - Neural Modules. Neural Modules are conceptual blocks of neural networks that take typed inputs and produce typed outputs. Such modules typically represent data layers, encoders, decoders, language models, loss functions, or methods of combining activations.
The toolkit comes with extendable collections of pre-built modules and ready-to-use models for automatic speech recognition (ASR), natural language processing (NLP) and text synthesis (TTS). Built for speed, NeMo can utilize NVIDIA's Tensor Cores and scale out training to multiple GPUs and multiple nodes.
For more information, please visit https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/#
```
"""
You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
# If you're using Google Colab and not running locally, run this cell.
## Install dependencies
!pip install wget
!apt-get install sox libsndfile1 ffmpeg
!pip install unidecode
# ## Install NeMo
BRANCH = 'main'
!python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[all]
## Install TorchAudio
!pip install torchaudio>=0.10.0 -f https://download.pytorch.org/whl/torch_stable.html
## Grab the config we'll use in this example
!mkdir configs
```
## Foundations of NeMo
---------
NeMo models are built on the [PyTorch Lightning](https://github.com/PyTorchLightning/pytorch-lightning) Module, and are compatible with the entire PyTorch ecosystem. This means that users have the full flexibility of using the higher-level APIs provided by PyTorch Lightning (via the Trainer), or of writing their own training and evaluation loops in PyTorch directly (by simply calling the model and the individual components of the model).
For NeMo developers, a "Model" is the neural network(s) as well as all the infrastructure supporting those network(s), wrapped into a singular, cohesive unit. As such, all NeMo models are constructed to contain the following out of the box (at the bare minimum, some models support additional functionality too!) -
- Neural Network architecture - all of the modules that are required for the model.
- Dataset + Data Loaders - all of the components that prepare the data for consumption during training or evaluation.
- Preprocessing + Postprocessing - all of the components that process the datasets so they can easily be consumed by the modules.
- Optimizer + Schedulers - basic defaults that work out of the box, and allow further experimentation with ease.
- Any other supporting infrastructure - tokenizers, language model configuration, data augmentation etc.
```
import nemo
nemo.__version__
```
## NeMo Collections
NeMo is sub-divided into a few fundamental collections based on their domains - `asr`, `nlp`, `tts`. When you performed the `import nemo` statement above, none of the above collections were imported. This is because you might not need all of the collections at once, so NeMo allows partial imports of just one or more collection, as and when you require them.
-------
Let's import the above three collections -
```
import nemo.collections.asr as nemo_asr
import nemo.collections.nlp as nemo_nlp
import nemo.collections.tts as nemo_tts
```
## NeMo Models in Collections
NeMo contains several models for each of its collections, pertaining to certain common tasks involved in conversational AI. At a brief glance, let's look at all the Models that NeMo offers for the above 3 collections.
```
asr_models = [model for model in dir(nemo_asr.models) if model.endswith("Model")]
asr_models
nlp_models = [model for model in dir(nemo_nlp.models) if model.endswith("Model")]
nlp_models
tts_models = [model for model in dir(nemo_tts.models) if model.endswith("Model")]
tts_models
```
## The NeMo Model
Let's dive deeper into what a NeMo model really is. There are many ways we can create these models - we can use the constructor and pass in a config, we can instantiate the model from a pre-trained checkpoint, or simply pass a pre-trained model name and instantiate a model directly from the cloud !
---------
For now, let's try to work with an ASR model - [Citrinet](https://arxiv.org/abs/2104.01721)
```
citrinet = nemo_asr.models.EncDecCTCModelBPE.from_pretrained('stt_en_citrinet_512')
citrinet.summarize();
```
## Model Configuration using OmegaConf
--------
So we could download, instantiate and analyse the high level structure of the `Citrinet` model in a few lines! Now let's delve deeper into the configuration file that makes the model work.
First, we import [OmegaConf](https://omegaconf.readthedocs.io/en/latest/). OmegaConf is an excellent library that is used throughout NeMo in order to enable us to perform yaml configuration management more easily. Additionally, it plays well with another library, [Hydra](https://hydra.cc/docs/intro/), that is used by NeMo to perform on the fly config edits from the command line, dramatically boosting ease of use of our config files !
```
from omegaconf import OmegaConf
```
All NeMo models come packaged with their model configuration inside the `cfg` attribute. While technically it is meant to be config declaration of the model as it has been currently constructed, `cfg` is an essential tool to modify the behaviour of the Model after it has been constructed. It can be safely used to make it easier to perform many essential tasks inside Models.
To be doubly sure, we generally work on a copy of the config until we are ready to edit it inside the model
```
import copy
cfg = copy.deepcopy(citrinet.cfg)
print(OmegaConf.to_yaml(cfg))
```
## Analysing the contents of the Model config
----------
Above we see a configuration for the Citrinet model. As discussed in the beginning, NeMo models contain the entire definition of the neural network(s) as well as most of the surrounding infrastructure to support that model within themselves. Here, we see a perfect example of this behaviour.
Citrinet contains within its config -
- `preprocessor` - MelSpectrogram preprocessing layer
- `encoder` - The acoustic encoder model.
- `decoder` - The CTC decoder layer.
- `optim` (and potentially `sched`) - Optimizer configuration. Can optionally include Scheduler information.
- `spec_augment` - Spectrogram Augmentation support.
- `train_ds`, `validation_ds` and `test_ds` - Dataset and data loader construction information.
## Modifying the contents of the Model config
----------
Say we want to experiment with a different preprocessor (we want MelSpectrogram, but with different configuration than was provided in the original configuration). Or say we want to add a scheduler to this model during training.
OmegaConf makes this a very simple task for us!
```
# OmegaConf won't allow you to add new config items, so we temporarily disable this safeguard.
OmegaConf.set_struct(cfg, False)
# Let's see the old optim config
print("Old Config: ")
print(OmegaConf.to_yaml(cfg.optim))
sched = {'name': 'CosineAnnealing', 'warmup_steps': 1000, 'min_lr': 1e-6}
sched = OmegaConf.create(sched) # Convert it into a DictConfig
# Assign it to cfg.optim.sched namespace
cfg.optim.sched = sched
# Let's see the new optim config
print("New Config: ")
print(OmegaConf.to_yaml(cfg.optim))
# Here, we restore the safeguards so no more additions can be made to the config
OmegaConf.set_struct(cfg, True)
```
## Updating the model from config
----------
NeMo Models can be updated in a few ways, but we follow similar patterns within each collection so as to maintain consistency.
Here, we will show the two most common ways to modify core components of the model - using the `from_config_dict` method, and updating a few special parts of the model.
Remember, all NeMo models are PyTorch Lightning modules, which themselves are PyTorch modules, so we have a lot of flexibility here!
### Update model using `from_config_dict`
In certain config files, you will notice the following pattern :
```yaml
preprocessor:
_target_: nemo.collections.asr.modules.AudioToMelSpectrogramPreprocessor
normalize: per_feature
window_size: 0.02
sample_rate: 16000
window_stride: 0.01
window: hann
features: 64
n_fft: 512
frame_splicing: 1
dither: 1.0e-05
stft_conv: false
```
You might ask, why are we using `_target_`? Well, it is generally rare for the preprocessor, encoder, decoder and perhaps a few other details to be changed often from the command line when experimenting. In order to stabilize these settings, we enforce that our preprocessor will always be of type `AudioToMelSpectrogramPreprocessor` for this model by setting its `_target_` attribute in the config. In order to provide its parameters in the class constructor, we simply add them after `_target_`.
---------
Note, we can still change all of the parameters of this `AudioToMelSpectrogramPreprocessor` class from the command line using hydra, so we don't lose any flexibility once we decide what type of preprocessing class we want !
This also gives us a flexible way to instantiate parts of the model from just the config object !
```
new_preprocessor_config = copy.deepcopy(cfg.preprocessor)
new_preprocessor = citrinet.from_config_dict(new_preprocessor_config)
print(new_preprocessor)
```
So how do we actually update our model's internal preprocessor with something new? Well, since NeMo Models are just PyTorch modules, we can simply replace the attribute!
```
citrinet.preprocessor = new_preprocessor
citrinet.summarize();
```
--------
This might look like nothing changed - because we didn't actually modify the config for the preprocessor at all ! But as we showed above, we can easily modify the config for the preprocessor, instantiate it from config, and then just set it to the model.
-------
**NOTE**: Preprocessors don't generally have weights, so this was easy, but say we want to replace a part of the model which actually has trained parameters?
Well, the above approach will still work, just remember the fact that the new module you inserted into `citrinet.encoder` or `citrinet.decoder` actually won't have pretrained weights. You can easily rectify that by loading the state dict for the module *before* you set it to the Model though!
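For example, a minimal sketch of swapping in a rebuilt encoder while keeping its trained weights (assuming the new encoder config is architecturally compatible with the old one) could look like this:

```
# Build a fresh encoder from a (possibly edited) copy of the config
new_encoder_config = copy.deepcopy(cfg.encoder)
new_encoder = citrinet.from_config_dict(new_encoder_config)

# Load the trained weights *before* attaching the module to the model
new_encoder.load_state_dict(citrinet.encoder.state_dict())
citrinet.encoder = new_encoder
```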
### Preserving the new config
So we went ahead and updated the preprocessor of the model. We however also need to perform a crucial step - **preserving the updated config**!
Why do we want to do this? NeMo has many ways of saving and restoring its models, which we will discuss a bit later. All of them depend on having an updated config that defines the model in its entirety, so if we modify anything, we should also update the corresponding part of the config to safely save and restore models.
```
# Update the config copy
cfg.preprocessor = new_preprocessor_config
# Update the model config
citrinet.cfg = cfg
```
## Update a few special components of the Model
---------
While the above approach is good for most major components of the model, NeMo has special utilities for a few components.
They are -
- `setup_training_data`
- `setup_validation_data` and `setup_multi_validation_data`
- `setup_test_data` and `setup_multi_test_data`
- `setup_optimization`
These special utilities are meant to help you easily set up training, validation, and testing once you restore a model from a checkpoint.
------
One of the major tasks of all conversational AI models is fine-tuning onto new datasets - new languages, new corpus of text, new voices etc. It is often insufficient to have just a pre-trained model. So these setup methods are provided to enable users to adapt models *after* they have been already trained or provided to you.
You might remember having seen a few warning messages the moment you tried to instantiate the pre-trained model. Those warnings are in fact reminders to call the appropriate setup methods for the task you want to perform.
Those warnings are simply displaying the old config that was used to train that model, and are a basic template that you can easily modify. You have the ability to modify the `train_ds`, `validation_ds` and `test_ds` sub-configs in their entirety in order to evaluate, fine-tune or train from scratch the model, or any further purpose as you require it.
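As a minimal sketch (the manifest path is a placeholder, and the exact fields inside `validation_ds` depend on the model's config), re-pointing the restored model at your own data looks like this:

```
# Use the validation config stored in the model as a template
val_ds_cfg = copy.deepcopy(citrinet.cfg.validation_ds)
val_ds_cfg.manifest_filepath = '<path to your validation manifest>'

# Hand the updated config to the corresponding setup method
citrinet.setup_validation_data(val_ds_cfg)
```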
Let's discuss how to add the scheduler to the model below (which initially had just an optimizer in its config)
```
# Let's print out the current optimizer
print(OmegaConf.to_yaml(citrinet.cfg.optim))
# Now let's update the config
citrinet.setup_optimization(cfg.optim);
```
-------
We see a warning -
```
Neither `max_steps` nor `iters_per_batch` were provided to `optim.sched`, cannot compute effective `max_steps` !
Scheduler will not be instantiated !
```
We don't have a train dataset setup, nor do we have max_steps in the config. Most NeMo schedulers cannot be instantiated without computing how many train steps actually exist!
Here, we can temporarily allow the scheduler to be constructed by explicitly setting `max_steps` to 100.
```
OmegaConf.set_struct(cfg.optim.sched, False)
cfg.optim.sched.max_steps = 100
OmegaConf.set_struct(cfg.optim.sched, True)
# Now let's update the config and try again
citrinet.setup_optimization(cfg.optim);
```
You might wonder why we didn't explicitly set `citrinet.cfg.optim = cfg.optim`.
This is because the `setup_optimization()` method does it for you! You can still update the config manually.
### Optimizer & Scheduler Config
Optimizers and schedulers are common components of models, and are essential to train the model from scratch.
They are grouped together under a unified `optim` namespace, as schedulers often operate on a given optimizer.
### Let's breakdown the general `optim` structure
```yaml
optim:
name: novograd
lr: 0.01
# optimizer arguments
betas: [0.8, 0.25]
weight_decay: 0.001
# scheduler setup
sched:
name: CosineAnnealing
# Optional arguments
max_steps: null # computed at runtime or explicitly set here
monitor: val_loss
reduce_on_plateau: false
# scheduler config override
warmup_steps: 1000
warmup_ratio: null
min_lr: 1e-9
```
Essential Optimizer components -
- `name`: String name of the optimizer. Generally a lower case of the class name.
- `lr`: Learning rate is a required argument to all optimizers.
Optional Optimizer components - after the above two arguments are provided, any additional arguments added under `optim` will be passed to the constructor of that optimizer as keyword arguments
- `betas`: List of beta values to pass to the optimizer
- `weight_decay`: Optional weight decay passed to the optimizer.
Optional Scheduler components - `sched` is an optional setup of the scheduler for the given optimizer.
If `sched` is provided, only one essential argument needs to be provided :
- `name`: The name of the scheduler. Generally, it is the full class name.
Optional Scheduler components -
- `max_steps`: Max steps as an override from the user. If one provides `trainer.max_steps` inside the trainer configuration, that value is used instead. If neither value is set, the scheduler will attempt to compute the `effective max_steps` using the size of the train data loader. If that too fails, then the scheduler will not be created at all.
- `monitor`: Used if you are using an adaptive scheduler such as ReduceLROnPlateau. Otherwise ignored. Defaults to `loss` - indicating train loss as monitor.
- `reduce_on_plateau`: Required to be set to true if using an adaptive scheduler.
Any additional arguments under `sched` will be supplied as keyword arguments to the constructor of the scheduler.
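Putting this structure together in code, a config equivalent to the YAML above can be built programmatically and handed to `setup_optimization` (a sketch using the values shown above; `max_steps` is set explicitly so the scheduler can be constructed):

```
optim_cfg = OmegaConf.create({
    'name': 'novograd',
    'lr': 0.01,
    # optimizer arguments
    'betas': [0.8, 0.25],
    'weight_decay': 0.001,
    # scheduler setup
    'sched': {
        'name': 'CosineAnnealing',
        'max_steps': 100,
        'warmup_steps': 1000,
        'min_lr': 1e-9,
    },
})

citrinet.setup_optimization(optim_cfg)
```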
## Difference between the data loader setup methods
----------
You might notice, we have multiple setup methods for validation and test data sets. We also don't have an equivalent `setup_multi_train_data`.
In general, the `multi` methods refer to multiple data sets / data loaders.
### Where's `setup_multi_train_data`?
With the above in mind, let's tackle why we don't have `setup_multi_train_data`.
NeMo is concerned with multiple domains - `asr`, `nlp` and `tts`. The way datasets are setup and used in these domains is dramatically different. It is often unclear what it means to have multiple train datasets - do we concatenate them? Do we randomly sample (with same or different probability) from each of them?
Therefore we leave such support for multiple datasets up to the model itself. For example, in ASR, you can concatenate multiple train manifest files by using commas when providing the `manifest_filepath` value!
### What are multi methods?
In many cases, especially true for ASR and NLP, we may have multiple validation and test datasets. The most common example for this in ASR is `Librispeech`, which has `dev_clean`, `dev_other`, `test_clean`, `test_other`.
NeMo standardizes how to handle multiple data loaders for validation and testing, so that all of our collections have a similar look and feel, as well as ease development of our models. During evaluation, these datasets are treated independently and prepended with resolved names so that logs are separate!
The `multi` methods are therefore generalizations of the single validation and single test data setup methods, with some additional functionality. If you provide multiple datasets, you still have to write code for just one dataset and NeMo will automatically attach the appropriate names to your logs so you can differentiate between them!
Furthermore, they also automatically preserve the config the user passes to them when updating the validation or test data loaders.
**In general, it is preferred to call the `setup_multi_validation_data` and `setup_multi_test_data` methods, even if you are only using single datasets, simply for the automated management they provide.**
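As a hedged sketch (the manifest paths are placeholders, and the exact `validation_ds` fields depend on the model), passing a list of manifests is usually all that is needed:

```
multi_val_cfg = copy.deepcopy(citrinet.cfg.validation_ds)
multi_val_cfg.manifest_filepath = ['<path to dev clean manifest>',
                                   '<path to dev other manifest>']

# One data loader is created per manifest, and their metrics are logged separately
citrinet.setup_multi_validation_data(multi_val_cfg)
```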
## Creating Model from constructor vs restoring a model
---------
You might notice, we discuss all of the above setup methods in the context of model after it is restored. However, NeMo scripts do not call them inside any of the example train scripts themselves.
This is because these methods are automatically called by the constructor when the Model is created for the first time, but these methods are skipped during restoration (either from a PyTorch Lightning checkpoint using `load_from_checkpoint`, or via `restore_from` method inside NeMo Models).
This is done as most datasets are stored on a user's local directory, and the path to these datasets is set in the config (either set by default, or set by Hydra overrides). On the other hand, the models are meant to be portable. On another user's system, the data might not be placed at exactly the same location, or even on the same drive as specified in the model's config!
Therefore we allow the constructor some brevity and automate such dataset setup, whereas restoration warns that data loaders were not set up and provides the user with ways to set up their own datasets.
------
Why are optimizers not restored automatically? Well, optimizers themselves don't face an issue, but as we saw before, schedulers depend on the number of train steps in order to calculate their schedule.
However, if you don't wish to modify the optimizer and scheduler, and prefer to leave them to their default values, that's perfectly alright. The `setup_optimization()` method is automatically called by PyTorch Lightning for you when you begin training your model!
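For completeness, here is a minimal sketch of what "beginning training" looks like with the PyTorch Lightning Trainer (assuming the train and validation data loaders have already been set up as described above):

```
import pytorch_lightning as pl

trainer = pl.Trainer(max_epochs=1)

# setup_optimization() is invoked automatically when fit() starts
trainer.fit(citrinet)
```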
## Saving and restoring models
----------
NeMo provides a few ways to save and restore models. If you utilize the Experiment Manager that is part of all NeMo train scripts, PyTorch Lightning will automatically save checkpoints for you in the experiment directory.
We can also use packaged files using the specialized `save_to` and `restore_from` methods.
### Saving and Restoring from PTL Checkpoints
----------
The PyTorch Lightning Trainer object will periodically save checkpoints when the experiment manager is being used during training.
PyTorch Lightning checkpoints can then be loaded and evaluated / fine-tuned just as always using the class method `load_from_checkpoint`.
For example, restore a Citrinet model from a checkpoint -
```python
citrinet = nemo_asr.models.EncDecCTCModelBPE.load_from_checkpoint(<path to checkpoint>)
```
### Saving and Restoring from .nemo files
----------
There are a few models which might require external dependencies to be packaged with them in order to restore them properly.
One such example is an ASR model with an external BPE tokenizer. It is preferred if the model includes all of the components required to restore it, but a binary file for a tokenizer cannot be serialized into a PyTorch Lightning checkpoint.
In such cases, we can use the `save_to` and `restore_from` method to package the entire model + its components (here, the tokenizer file(s)) into a tarfile. This can then be easily imported by the user and used to restore the model.
```
# Save the model
citrinet.save_to('citrinet_512.nemo')
!ls -d -- *.nemo
# Restore the model
temp_cn = nemo_asr.models.EncDecCTCModelBPE.restore_from('citrinet_512.nemo')
temp_cn.summarize();
# Note that the preprocessor + optimizer config have been preserved after the changes we made !
print(OmegaConf.to_yaml(temp_cn.cfg))
```
Note, that .nemo file is a simple .tar.gz with checkpoint, configuration and, potentially, other artifacts such as tokenizer configs being used by the model
```
!cp citrinet_512.nemo citrinet_512.tar.gz
!tar -xvf citrinet_512.tar.gz
```
### Extracting PyTorch checkpoints from NeMo tarfiles (Model level)
-----------
While the .nemo tarfile is an excellent way to have a portable model, sometimes it is necessary for researchers to have access to the basic PyTorch save format. NeMo aims to be entirely compatible with PyTorch, and therefore offers a simple method to extract just the PyTorch checkpoint from the .nemo tarfile.
```
import torch
state_dict = temp_cn.extract_state_dict_from('citrinet_512.nemo', save_dir='./pt_ckpt/')
!ls ./pt_ckpt/
```
As we can see below, there is now a single basic PyTorch checkpoint available inside the `pt_ckpt` directory, which we can use to load the weights of the entire model as below
```
temp_cn.load_state_dict(torch.load('./pt_ckpt/model_weights.ckpt'))
```
### Extracting PyTorch checkpoints from NeMo tarfiles (Module level)
----------
While the above method is exceptional when extracting the checkpoint of the entire model, sometimes there may be a necessity to load and save the individual modules that comprise the Model.
The same extraction method offers a flag to extract the individual model level checkpoints into their individual files, so that users have access to per-module level checkpoints.
```
state_dict = temp_cn.extract_state_dict_from('citrinet_512.nemo', save_dir='./pt_module_ckpt/', split_by_module=True)
!ls ./pt_module_ckpt/
```
Now, we can load and assign the weights of the individual modules of the above Citrinet Model !
```
temp_cn.preprocessor.load_state_dict(torch.load('./pt_module_ckpt/preprocessor.ckpt'))
temp_cn.encoder.load_state_dict(torch.load('./pt_module_ckpt/encoder.ckpt'))
temp_cn.decoder.load_state_dict(torch.load('./pt_module_ckpt/decoder.ckpt'))
```
# NeMo with Hydra
[Hydra](https://hydra.cc/docs/intro/) is used throughout NeMo as a way to enable rapid prototyping using predefined config files. Hydra and OmegaConf offer great compatibility with each other, and below we show a few general helpful tips to improve productivity with Hydra when using NeMo.
## Hydra Help
--------
Since our scripts are written with hydra in mind, you might notice that using `python <script.py> --help` returns you a config rather than the usual help format from argparse.
Using `--help` you can see the default config attached to the script - every NeMo script has at least one default config file attached to it. This gives you a guide on how you can modify values for an experiment.
Hydra also has a special `--hydra-help` flag, which will offer you more help with respect to hydra itself as it is set up in the script.
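For example, using the ASR `speech_to_text.py` example script mentioned later in this notebook (adjust the path to wherever the script lives in your checkout):

```sh
# Print the default config attached to the script
$ python speech_to_text.py --help

# Print hydra's own help for this script
$ python speech_to_text.py --hydra-help
```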
## Changing config paths and files
---------
While all NeMo models come with at least 1 default config file, one might want to switch configs without changing code. This is easily achieved by the following commands :
- `--config-path`: Path to the directory which contains the config files
- `--config-name`: Name of the config file we wish to load.
Note that these two arguments need to be at the very beginning of your execution statement, before you provide any command line overrides to your config file.
## Overriding config from the command line
----------
Hydra allows users to provide command line overrides to any part of the config. There are three cases to consider -
- Override existing value in config
- Add new value in config
- Remove old value in config
### Overriding existing values in config
Let's take the case where we want to change the optimizer from `novograd` to `adam`. Let's also change the beta values to default adam values.
Hydra overrides are based on the `.` syntax - each `.` representing a level in the config itself.
```sh
$ python <script>.py \
--config-path="dir to config" \
--config-name="name of config" \
model.optim.name="adam" \
model.optim.betas=[0.9,0.999]
```
Note that if lists are passed, there cannot be any spaces between the items.
------
We can also support multi validation datasets with the above list syntax, but it depends on the model level support.
For ASR collection, the following syntax is widely supported in ASR, ASR-BPE and classification models. Let's take an example of a model being trained on LibriSpeech -
```sh
$ python <script>.py \
--config-path="dir to config" \
--config-name="name of config" \
model.validation_ds.manifest_filepath=["path to dev clean","path to dev other"] \
model.test_ds.manifest_filepath=["path to test clean","path to test other"]
```
### Add new values in config
----------
Hydra allows us to inject additional parameters inside the config using the `+` syntax.
Let's take an example of adding `amsgrad` fix for the `novograd` optimizer above.
```sh
$ python <script>.py \
--config-path="dir to config" \
--config-name="name of config" \
+model.optim.amsgrad=true
```
### Remove old value in config
---------
Hydra allows us to remove parameters inside the config using the `~` syntax.
Let's take an example of removing `weight_decay` inside the Novograd optimizer
```sh
$ python <script>.py \
--config-path="dir to config" \
--config-name="name of config" \
~model.optim.weight_decay
```
## Setting a value to `None` from the command line
We may sometimes choose to disable a feature by setting the value to `None`.
We can accomplish this by using the keyword `null` inside the command line.
Let's take an example of disabling the validation data loader inside an ASR model's config -
```sh
$ python <script>.py \
--config-path="dir to config" \
--config-name="name of config" \
model.test_ds.manifest_filepath=null
```
# NeMo Examples
NeMo supports various pre-built models for ASR, NLP and TTS tasks. One example we saw in this notebook is the ASR speech-to-text model Citrinet.
The NeMo repository has a dedicated `examples` directory with scripts to train and evaluate models for various tasks - ranging from ASR speech to text, NLP question answering and TTS text to speech using models such as `FastPitch` and `HiFiGAN`.
NeMo constantly adds new models and new tasks to these examples, such that these examples serve as the basis to train and evaluate models from scratch with the provided config files.
NeMo Examples directory can be found here - https://github.com/NVIDIA/NeMo/tree/main/examples
## Structure of NeMo Examples
-------
The NeMo Examples directory is structured by domain, as well as sub-task. Similar to how we partition the collections supported by NeMo, the examples themselves are separated initially by domain, and then by sub-tasks of that domain.
All these example scripts are bound to at least one default config file. These config files contain all of the information of the model, as well as the PyTorch Lightning Trainer configuration and Experiment Manager configuration.
In general, once the model is trained and saved to a PyTorch Lightning checkpoint, or to a .nemo tarfile, it will no longer contain the training configuration - no configuration information for the Trainer or Experiment Manager.
**These config files have good defaults pre-set to run an experiment with NeMo, so it is advised to base your own training configuration on these configs.**
Let's take a deeper look at some of the examples inside each domain.
## ASR Examples
-------
NeMo supports multiple Speech Recognition models such as Jasper, QuartzNet, Citrinet, Conformer and more, all of which can be trained on various datasets. We also provide pretrained checkpoints for these models trained on standard datasets so that they can be used immediately. These scripts are made available in `speech_to_text.py`.
The ASR examples also support sub-tasks such as speech classification - MatchboxNet trained on the Google Speech Commands dataset is available in `speech_to_label.py`. Voice Activity Detection is also supported by the same script, simply by changing the config file passed to it!
NeMo also supports training Speech Recognition models with Byte Pair/Word Piece encoding of the corpus, via the `speech_to_text_bpe.py` example. Since these models are still under development, their configs fall under the `experimental/configs` directory.
Finally, in order to simply perform inference on some dataset using these models, prefer to use the `speech_to_text_eval.py` example, which provides a look at how to compute WER over a dataset provided by the user.
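Putting the Hydra override syntax from the previous section together, a sketch of launching the `speech_to_text.py` training script might look like the following (the config and manifest paths are placeholders):

```sh
$ python speech_to_text.py \
    --config-path="<dir to config>" \
    --config-name="<name of config>" \
    model.train_ds.manifest_filepath="<path to train manifest>" \
    model.validation_ds.manifest_filepath="<path to dev manifest>"
```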
## NLP Examples
---------
NeMo supports a wide variety of tasks in NLP - from text classification and language modelling all the way to glue benchmarking!
All NLP models require text tokenization as data preprocessing steps. The list of tokenizers can be found in nemo.collections.common.tokenizers, and include WordPiece tokenizer, SentencePiece tokenizer or simple tokenizers like Word tokenizer.
A non-exhaustive list of tasks that NeMo currently supports in NLP is -
- Language Modelling - Assigns a probability distribution over a sequence of words. Can be either generative e.g. vanilla left-right-transformer or BERT with a masked language model loss.
- Text Classification - Classifies an entire text based on its content into predefined categories, e.g. news, finance, science etc. These models are BERT-based and can be used for applications such as sentiment analysis, relationship extraction
- Token Classification - Classifies each input token separately. Models are based on BERT. Applications include named entity recognition, punctuation and capitalization, etc.
- Intent Slot Classification - used for joint recognition of Intents and Slots (Entities) for building conversational assistants.
- Question Answering - Currently only SQuAD is supported. This takes in a question and a passage as input and predicts a span in the passage, from which the answer is extracted.
- Glue Benchmarks - A benchmark of nine sentence- or sentence-pair language understanding tasks
## TTS Examples
---------
NeMo supports Text To Speech (TTS, aka Speech Synthesis) via a two step inference procedure. First, a model is used to generate a mel spectrogram from text. Second, a model is used to generate audio from a mel spectrogram.
Supported Models:
Mel Spectrogram Generators:
* Tacotron2
* FastPitch
* Talknet
* And more...
Audio Generators (Vocoders):
* WaveGlow
* HiFiGAN
* And more...
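A minimal sketch of this two-step inference is shown below. The pretrained checkpoint names (`tts_en_fastpitch`, `tts_hifigan`) are assumptions for illustration - substitute whichever spectrogram generator and vocoder checkpoints you have available:

```
import nemo.collections.tts as nemo_tts

# Step 1: text -> mel spectrogram
spec_generator = nemo_tts.models.FastPitchModel.from_pretrained("tts_en_fastpitch")  # assumed checkpoint name
tokens = spec_generator.parse("Hello, NeMo!")
spectrogram = spec_generator.generate_spectrogram(tokens=tokens)

# Step 2: mel spectrogram -> audio
vocoder = nemo_tts.models.HifiGanModel.from_pretrained("tts_hifigan")  # assumed checkpoint name
audio = vocoder.convert_spectrogram_to_audio(spec=spectrogram)
```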
# NeMo Tutorials
Alongside the example scripts provided above, NeMo provides in depth tutorials for usage of these models for each of the above domains inside the `tutorials` directory found in the NeMo repository.
Tutorials are meant to be more in-depth explanation of the workflow in the discussed task - usually involving a small amount of data to train a small model on a task, along with some explanation of the task itself.
While the tutorials are a great example of the simplicity of NeMo, please note that for the best performance when training on real datasets, we advise using the example scripts instead of the tutorial notebooks.
NeMo Tutorials directory can be found here - https://github.com/NVIDIA/NeMo/tree/main/tutorials
|
github_jupyter
|
"""
You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
# If you're using Google Colab and not running locally, run this cell.
## Install dependencies
!pip install wget
!apt-get install sox libsndfile1 ffmpeg
!pip install unidecode
# ## Install NeMo
BRANCH = 'main'
!python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[all]
## Install TorchAudio
!pip install torchaudio>=0.10.0 -f https://download.pytorch.org/whl/torch_stable.html
## Grab the config we'll use in this example
!mkdir configs
import nemo
nemo.__version__
import nemo.collections.asr as nemo_asr
import nemo.collections.nlp as nemo_nlp
import nemo.collections.tts as nemo_tts
asr_models = [model for model in dir(nemo_asr.models) if model.endswith("Model")]
asr_models
nlp_models = [model for model in dir(nemo_nlp.models) if model.endswith("Model")]
nlp_models
tts_models = [model for model in dir(nemo_tts.models) if model.endswith("Model")]
tts_models
citrinet = nemo_asr.models.EncDecCTCModelBPE.from_pretrained('stt_en_citrinet_512')
citrinet.summarize();
from omegaconf import OmegaConf
import copy
cfg = copy.deepcopy(citrinet.cfg)
print(OmegaConf.to_yaml(cfg))
# OmegaConf won't allow you to add new config items, so we temporarily disable this safeguard.
OmegaConf.set_struct(cfg, False)
# Let's see the old optim config
print("Old Config: ")
print(OmegaConf.to_yaml(cfg.optim))
sched = {'name': 'CosineAnnealing', 'warmup_steps': 1000, 'min_lr': 1e-6}
sched = OmegaConf.create(sched) # Convert it into a DictConfig
# Assign it to cfg.optim.sched namespace
cfg.optim.sched = sched
# Let's see the new optim config
print("New Config: ")
print(OmegaConf.to_yaml(cfg.optim))
# Here, we restore the safeguards so no more additions can be made to the config
OmegaConf.set_struct(cfg, True)
preprocessor:
_target_: nemo.collections.asr.modules.AudioToMelSpectrogramPreprocessor
normalize: per_feature
window_size: 0.02
sample_rate: 16000
window_stride: 0.01
window: hann
features: 64
n_fft: 512
frame_splicing: 1
dither: 1.0e-05
stft_conv: false
new_preprocessor_config = copy.deepcopy(cfg.preprocessor)
new_preprocessor = citrinet.from_config_dict(new_preprocessor_config)
print(new_preprocessor)
citrinet.preprocessor = new_preprocessor
citrinet.summarize();
# Update the config copy
cfg.preprocessor = new_preprocessor_config
# Update the model config
citrinet.cfg = cfg
# Let's print out the current optimizer
print(OmegaConf.to_yaml(citrinet.cfg.optim))
# Now let's update the config
citrinet.setup_optimization(cfg.optim);
Neither `max_steps` nor `iters_per_batch` were provided to `optim.sched`, cannot compute effective `max_steps` !
Scheduler will not be instantiated !
OmegaConf.set_struct(cfg.optim.sched, False)
cfg.optim.sched.max_steps = 100
OmegaConf.set_struct(cfg.optim.sched, True)
# Now let's update the config and try again
citrinet.setup_optimization(cfg.optim);
optim:
name: novograd
lr: 0.01
# optimizer arguments
betas: [0.8, 0.25]
weight_decay: 0.001
# scheduler setup
sched:
name: CosineAnnealing
# Optional arguments
max_steps: null # computed at runtime or explicitly set here
monitor: val_loss
reduce_on_plateau: false
# scheduler config override
warmup_steps: 1000
warmup_ratio: null
min_lr: 1e-9
citrinet = nemo_asr.models.EncDecCTCModelBPE.load_from_checkpoint(<path to checkpoint>)
# Save the model
citrinet.save_to('citrinet_512.nemo')
!ls -d -- *.nemo
# Restore the model
temp_cn = nemo_asr.models.EncDecCTCModelBPE.restore_from('citrinet_512.nemo')
temp_cn.summarize();
# Note that the preprocessor + optimizer config have been preserved after the changes we made !
print(OmegaConf.to_yaml(temp_cn.cfg))
!cp citrinet_512.nemo citrinet_512.tar.gz
!tar -xvf citrinet_512.tar.gz
import torch
state_dict = temp_cn.extract_state_dict_from('citrinet_512.nemo', save_dir='./pt_ckpt/')
!ls ./pt_ckpt/
temp_cn.load_state_dict(torch.load('./pt_ckpt/model_weights.ckpt'))
state_dict = temp_cn.extract_state_dict_from('citrinet_512.nemo', save_dir='./pt_module_ckpt/', split_by_module=True)
!ls ./pt_module_ckpt/
temp_cn.preprocessor.load_state_dict(torch.load('./pt_module_ckpt/preprocessor.ckpt'))
temp_cn.encoder.load_state_dict(torch.load('./pt_module_ckpt/encoder.ckpt'))
temp_cn.decoder.load_state_dict(torch.load('./pt_module_ckpt/decoder.ckpt'))
$ python <script>.py \
--config-path="dir to config" \
--config-name="name of config" \
model.optim.name="adam" \
model.optim.betas=[0.9,0.999]
$ python <script>.py \
--config-path="dir to config" \
--config-name="name of config" \
model.validation_ds.manifest_filepath=["path to dev clean","path to dev other"] \
model.test_ds.manifest_filepath=["path to test clean","path to test other"]
$ python <script>.py \
--config-path="dir to config" \
--config-name="name of config" \
+model.optim.amsgrad=true
$ python <script>.py \
--config-path="dir to config" \
--config-name="name of config" \
~model.optim.weight_decay
$ python <script>.py \
--config-path="dir to config" \
--config-name="name of config" \
model.test_ds.manifest_filepath=null
## Linear Regression using Kernel Trick
### Kernel Trick
One of the limitations of linear regression is that it is usually implemented to exploit a linear relationship between the predictors in the input space, which tends to overlook the interactions between these predictors. One approach to mitigate this problem is to map the input space into a feature space ($\underline{x}\rightarrow\phi(\underline{x})$). The feature space tends to be of higher dimension than the original input space, and in this space we can apply linear regression and achieve good results by finding the parameters of a model that is non-linear in the original inputs but linear in the features. Nevertheless, this approach suffers from the fact that computing the feature vector for each observation is computationally expensive. To solve this issue we use what is known as the kernel trick (a better name would be kernel substitution): instead of calculating the feature vectors explicitly, we compute the required quantities implicitly through the kernel function. The kernel function is an inner product between two feature vectors, which can be expressed as $k(x,x')=\langle \phi(x), \phi(x')\rangle=\phi(x)^T\phi(x')$. There are many kernels, such as the linear kernel ($k(x,x')=x^Tx'$), stationary kernels ($k(x,x')=c(x-x')$), the Gaussian kernel ($k(x, x')=\exp(\frac{-||x-x'||_2^2}{2\sigma^2})$), and so on. One can view a kernel as a similarity measure between two observations. A sufficient condition for a valid (Mercer) kernel is that the kernel matrix must be positive semi-definite, which can be tested by checking the eigenvalues of the kernel matrix. The kernel matrix is the product of the design matrix with its transpose, where the design matrix has the feature vector of each observation as its rows. The equation of the kernel matrix is $K = \Phi\Phi^T$, in which:
$$
\begin{align*}
&\Phi=
\begin{pmatrix}
\phi(\underline{x_1})^T\\
\phi(\underline{x_2})^T\\
\vdots\\
\phi(\underline{x_n})^T\\
\end{pmatrix}
\end{align*}
$$
Therefore, the kernel matrix lives in $R^{n\times n}$ and is a symmetric matrix (sometimes called the Gram matrix). One of the core ideas behind kernels is that you can build more complex kernels from atomic kernels such as the linear kernel. Let's see this idea with the Gaussian kernel.
$$
\begin{align*}
\begin{split}
&k(x, x')=\exp\Big(\frac{-||x-x'||_2^2}{2\sigma^2}\Big)=\exp\Big(\frac{-(x-x')^T(x-x')}{2\sigma^2}\Big)=\exp\Big(-\frac{x^Tx-2x^Tx'+x'^Tx'}{2\sigma^2}\Big)\\
&k(x,x')=\exp\Big(-\frac{k_1(x, x)}{2\sigma^2}\Big)\exp\Big(\frac{k_1(x, x')}{\sigma^2}\Big)\exp\Big(-\frac{k_1(x', x')}{2\sigma^2}\Big),\quad \text{where } k_1(\cdot,\cdot) \text{ is the linear kernel.}\\
&\text{We can turn this into a different non-linear kernel by replacing } k_1 \text{ with, e.g., } k_2(x, x')=(x^Tx'+c)^M,\\
&\text{where } k_2 \text{ can itself be built from atomic kernels such as the linear kernel.}
\end{split}
\end{align*}
$$
Also, it is known that the feature vector underlying the Gaussian kernel is infinite-dimensional, so computing it explicitly would be computationally infeasible; this is indicative of the importance of the kernel trick.
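To make the two claims above concrete (a Mercer kernel matrix has non-negative eigenvalues, and the Gaussian kernel can be assembled from the linear kernel), here is a small numerical sketch on made-up data; it is illustrative only and not part of the regression experiment below.
```
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))   # 20 illustrative observations, 3 predictors
sigma = 1.5

# Linear (atomic) kernel matrix: K1[i, j] = x_i^T x_j
K1 = X @ X.T

# Assemble the Gaussian kernel from the linear kernel, as in the decomposition above:
# k(x, x') = exp(-k1(x,x)/2s^2) * exp(k1(x,x')/s^2) * exp(-k1(x',x')/2s^2)
diag = np.diag(K1)
K_gauss = (np.exp(-diag[:, None] / (2 * sigma**2))
           * np.exp(K1 / sigma**2)
           * np.exp(-diag[None, :] / (2 * sigma**2)))

# Mercer condition: the Gram matrix should be positive semi-definite,
# i.e. all eigenvalues >= 0 (up to numerical error).
eigvals = np.linalg.eigvalsh(K_gauss)
print(eigvals.min() >= -1e-10)
```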
### Kernel Trick on Linear Regression Direct Solution
The cost function that we usually minimize for linear regression is the mean squared error, which can be derived by maximizing the likelihood. The following regularized cost function will be used, which is equivalent to using a Gaussian prior.
$$
\begin{align*}
\begin{split}
&J(w)=\frac{1}{2}\sum_{n}(w^T\phi(x_n) -t_n)^2 + \frac{\lambda}{2}||w||_2^2\\
&\nabla_w J(w)=0 \Rightarrow w = \frac{-1}{\lambda}\sum_{n}(w^T\phi(x_n) - t_n)\phi(x_n) = \sum_{n}a_n\phi(x_n)=\Phi^T\underline{a},\quad a_n=-\frac{1}{\lambda}(w^T\phi(x_n)-t_n)\\
&\text{Substituting } w=\Phi^T a \text{ into } J(w):\\
&J(a) =\frac{1}{2}(\Phi\Phi^Ta - t)^T(\Phi\Phi^Ta - t) + \frac{\lambda}{2}(\Phi^Ta)^T(\Phi^Ta)\\
&J(a)=\frac{1}{2}a^T\Phi\Phi^T\Phi\Phi^T a-a^T\Phi\Phi^Tt +\frac{1}{2}t^Tt+\frac{\lambda}{2}a^T\Phi\Phi^Ta
=\frac{1}{2}a^TKKa-a^TKt +\frac{1}{2}t^Tt+\frac{\lambda}{2}a^TK a,\quad K=\Phi\Phi^T\\
&\text{As can be seen, } w \text{ has disappeared and been replaced by } a,\ \text{so we now minimize with respect to } a:\\
&\nabla_aJ(a)=KKa-Kt+\lambda Ka=0 \Rightarrow K\big((K+\lambda I_N)a - t\big)=0,\quad \text{using } K=K^T\\
&a=(K+\lambda I_N)^{-1}t
\end{split}
\end{align*}
$$
To make a prediction, $y(\phi(x_n), w) = w^T\phi(x_n)=a^T\Phi\phi(x_n)=\phi(x_n)^T\Phi^T(K+\lambda I_N)^{-1}t=\underline{k}(x_n)^T(K+\lambda I_N)^{-1}t$, where $\underline{k}(x_n)$ is the vector of inner products between the feature vector of $x_n$ and the rows of the design matrix, i.e. its elements are $k(x_m, x_n)$ for every training observation $x_m$. This indicates that we need to store the dataset in order to make a prediction. One might think this is a severe drawback of the method, but as we will see with SVMs, only a few of these vectors, usually called support vectors, need to be stored. As can be seen, both the prediction and the optimization are completely expressed through the kernel function, hence the name kernel substitution.
Also, as can be seen from the equation for the $a$ that minimizes the cost function, $a$ has dimension $n\times 1$ instead of $p\times 1$ (the dimension of the feature space). This raises the issue that, for a large dataset, the parameter vector $a$ becomes large, but this trade-off is well worth it when a simple linear model in the original predictors does not perform well and explicit computation of the feature vectors is computationally infeasible. The kernel trick will also be a significant ingredient in SVMs, which rely on mapping the dataset into a new space where it is linearly separable even though it is not linearly separable in the original space.
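Before the implementation, the primal/dual equivalence derived above can be sanity-checked with an explicit feature map: the dual solution $a=(K+\lambda I_N)^{-1}t$ must give the same in-sample predictions as ordinary regularized least squares on the features. The sketch below uses made-up data and a polynomial feature map purely for illustration.
```
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 2))
t = rng.normal(size=30)
lam = 0.5

# Explicit feature map: design matrix Phi with rows phi(x_n)^T
Phi = PolynomialFeatures(degree=2).fit_transform(X)

# Primal ridge solution: w = (Phi^T Phi + lambda I)^(-1) Phi^T t
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ t)

# Dual solution: a = (K + lambda I_N)^(-1) t, with K = Phi Phi^T
K = Phi @ Phi.T
a = np.linalg.solve(K + lam * np.eye(K.shape[0]), t)

# In-sample predictions agree: Phi w  ==  K a
print(np.allclose(Phi @ w, K @ a))
```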
```
%matplotlib inline
import numpy as np
import sklearn.preprocessing
import sklearn.datasets
import pandas as pd
import sklearn.model_selection
import numpy.random
import math
import sklearn.metrics
import sklearn.kernel_ridge
X, y = sklearn.datasets.load_boston(return_X_y=True)
X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X, y, random_state=42)
# Use separate scalers for X and y; re-fitting a single scaler on y would make
# the later transform of X_test use the wrong statistics.
standard_X = sklearn.preprocessing.StandardScaler()
standard_y = sklearn.preprocessing.StandardScaler()
X_train = standard_X.fit_transform(X_train)
y_train = standard_y.fit_transform(y_train.reshape(-1, 1))
training_data = np.c_[X_train, y_train]  # All features are continuous, so no one-hot encoding is needed and we can standardize directly
X_test = standard_X.transform(X_test)
y_test = standard_y.transform(y_test.reshape(-1, 1))
test_data = np.c_[X_test, y_test]
print(training_data.shape)
print(test_data.shape)
def gaussian_kernel(x, x_star, sigma):
return np.exp(np.divide(-1*(np.linalg.norm(x-x_star)**2), 2*sigma**2))
def estimating_a(X, y, lambd, sigma):
K = np.zeros((X.shape[0], X.shape[0]))
for i in range(0, X.shape[0]):
for j in range(0, X.shape[0]):
K[i, j] = gaussian_kernel(X[i, :], X[j, :], sigma)
#K[i, :] = gaussian_kernel(X[i, :], X[:, :], sigma)
#print(K)
I = np.eye(X.shape[0])
a = np.dot(np.linalg.inv(K + lambd * I), y)
return a, K
def prediction(xn, X, t, K, lambd, sigma):
k = np.zeros((X.shape[0], 1))
for i in range(0, X.shape[0]):
k[i] = gaussian_kernel(xn, X[i, :], sigma)
I = np.eye(X.shape[0])
return (np.dot( (np.dot(k.T, np.linalg.inv( K + lambd*I ))), t))[0]
a, K = estimating_a(X_train, y_train, 0.3, 2)
pred = []
for x in X_train:
pred.append(prediction(x, X_train, y_train, K, 0.3, 2))
sklearn.metrics.mean_squared_error(y_train, pred)#Should be in range of [-1, 1]
pred = []
for x in X_test:
pred.append(prediction(x, X_train, y_train, K, 0.3, 2))
sklearn.metrics.mean_squared_error(y_test, pred)#The test error became large which indicative of overfitting, so, we need to either change from Gaussian kernel or make the lambda with larger value to make the parameter more sparse
```
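As a cross-check (not part of the original experiment), the `sklearn.kernel_ridge` module imported above can fit the same model. Note the different parameterization: scikit-learn's RBF kernel is $\exp(-\gamma||x-x'||^2)$, so $\gamma=1/(2\sigma^2)$, and `alpha` plays the role of $\lambda$; with $\sigma=2$ and $\lambda=0.3$ this should closely match the hand-rolled results.
```
# Hedged cross-check against scikit-learn's kernel ridge regression.
# Assumes X_train, y_train, X_test, y_test from the cell above are still in scope.
krr = sklearn.kernel_ridge.KernelRidge(kernel="rbf", alpha=0.3, gamma=1 / (2 * 2**2))
krr.fit(X_train, y_train)
print("train MSE:", sklearn.metrics.mean_squared_error(y_train, krr.predict(X_train)))
print("test MSE: ", sklearn.metrics.mean_squared_error(y_test, krr.predict(X_test)))
```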
### References
* Chapters 3 and 5 of Bishop, C. (2006). *Pattern Recognition and Machine Learning*. Springer.
* Andrew Ng, Lec 7: (https://www.youtube.com/watch?v=s8B4A5ubw6c)
* Andrew Ng, Lec 8: (https://www.youtube.com/watch?v=bUv9bfMPMb4)
# Visualizing results of different machine learning algorithms
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib.ticker as ticker
learning_results = pd.read_csv('MachineLearningStatistics.csv').set_index('Property')
learning_results
# Define a color palette for plotting
cmap = sns.diverging_palette(261, 18, s=80, l=60, as_cmap=True)
SMALL_SIZE = 14
MEDIUM_SIZE = 18
BIGGER_SIZE = 22
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=MEDIUM_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=MEDIUM_SIZE) # legend fontsize
f, ax = plt.subplots(figsize=(10,7))
#ax.set_title('R^2: Learners vs. Properties',fontsize=BIGGER_SIZE)
learn_heatmap = sns.heatmap(learning_results, vmin=0, vmax=1.0, cmap = "YlGnBu", linewidths=3, square = True, annot=True)
ax.tick_params(axis='x', which='major', rotation=0)
plt.yticks([0.5,1.5,2.5,3.5,4.5,5.5], ('Magnetic\nSaturation', 'Magnetostriction', 'Coercivity', 'Curie\nTemperature', 'Grain\nSize','Permeability'), fontname = "Times New Roman", fontsize=14)
plt.ylabel('', rotation=0)
plt.xticks([0.5,1.5,2.5,3.5,4.5,5.5], ('Random\nForest', 'kNN', 'Decision\nTree', 'Neural\nNetwork', 'SVM','Linear\nRegression'), fontname = "Times New Roman", fontsize=14)
learn_heatmap.figure.savefig('Figures\\machine_learning_heatmap.eps', bbox_inches='tight')
magnetic_saturation = pd.read_csv('MagneticSaturationPredictions.csv')
magnetostriction = pd.read_csv('MagnetostrictionPredictions.csv')
coercivity = pd.read_csv('CoercivityPredictions.csv')
actual_saturation = magnetic_saturation['Magnetic Saturation']
predicted_saturation = magnetic_saturation['Random Forest']
actual_magnetostriction = magnetostriction['Magnetostriction']
predicted_magnetostriction = magnetostriction['Random Forest']
actual_coercivity = coercivity['LogCoercivity']
predicted_coercivity = coercivity['Random Forest']
```
## Visualizing results of random forest algorithm
```
fig, ax = plt.subplots(figsize=(5,5))
plt.text(0.25, 1.75, 'Magnetic Saturation',fontname="Times New Roman")
plt.xlabel("Actual",fontname="Times New Roman")
plt.ylabel("Predicted", fontname="Times New Roman")
plt.xticks(fontsize=14, fontname = "Times New Roman")
plt.yticks(fontsize=14, fontname = "Times New Roman")
plt.xlim(0, 2)
plt.ylim(0, 2)
# plt.title("Random Forest\nMagnetic Saturation (T)")
ax.scatter(actual_saturation, predicted_saturation, marker='o', s=18, color='#000080',zorder=1, facecolors='none')
lims = [
np.min([ax.get_xlim(), ax.get_ylim()]), # min of both axes
np.max([ax.get_xlim(), ax.get_ylim()]), # max of both axes
]
# now plot both limits against eachother
ax.tick_params(direction='in', length=5)
ax.xaxis.set_major_locator(ticker.MultipleLocator(0.5))
ax.yaxis.set_major_locator(ticker.MultipleLocator(0.5))
ax.plot(lims, lims, 'k-', zorder=2, linewidth=2, linestyle='solid', color='#FF0000')
ax.set_xlim(lims)
ax.set_ylim(lims)
fig.savefig('Figures\\magnetic_saturation_predictions.eps')
fig, ax = plt.subplots(figsize=(5,5))
plt.text(-3.75, 33.75, 'Magnetostriction',fontname="Times New Roman")
plt.xlabel("Actual",fontname="Times New Roman")
plt.ylabel("Predicted", labelpad = -2, fontname="Times New Roman")
plt.xticks(fontsize=14, fontname = "Times New Roman")
plt.yticks(fontsize=14, fontname = "Times New Roman")
plt.xlim(-10, 40)
plt.ylim(-10, 40)
ax.scatter(actual_magnetostriction, predicted_magnetostriction, marker='s', s=18, color='#808080',zorder=1, facecolors='none')
lims = [
np.min([ax.get_xlim(), ax.get_ylim()]), # min of both axes
np.max([ax.get_xlim(), ax.get_ylim()]), # max of both axes
]
# now plot both limits against eachother
ax.tick_params(direction='in', length=5)
ax.xaxis.set_major_locator(ticker.MultipleLocator(10))
ax.yaxis.set_major_locator(ticker.MultipleLocator(10))
ax.plot(lims, lims, 'k-', zorder=2, linewidth=2, linestyle='solid', color='#FF0000')
ax.set_xlim(lims)
ax.set_ylim(lims)
fig.savefig('Figures\\magnetostriction_predictions.eps')
fig, ax = plt.subplots(figsize=(5,5))
plt.text(-3.125, 8.125, 'log(Coercivity)',fontname="Times New Roman")
plt.xlabel("Actual",fontname="Times New Roman")
plt.ylabel("Predicted", labelpad = -5, fontname="Times New Roman")
plt.xticks(fontsize=14, fontname = "Times New Roman")
plt.yticks(fontsize=14, fontname = "Times New Roman")
plt.xlim(-5, 10)
plt.ylim(-5, 10)
ax.scatter(actual_coercivity, predicted_coercivity, marker='^', s=18, color='#0000ff',zorder=1, facecolors='none')
lims = [
np.min([ax.get_xlim(), ax.get_ylim()]), # min of both axes
np.max([ax.get_xlim(), ax.get_ylim()]), # max of both axes
]
# now plot both limits against eachother
ax.tick_params(direction='in', length=5)
ax.xaxis.set_major_locator(ticker.MultipleLocator(2.5))
ax.yaxis.set_major_locator(ticker.MultipleLocator(2.5))
ax.plot(lims, lims, 'k-', zorder=2, linewidth=2, linestyle='solid', color='#FF0000')
ax.set_xlim(lims)
ax.set_ylim(lims)
fig.savefig('Figures\\coercivity_predictions.eps')
```
```
# Activate R magic:
%load_ext rpy2.ipython
```
# 1. Introduction
Online sales in Chile increased 119% in the last week of March 2021, when quarantines began in the country, [according to the Santiago Chamber of Commerce](https://forbescentroamerica.com/2020/04/23/el-efecto-de-covid-19-en-el-ecommerce/).
This forced a change in the business strategy of many companies across the country. Since it was impossible to visit a store in person, a large number of companies nationwide had to set aside the classic retail store and were forced to strengthen their e-commerce sales channel.
The goal is to deliver personalized information to customers, via e-mail, based on the products they have purchased before. This also provides more information about customers for future projects.
**Blockstore** is a retail chain, located in Chile, focused on selling sneakers, accessories, and apparel from the best urban brands.
Given the global pandemic and the 2019 social unrest, Block had to adapt to the online sales channel, so the need for data analysis, and in particular data mining, arises in a changing national context where the social crisis, the pandemic, more money in circulation, and explosive growth in online sales make it necessary to have the best differentiated strategies for each customer.
This is why we decided to study the purchasing behavior of customers nationwide.
We find it interesting to study these data in order to generate more sales by focusing a differentiated business strategy on each specific group of customers according to their tastes.
# 2. Data Exploration
```
# Set our "working directory" to ~/RDATA and load the libraries
%%R
library(ggplot2)
library(dplyr)
library(tidyverse)
```
**We load the original data**
However, these data are not clean, since customers did not enter their information correctly, so the data were cleaned in order to work with them more easily.
```
# (Do not re-run this, please)
%%R
pedidos_preeliminar <- read_csv("/content/orders.csv")
pedidos_detalle_preeliminar <-read_csv("/content/order_detail.csv")
```
As a summary, the number of columns and rows of each dataset is shown below.
```
%%R
print(nrow(pedidos_preeliminar))
print(ncol(pedidos_preeliminar))
print(nrow(pedidos_detalle_preeliminar))
print(ncol(pedidos_detalle_preeliminar))
```
For the cleaning, empty rows, *dummy* records created by the company, and other inconsistent and repetitive values were removed.
The column names were also refactored for a better understanding of the dataset.
```
%%R
pedidos<-read.csv("/content/Orders_OFICIAL.csv", encoding = "UTF-8", sep=";")
pedidos_detalle <-read.csv("/content/ORDER_DETAIL_OFICIAL.csv", encoding = "UTF-8",sep=";")
pedidos$count <- as.numeric(ave(pedidos$Comuna, pedidos$Comuna, FUN = length))
pedidos_detalle$count.marca <- as.numeric(ave(pedidos_detalle$Marca, pedidos_detalle$Marca, FUN = length))
%%R
# Convert the Fecha.Compra column to Date format.
pedidos$Fecha.Compra <- as.Date(pedidos$Fecha.Compra, format ="%d-%m-%Y")
# Convert the Fecha.Pedido column to Date format. Also add a column with the year.
pedidos_detalle$Fecha.Pedido <- as.Date(pedidos_detalle$Fecha.Pedido, format ="%Y-%m-%d")
pedidos_detalle$anio <- as.numeric(format(pedidos_detalle$Fecha.Pedido,'%Y'))
```
# 2.1. Study of the filtered data
1. **Broadly speaking, we can compute the average amount a person spends at Block:**
```
%%R
nfilas <- nrow(pedidos)
total_vendido <- sum(pedidos$Precio.Pedido)
promedio_pedidos <- total_vendido/nfilas
(promedio_pedidos)
```
2. **Let's first look at the number of people who order more and less than the average:**
```
%%R
# People who order more than the average
pedidos_RM_mayorpromedio <- data.frame(pedidos[pedidos$REGION.CON.CODIGO == "RM" & pedidos$Precio.Pedido > promedio_pedidos,] )
print(nrow(pedidos_RM_mayorpromedio))
# People who order less than the average
pedidos_RM_menorpromedio <- data.frame(pedidos[pedidos$REGION.CON.CODIGO == "RM" & pedidos$Precio.Pedido <= promedio_pedidos,] )
print(nrow(pedidos_RM_menorpromedio))
```
3. **The most expensive order in Block's online store**
We can see that, in the pedidos_detalle table, using the order number we can look up the details of the purchase this customer made.
```
%%R
pedidos[which.max(pedidos$Precio.Pedido),]
%%R
# use their order number to see the purchases made
pedido_maximo_detalle <- pedidos_detalle[pedidos_detalle$Numero.Pedido == "#BL4499",]
# check that the amount is consistent across both tables
sum(pedido_maximo_detalle$Precio.Total.Productos)
```
4. **The number of orders split by region:**
```
%%R
ggplot(pedidos, aes(x = REGION.CON.CODIGO),) +
ggtitle("Cantidad de pedidos por región") +
geom_bar()
```
5. **Number of orders split by comuna (municipality)**
```
%%R
freq_comuna <- data.frame(comuna = pedidos$Comuna, count = pedidos$count)
freq_comuna <- unique(freq_comuna)
freq_comuna <- freq_comuna[order(freq_comuna[,"count"], decreasing = TRUE),]
ggplot(freq_comuna[1:30,], aes(x = reorder(comuna , count),y = count)) + ggtitle("Las 30 Comunas con mas pedidos del pais") + coord_flip() + geom_bar(stat = "identity")
```
6. Puente Alto and Las Condes are among the most populated comunas of the RM (Santiago Metropolitan Region).
Puente Alto is one of the most "sneaker-oriented" comunas vs. "Las Condes", which is a comuna that leans more toward formal clothing. Below we look at the number of orders, their average, and the total amount of money for the respective comunas:
```
%%R
pedidos_comuna_Lascondes <- data.frame(pedidos[pedidos$Comuna == "LAS CONDES", ] )
print(nrow(pedidos_comuna_Lascondes))
print(mean(pedidos_comuna_Lascondes$Precio.Pedido))
print(sum(pedidos_comuna_Lascondes$Precio.Pedido))
pedidos_comuna_maipu <- data.frame(pedidos[pedidos$Comuna == "MAIPU", ] )
print(nrow(pedidos_comuna_maipu))
print(mean(pedidos_comuna_maipu$Precio.Pedido))
print(sum(pedidos_comuna_maipu$Precio.Pedido))
```
7. **A history of orders from the start of the dataset up to the most recent date present:**
```
%%R
ggplot(pedidos, aes(x = Fecha.Compra) ) + geom_bar() + scale_x_date(date_breaks = "1 month", date_labels = "%b") + labs(title = "Compras a lo largo del tiempo", x = "fecha", y = "Cantidad de pedidos")
```
8. **Let's see how much has been sold at Block by brand**
```
%%R
freq_marca <- data.frame(marca = pedidos_detalle$Marca, count = pedidos_detalle$count.marca)
freq_marca <- unique(freq_marca)
freq_marca <- freq_marca[order(freq_marca[,"count"], decreasing = TRUE),]
ggplot(freq_marca[1:30,], aes(x = reorder(marca , count), y = count)) + ggtitle("Las 30 Marcas con mas pedidos del pais") + coord_flip() + geom_bar(stat = "identity")
```
9. **We will look at the distribution of product prices at Block by brand:**
```
%%R
precio_marca <- data.frame(marca = pedidos_detalle$Marca, precio = pedidos_detalle$Precio.Producto)
p1 <- ggplot(pedidos_detalle, aes(x = Marca, y = Precio.Producto)) + geom_boxplot()+ coord_flip() + ggtitle("Precios de la marca")
p1
```
# Milestone 3
For this milestone we will work with the *pandas* library in *Python*, so the databases must be re-imported and cleaned again in order to organize the ideas that will be carried out in this *Milestone 3*.
We import the libraries needed for the exploration:
```
# Main libraries for the exploration:
import csv
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
```
After that, we drop the address and RUT columns to protect the privacy of each buyer; besides, we have a client ID that identifies each unique person, similar to the *RUT*.
```
# Re-import everything
pedidos2 = pd.read_csv("Orders_OFICIAL.csv", sep=";")
details2 = pd.read_csv("ORDER_DETAIL_OFICIAL.csv", sep=";")
pedidos2 = pedidos2.drop(columns="Direccion 1")
pedidos2 = pedidos2.drop(columns="Direccion 2")
pedidos2 = pedidos2.drop(columns="RUT")
pedidos2 = pedidos2.drop(columns="Email")
pedidos2.head()
```
We will also load the data in R to work with the association rules.
```
%%R
install.packages('arules')
install.packages('arulesViz')
```
# How does the purchase price relate to the place the order was made from?
This was a question posed in Milestone 1, which could be answered by exploring the data.
```
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
# Comunas
comunas = pedidos2['Comuna'].unique()
# print(comunas) -> Some of them are numbers
promedios = []
for comuna in comunas:
promedios.append(pedidos2[pedidos2['Comuna'] == comuna]['Precio Pedido'].mean())
# Average order price per comuna
print("Promedio Chonchi: ",round(pedidos2[pedidos2['Comuna'] == 'CHONCHI']['Precio Pedido'].mean()),"Cantidad de pedidos:", len(pedidos2[pedidos2['Comuna'] == 'CHONCHI']['Precio Pedido']))
print("Promedio Santiago:",round(pedidos2[pedidos2['Comuna'] == 'SANTIAGO']['Precio Pedido'].mean()),"Cantidad de pedidos:", len(pedidos2[pedidos2['Comuna'] == 'SANTIAGO']['Precio Pedido']))
print("Promedio Vitacura:",round(pedidos2[pedidos2['Comuna'] == 'VITACURA']['Precio Pedido'].mean()),"Cantidad de pedidos:", len(pedidos2[pedidos2['Comuna'] == 'VITACURA']['Precio Pedido']))
print("Promedio Iquique: ",round(pedidos2[pedidos2['Comuna'] == 'IQUIQUE']['Precio Pedido'].mean()),"Cantidad de pedidos:", len(pedidos2[pedidos2['Comuna'] == 'IQUIQUE']['Precio Pedido']))
print("\n")
zip_iterator = zip(comunas, promedios)
diccionario = list(zip_iterator)
diccionario = sorted(diccionario, key=lambda cantidad:cantidad[1], reverse=True)
comunas_final = []
prom_final = []
for elem in diccionario:
if not elem[0].isnumeric():
comunas_final.append(elem[0])
prom_final.append(elem[1])
#print(comunas_final)
plt.figure(figsize=(10,6))
plt.scatter(comunas_final, prom_final)
plt.xlim(-0.5, 20.5)
plt.xticks(rotation=90)
plt.title("Promedios de pedidos por comuna (descendente)")
plt.xlabel("Comuna")
plt.ylabel("Monto promedio [$]")
plt.show()
```
# Questions and problems
Given the previous exploration and its original motivation, we formulate questions that can be answered through data mining and that can be linked to the problem stated in the motivation.
1. **Which brands are bought together? (In the same order)**
2. **Is there a trend, according to the buyer's characteristics, in the second purchase with respect to the first?**
3. **What type of product might someone buy who has already purchased product type x?**
4. **Customer behavior with respect to the number of purchases and their total amount.**
## Dataset generation
For this part, we will use datasets created from *ORDER_DETAIL_OFICIAL2.csv*, which will be the baskets used for questions 1 and 3.
The code used is attached.
### canastas_pedidos.csv
```
import numpy as np
import matplotlib.pyplot as plt
pedidos_detalles2 = pd.read_csv("ORDER_DETAIL_OFICIAL2.csv", sep=";")
pedidos = pd.read_csv("Orders_OFICIAL.csv", sep=";")
for i in pedidos_detalles2.index:
cantidad = pedidos_detalles2["Cantidad"][i]
if cantidad!=1:
for n in range(cantidad-1):
pedidos_detalles2=pedidos_detalles2.append({"id" : pedidos_detalles2["id"][i],
"Numero Pedido" : pedidos_detalles2["Numero Pedido"][i],
"Fecha Pedido" : pedidos_detalles2["Fecha Pedido"][i],
"Nombre Producto" : pedidos_detalles2["Nombre Producto"][i],
"SKU" : pedidos_detalles2["SKU"][i],
"Cantidad" : 1,
"Precio Producto" : pedidos_detalles2["Precio Producto"][i],
"Precio Total Productos" : pedidos_detalles2["Precio Total Productos"][i],
"Marca" : pedidos_detalles2["Marca"][i],
"Tipo Producto" : pedidos_detalles2["Tipo Producto"][i]}, ignore_index=True)
pedidos_detalles2=pedidos_detalles2.drop(columns="Cantidad")
canasta = {}
for i in pedidos_detalles2.index:
numero_pedido = pedidos_detalles2["Numero Pedido"][i]
marca_producto = pedidos_detalles2["Marca"][i].upper()
if numero_pedido in canasta.keys():
lista = canasta[numero_pedido]
lista.append(marca_producto)
canasta[numero_pedido] = lista
else:
canasta[numero_pedido]=[marca_producto]
otra_canasta = {}
for i in pedidos.index:
numero_pedido = pedidos["Numero Pedido"][i]
id_cliente = pedidos["ID Cliente"][i]
if id_cliente in otra_canasta.keys():
lista = otra_canasta[id_cliente]
indice = otra_canasta[id_cliente][-1][-2:]
new_indice = int(indice)+1
for i in canasta[numero_pedido]:
agregar= i +' ' +str(new_indice)
lista.append(agregar)
otra_canasta[id_cliente]=lista
else:
la_lista = []
for i in canasta[numero_pedido]:
agregar= i + " 1"
la_lista.append(agregar)
otra_canasta[id_cliente]=la_lista
lista_canasta =[]
for persona in otra_canasta:
lista_canasta.append(otra_canasta[persona])
import csv
with open('canastas_pedidos.csv', 'w', newline='') as file:
writer = csv.writer(file, quoting=csv.QUOTE_ALL,delimiter=';')
writer.writerows(lista_canasta)
```
### canastas_marcas.csv
```
pedidos_detalles2 = pd.read_csv("ORDER_DETAIL_OFICIAL2.csv", sep=";")
for i in pedidos_detalles2.index:
cantidad = pedidos_detalles2["Cantidad"][i]
if cantidad!=1:
for n in range(cantidad-1):
pedidos_detalles2=pedidos_detalles2.append({"id" : pedidos_detalles2["id"][i],
"Numero Pedido" : pedidos_detalles2["Numero Pedido"][i],
"Fecha Pedido" : pedidos_detalles2["Fecha Pedido"][i],
"Nombre Producto" : pedidos_detalles2["Nombre Producto"][i],
"SKU" : pedidos_detalles2["SKU"][i],
"Cantidad" : 1,
"Precio Producto" : pedidos_detalles2["Precio Producto"][i],
"Precio Total Productos" : pedidos_detalles2["Precio Total Productos"][i],
"Marca" : pedidos_detalles2["Marca"][i],
"Tipo Producto" : pedidos_detalles2["Tipo Producto"][i]}, ignore_index=True)
pedidos_detalles2=pedidos_detalles2.drop(columns="Cantidad")
pedidos_detalles2
canasta = {}
for i in pedidos_detalles2.index:
numero_pedido = pedidos_detalles2["Numero Pedido"][i]
marca_producto = pedidos_detalles2["Marca"][i].upper()
if numero_pedido in canasta.keys():
lista = canasta[numero_pedido]
lista.append(marca_producto)
canasta[numero_pedido] = lista
else:
canasta[numero_pedido]=[marca_producto]
lista =[]
for numero_pedido in canasta:
lista.append(canasta[numero_pedido])
import csv
with open('canastas_marcas.csv', 'w', newline='') as file:
writer = csv.writer(file, quoting=csv.QUOTE_ALL,delimiter=';')
writer.writerows(lista)
```
## Question 1
### Which brands are bought together? (In the same order)
```
from mlxtend.frequent_patterns import apriori
details = pd.read_csv("ORDER_DETAIL_OFICIAL2.csv", sep=";")
details.head(3)
# Activate R magic:
%load_ext rpy2.ipython
%%R
library('arules') # cargamos arules
```
The basket data with the brands is loaded. It corresponds to a file previously filtered and grouped into baskets, derived from the *ORDER_DETAIL_OFICIAL2.csv* file.
```
%%R
canastas_marcas <- read.transactions("canastas_marcas.csv", sep=";")
```
A very small *support* value was used because of the large number of distinct items in the data.
```
%%R
rules_marcas <- apriori(canastas_marcas, parameter=list(support=0.0001, confidence=0.1), minlen=2)
```
We show the rules generated with the above values:
```
%%R
rules_marcas.sorted <- sort(rules_marcas, by="lift")
rules_marcas.sorted.first3 <- head(rules_marcas.sorted, 50)
inspect(rules_marcas.sorted.first3)
```
#### Visualizing the rules
```
%%R
install.packages('graphlayouts')
```
Scatter plot
```
%%R
library('arulesViz')
plot(rules_marcas)
```
Grouped matrix plot
```
%%R
plot(rules_marcas, method = "grouped")
```
Graph
```
%%R
subrules <- head(rules_marcas, n = 20, by = "lift")
plot(subrules, method = "graph")
```
Looking at the association rules generated, we can see that the ones with the highest lift are Vans implies Converse and Converse implies Vans, which is not a big surprise if you know the products each brand carries, since they are generally similar in style. Given this rule, we can offer customers who buy one of these brands products from the other, and vice versa.
On the other hand, the last 3 rules are more interesting, since Block and Stance are the only brands in the store that sell socks. With this result, if new sock brands were to arrive at the store, promotional packs could be created to encourage the purchase of the new products, leaning on Vans and Converse, since the data exploration shows they are among the best-selling brands at Block.
In the grouped matrix plot, there are balloons with groups of antecedents as columns and consequents as rows, where color and size represent the interest measures (lift and support). Note that the rule with the highest lift is Vans -> Converse and the one with the highest support is Dvs -> Supra.
In this last plot, the brands and rules are represented as vertices connected by directed edges. The interest measures are again represented by the color and size of the vertices, as in the previous graph. This plot helps with visualizing the rules.
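The Python `apriori` imported from `mlxtend` at the start of this question is never actually used; as a hedged alternative to the R `arules` pipeline, the same brand baskets could be mined directly in Python along the lines of the sketch below (the file name and thresholds simply mirror the R code above, and very low support values may be slow).
```
import csv
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Read the ';'-separated baskets, one order per row.
with open("canastas_marcas.csv", newline="") as f:
    baskets = [row for row in csv.reader(f, delimiter=";") if row]

# One-hot encode the baskets into a boolean dataframe.
te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(baskets).transform(baskets), columns=te.columns_)

# Same thresholds as the arules call: support=0.0001, confidence=0.1.
frequent = apriori(onehot, min_support=0.0001, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.1)
print(rules.sort_values("lift", ascending=False).head(10))
```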
## Question 2
### Is there a trend, according to the buyer's characteristics, in the second purchase with respect to the first?
```
df = pd.read_csv("comuna_precio_pedido_1y2.csv", sep=";")
X= df[['Precio pedido 1','Precio pedido 2']].to_numpy()
y=df['Comuna'].to_numpy()
print("X:\n", X[:10])
print("Y:\n", y[:10])
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
clf.fit(X, y) ## Train using X (features) and y (class labels)
y_pred = clf.predict(X) ## predict 'y' using the matrix 'X'
print(y_pred)
from sklearn.metrics import accuracy_score
print("Accuracy:", accuracy_score(y, y_pred))
from sklearn.metrics import classification_report
print(classification_report(y, y_pred))
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.33, random_state=37,
stratify=y)
clf = DecisionTreeClassifier()
clf.fit(X_train, y_train) ## Train with features X_train and classes y_train
y_pred = clf.predict(X_test) ## Predict on new data (the test set X_test)
print("Accuracy on test set:", accuracy_score(y_test, y_pred)) ## Evaluate the prediction by comparing y_test with y_pred
```
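A single train/test split can be noisy given how many comuna classes there are; as a small optional extension (not part of the original notebook), k-fold cross-validation gives a more stable estimate of how well the first two order amounts predict the comuna. Note that scikit-learn will warn if some comunas have fewer customers than the number of folds.
```
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# 5-fold cross-validated accuracy; X and y are the feature matrix and comuna labels above.
scores = cross_val_score(DecisionTreeClassifier(random_state=37), X, y, cv=5, scoring="accuracy")
print("CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```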
## Question 3
### What type of product might someone buy who has already purchased product type x?
The basket data is loaded with the product type, the brand, and the number of the purchase in which the same person bought it. The file was previously filtered and grouped into baskets, derived from *ORDER_DETAIL_OFICIAL2.csv*.
```
%%R
canastas_pedidos <- read.transactions("canastas_pedidos.csv", sep=";")
```
A small *support* value was used because of the large number of distinct items in the data.
```
%%R
rules_pedidos <- apriori(canastas_pedidos, parameter=list(support=0.001, confidence=0.1), minlen=2)
```
We show the rules generated with the above values:
```
%%R
rules_pedidos.sorted <- sort(rules_pedidos, by="lift")
rules_pedidos.sorted.first3 <- head(rules_pedidos.sorted, 15)
inspect(rules_pedidos.sorted.first3)
%%R
out <- data.frame(
lhs=labels(lhs(rules_pedidos.sorted)),
rhs=labels(rhs(rules_pedidos.sorted)),
support = quality(rules_pedidos.sorted)$support,
count=quality(rules_pedidos.sorted)$count,
confidence=quality(rules_pedidos.sorted)$confidence)
write.csv(out, "datos_cherrypoto0.csv", row.names = FALSE)
```
We can see that there are rules that are not useful to us, for example those of the form ***{prod 3} => {prod 1}***, since we want to know the person's next purchase in chronological order.
To fix this, we save the rules to a csv file and filter them by hand, since there are not too many of them.
|lhs | | rhs | support | count | confidence
|--------------------------------------------|:---:|:------------------------:|:---------:|:-----:|----------------
|{Zapatilla SUPRA 1,Zapatilla SUPRA 3} | => | {Zapatilla SUPRA 4} | 0.001042094 | 89 |0.307958478
|{Zapatilla SUPRA 2,Zapatilla SUPRA 3} | => | {Zapatilla SUPRA 4} | 0.001018676 |87 |0.294915254
|{Beanie 2} | => | {Beanie 3} | 0.001006967 |86 | 0.202830189
|{Zapatilla SUPRA 3} | => | {Zapatilla SUPRA 4} | 0.001346525 |115 | 0.227272727
|{Jockey 1,Jockey 2} | => | {Jockey 3} | 0.002072478 |177 | 0.210965435
|{Jockey 2} | => | {Jockey 3} | 0.002763304 | 236 |0.1712627
|{Zapatilla SUPRA 1,Zapatilla SUPRA 2} | => | {Zapatilla SUPRA 3} | 0.002529126 |216 | 0.205714286
|{Zapatilla SUPRA 2} | => | {Zapatilla SUPRA 3} | 0.00345413 |295 | 0.161026201
|{Zapatilla CONVERSE 1,Zapatilla CONVERSE 2} | => | {Zapatilla CONVERSE 3} | 0.001604122 |137 | 0.123090746
|{Zapatilla VANS 1,Zapatilla VANS 2} | => | {Zapatilla VANS 3} | 0.001779755 |152 | 0.133099825
|{Zapatilla KSWISS 1} | => | {Zapatilla KSWISS 2} | 0.002318365 |198 | 0.10301769
|{Jockey 1} | => | {Jockey 2}| 0.009823781 | 839 | 0.10930172
|{Zapatilla SUPRA 1} | => | {Zapatilla SUPRA 2} | 0.012294362 | 1050 | 0.109181657
Here we can see this table sorted by the lift attribute from highest to lowest, where the first row has a lift of 143, which tells us that this itemset appears far more often than would be expected under independence.
As in the rest of the itemsets, we can see how people tend to buy SUPRA sneakers and then buy them again in their next purchase.
We can use this to our advantage by offering discounts on this same product to people who usually buy SUPRA sneakers, and thus put to good use the results obtained about how people behave across all of their purchases.
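For reference, the lift reported for these rules relates to the table's measures as follows; a lift of 143 for the first row therefore says that the consequent is about 143 times more likely given the antecedent than it is overall:
$$
\mathrm{lift}(A \Rightarrow B) = \frac{\mathrm{confidence}(A \Rightarrow B)}{\mathrm{supp}(B)} = \frac{\mathrm{supp}(A \cup B)}{\mathrm{supp}(A)\,\mathrm{supp}(B)}
$$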
## Question 4
### Customer behavior with respect to the number of purchases and their total amount
```
from sklearn.cluster import KMeans
from sklearn import datasets
import matplotlib.pyplot as plt
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import DBSCAN
from sklearn.metrics import silhouette_score
```
A table is created with the number of orders and the total amount spent by each customer.
```
# create a table from the other one
pedidos3 = pedidos2[['ID Cliente', 'Cantidad Pedidos Cliente', 'Total Gastado Cliente']].copy()
print("Numero de filas con clientes repetidos:",pedidos3.size)
pedidos3 = pedidos3.drop_duplicates(subset = ['ID Cliente'])
print("Numero de filas luego de quitar clientes repetidos:",pedidos3.size)
pedidos3 = pedidos3.drop(['ID Cliente'], axis=1)
pedidos3.head()
```
A dataframe is created, which will be used for the clustering.
```
X = pd.DataFrame(pedidos3).to_numpy()
X
```
**K-means:**
The elbow method is applied for 1 to 10 clusters to decide how many clusters will be used later.
```
sse = []
clusters = list(range(1, 10))
for k in clusters:
kmeans = KMeans(n_clusters=k).fit(X)
sse.append(kmeans.inertia_)
plt.plot(clusters, sse, marker="o")
plt.title("Metodo del codo de 1 a 10 clusters")
plt.grid(True)
plt.show()
```
The clusters are plotted.
```
kmeans3 = KMeans(n_clusters=3, random_state=100).fit(X)
plt.scatter(X[:, 0], X[:, 1], c=kmeans3.labels_)
plt.title("K-Means")
plt.ylabel("Monto Gastado")
plt.xlabel("Cantidad de pedidos")
plt.show()
```
**DBSCAN:**
The knee method is used to estimate the optimal value of eps. Since no clear inflection point is found, a few values are chosen where the curve rises.
```
nbrs = NearestNeighbors(n_neighbors=3).fit(X)
distances, indices = nbrs.kneighbors(X)
distances = np.sort(distances, axis=0)
distances = distances[:,1]
plt.axhline(y=0.1, color='r', linestyle='--') # Adjust the value of "y" in this line
plt.axhline(y=0.9, color='orange', linestyle='--') # Adjust the value of "y" in this line
plt.axhline(y=4, color='g', linestyle='--') # Adjust the value of "y" in this line
plt.ylim(0, 5)
plt.plot(distances)
```
The clusters are plotted with eps=0.1.
```
eps = 0.1
min_samples = 3
dbscan01 = DBSCAN(eps=eps, min_samples=min_samples).fit(X)
plt.scatter(X[:,0], X[:,1], c=dbscan01.labels_)
plt.title(f"DBSCAN: eps={eps}, min_samples={min_samples}")
#plt.ylim(0, 10000)
plt.show()
```
The clusters are plotted with eps=0.9.
```
eps = 0.9
min_samples = 3
dbscan09 = DBSCAN(eps=eps, min_samples=min_samples).fit(X)
plt.scatter(X[:,0], X[:,1], c=dbscan09.labels_)
plt.title(f"DBSCAN: eps={eps}, min_samples={min_samples}")
plt.show()
```
The clusters are plotted with eps=4.
```
eps = 4
min_samples = 3
dbscan4 = DBSCAN(eps=eps, min_samples=min_samples).fit(X)
plt.scatter(X[:,0], X[:,1], c=dbscan4.labels_)
plt.title(f"DBSCAN: eps={eps}, min_samples={min_samples}")
plt.show()
```
Finally, the resulting clusters are evaluated using the Silhouette coefficient.
```
print("Dataset X K-Means 3\t", silhouette_score(X, kmeans3.labels_))
# for DBSCAN we have to filter out the negative labels, since they represent noise, not another cluster
_filter_label = dbscan01.labels_ >= 0
print("Dataset X DBSCAN 0.1 \t", silhouette_score(X[_filter_label], dbscan01.labels_[_filter_label]))
_filter_label = dbscan09.labels_ >= 0
print("Dataset X DBSCAN 0.9 \t", silhouette_score(X[_filter_label], dbscan09.labels_[_filter_label]))
_filter_label = dbscan4.labels_ >= 0
print("Dataset X DBSCAN 4 \t", silhouette_score(X[_filter_label], dbscan4.labels_[_filter_label]))
```
Given the coefficients obtained, we look for a Silhouette coefficient close to 1. Therefore, the best results among the 4 experiments are DBSCAN with *eps=0.1* and *eps=0.9*, since their value is 1.
Regarding the separation of the data, the different clusters were seen to represent the different types of customers, mostly differentiated by the distance in price. For example, the K-Means results proved effective at distinguishing buyers of large item lists, most of whom spent more than a million Chilean pesos. For the DBSCAN algorithm, trying different eps values showed that it managed to group the more casual customers, i.e., those with a low number of orders and regular spending. This algorithm treated the points with the most orders and the most money spent as noise/outliers, so it turned out not to be as visually effective on our dataset despite achieving a Silhouette score of 1: the algorithm decides it is doing good clustering, but it does not capture as many points as we would like.
Finally, we can conclude that, as expected, there are customers who spend different amounts of money as a function of their total number of orders, a distribution whose visual analysis shows that the dispersion is large, but that it still resembles a straight line as the number of orders increases. Furthermore, we can infer that this variability in the amount spent for large order counts has to do with the type of customer making these purchases, whether a wholesale buyer or a frequent customer. It would be interesting to analyze this same kind of question on larger datasets to see how the range of this curve expands, and whether it really resembles a straight line or not.
# Conclusions
One of the main problems when generating the baskets was that many of them contained a single product or the same product several times, which is not considered when generating the rules, making it harder to obtain results and reducing the number of possible cases.
In general, the data required a lot of filtering, since there were quite a few inconsistencies and columns that were not useful to us. Next time, a smaller dataset with data more specific to what we need could be chosen to avoid this.
# Contributions
**María Hernández:** In charge of data cleaning.
**Lung Pang:** In charge of data exploration and question formulation.
**Cristóbal Saldías:** In charge of the report, the results, and their organization.
**Víctor Vidal:** In charge of the presentation and preliminary results.
pedidos_detalles2
canasta = {}
for i in pedidos_detalles2.index:
numero_pedido = pedidos_detalles2["Numero Pedido"][i]
marca_producto = pedidos_detalles2["Marca"][i].upper()
if numero_pedido in canasta.keys():
lista = canasta[numero_pedido]
lista.append(marca_producto)
canasta[numero_pedido] = lista
else:
canasta[numero_pedido]=[marca_producto]
lista =[]
for numero_pedido in canasta:
lista.append(canasta[numero_pedido])
import csv
with open('canastas_marcas.csv', 'w', newline='') as file:
writer = csv.writer(file, quoting=csv.QUOTE_ALL,delimiter=';')
writer.writerows(lista)
from mlxtend.frequent_patterns import apriori
details = pd.read_csv("ORDER_DETAIL_OFICIAL2.csv", sep=";")
details.head(3)
# Activate R magic:
%load_ext rpy2.ipython
%%R
library('arules') # cargamos arules
%%R
canastas_marcas <- read.transactions("canastas_marcas.csv", sep=";")
%%R
rules_marcas <- apriori(canastas_marcas, parameter=list(support=0.0001, confidence=0.1), minlen=2)
%%R
rules_marcas.sorted <- sort(rules_marcas, by="lift")
rules_marcas.sorted.first3 <- head(rules_marcas.sorted, 50)
inspect(rules_marcas.sorted.first3)
%%R
install.packages('graphlayouts')
%%R
library('arulesViz')
plot(rules_marcas)
%%R
plot(rules_marcas, method = "grouped")
%%R
subrules <- head(rules_marcas, n = 20, by = "lift")
plot(subrules, method = "graph")
df = pd.read_csv("comuna_precio_pedido_1y2.csv", sep=";")
X= df[['Precio pedido 1','Precio pedido 2']].to_numpy()
y=df['Comuna'].to_numpy()
print("X:\n", X[:10])
print("Y:\n", y[:10])
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
clf.fit(X, y) ## Entrenar usando X (features), y (clase)
y_pred = clf.predict(X) ## predecir 'y' usando la matriz 'X'
print(y_pred)
from sklearn.metrics import accuracy_score
print("Accuracy:", accuracy_score(y, y_pred))
from sklearn.metrics import classification_report
print(classification_report(y, y_pred))
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.33, random_state=37,
stratify=y)
clf = DecisionTreeClassifier()
clf.fit(X_train, y_train) ## Entrenamos con features X_train y clases y_train
y_pred = clf.predict(X_test) ## Predecimos con nuevos datos (los de test X_test)
print("Accuracy en test set:", accuracy_score(y_test, y_pred)) ## Evaluamos la predicción comparando y_test con y_pred
%%R
canastas_pedidos <- read.transactions("canastas_pedidos.csv", sep=";")
%%R
rules_pedidos <- apriori(canastas_pedidos, parameter=list(support=0.001, confidence=0.1), minlen=2)
%%R
rules_pedidos.sorted <- sort(rules_pedidos, by="lift")
rules_pedidos.sorted.first3 <- head(rules_pedidos.sorted, 15)
inspect(rules_pedidos.sorted.first3)
%%R
out <- data.frame(
lhs=labels(lhs(rules_pedidos.sorted)),
rhs=labels(rhs(rules_pedidos.sorted)),
support = quality(rules_pedidos.sorted)$support,
count=quality(rules_pedidos.sorted)$count,
confidence=quality(rules_pedidos.sorted)$confidence)
write.csv(out, "datos_cherrypoto0.csv", row.names = FALSE)
from sklearn.cluster import KMeans
from sklearn import datasets
import matplotlib.pyplot as plt
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import DBSCAN
from sklearn.metrics import silhouette_score
# create a new table from the existing one
pedidos3 = pedidos2[['ID Cliente', 'Cantidad Pedidos Cliente', 'Total Gastado Cliente']].copy()
print("Numero de filas con clientes repetidos:",pedidos3.size)
pedidos3 = pedidos3.drop_duplicates(subset = ['ID Cliente'])
print("Numero de filas luego de quitar clientes repetidos:",pedidos3.size)
pedidos3 = pedidos3.drop(['ID Cliente'], axis=1)
pedidos3.head()
X = pd.DataFrame(pedidos3).to_numpy()
X
sse = []
clusters = list(range(1, 10))
for k in clusters:
kmeans = KMeans(n_clusters=k).fit(X)
sse.append(kmeans.inertia_)
plt.plot(clusters, sse, marker="o")
plt.title("Metodo del codo de 1 a 10 clusters")
plt.grid(True)
plt.show()
kmeans3 = KMeans(n_clusters=3, random_state=100).fit(X)
plt.scatter(X[:, 0], X[:, 1], c=kmeans3.labels_)
plt.title("K-Means")
plt.ylabel("Monto Gastado")
plt.xlabel("Cantidad de pedidos")
plt.show()
nbrs = NearestNeighbors(n_neighbors=3).fit(X)
distances, indices = nbrs.kneighbors(X)
distances = np.sort(distances, axis=0)
distances = distances[:,1]
plt.axhline(y=0.1, color='r', linestyle='--') # adjust the value of "y" on this line
plt.axhline(y=0.9, color='orange', linestyle='--') # adjust the value of "y" on this line
plt.axhline(y=4, color='g', linestyle='--') # adjust the value of "y" on this line
plt.ylim(0, 5)
plt.plot(distances)
eps = 0.1
min_samples = 3
dbscan01 = DBSCAN(eps=eps, min_samples=min_samples).fit(X)
plt.scatter(X[:,0], X[:,1], c=dbscan01.labels_)
plt.title(f"DBSCAN: eps={eps}, min_samples={min_samples}")
#plt.ylim(0, 10000)
plt.show()
eps = 0.9
min_samples = 3
dbscan09 = DBSCAN(eps=eps, min_samples=min_samples).fit(X)
plt.scatter(X[:,0], X[:,1], c=dbscan09.labels_)
plt.title(f"DBSCAN: eps={eps}, min_samples={min_samples}")
plt.show()
eps = 4
min_samples = 3
dbscan4 = DBSCAN(eps=eps, min_samples=min_samples).fit(X)
plt.scatter(X[:,0], X[:,1], c=dbscan4.labels_)
plt.title(f"DBSCAN: eps={eps}, min_samples={min_samples}")
plt.show()
print("Dataset X K-Means 3\t", silhouette_score(X, kmeans3.labels_))
# for DBSCAN we have to filter out the negative labels, since they represent noise, not another cluster
_filter_label = dbscan01.labels_ >= 0
print("Dataset X DBSCAN 0.1 \t", silhouette_score(X[_filter_label], dbscan01.labels_[_filter_label]))
_filter_label = dbscan09.labels_ >= 0
print("Dataset X DBSCAN 0.9 \t", silhouette_score(X[_filter_label], dbscan09.labels_[_filter_label]))
_filter_label = dbscan4.labels_ >= 0
print("Dataset X DBSCAN 4 \t", silhouette_score(X[_filter_label], dbscan4.labels_[_filter_label]))
| 0.113826 | 0.951414 |
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
# AutoML 00. Configuration
In this example you will create an Azure Machine Learning `Workspace` object and initialize your notebook directory to easily reload this object from a configuration file. Typically you will only need to run this once per notebook directory, and all other notebooks in this directory or any sub-directories will automatically use the settings you indicate here.
## Prerequisites:
Before running this notebook, run the `automl_setup` script described in README.md.
### Connect to Your Azure Subscription
In order to use an Azure ML workspace, you need access to an Azure Subscription. You can [create a new Azure Subscription](https://azure.microsoft.com/en-us/free) or get existing subscription information from the [Azure portal](https://portal.azure.com).
First login to Azure and follow prompts to authenticate. Then check that your subscription is correct.
```
!az login
!az account show
```
If you have multiple subscriptions and need to change the active one, you can use this command:
```shell
az account set -s <subscription-id>
```
### Register Machine Learning Services Resource Provider
This step is required to use the Azure ML services backing the SDK.
```
# Register the new resource provider.
!az provider register -n Microsoft.MachineLearningServices
# Check resource provider registration status.
!az provider show -n Microsoft.MachineLearningServices
```
### Check the Azure ML Core SDK Version to Validate Your Installation
```
import azureml.core
print("SDK Version:", azureml.core.VERSION)
```
## Initialize an Azure ML Workspace
### What is an Azure ML Workspace and Why Do I Need One?
An Azure ML workspace is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an Azure ML workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, operationalization, and the monitoring of operationalized models.
### What do I Need?
To create or access an Azure ML workspace, you will need to import the Azure ML library and specify the following information:
* A name for your workspace. You can choose one.
* Your subscription id. Use the `id` value from the `az account show` command output above.
* The resource group name. The resource group organizes Azure resources and provides a default region for the resources in the group. You can specify a new one (in which case it is created for your workspace), use an existing one, or create one beforehand from the [Azure portal](https://portal.azure.com).
* Supported regions include `eastus2`, `eastus`,`westcentralus`, `southeastasia`, `westeurope`, `australiaeast`, `westus2`, `southcentralus`.
```
subscription_id = "<subscription_id>"
resource_group = "myrg"
workspace_name = "myws"
workspace_region = "eastus2"
```
## Creating a Workspace
If you already have access to an Azure ML workspace you want to use, you can skip this cell. Otherwise, this cell will create an Azure ML workspace for you in the specified subscription, provided you have the correct permissions for the given `subscription_id`.
This will fail when:
1. The workspace already exists.
2. You do not have permission to create a workspace in the resource group.
3. You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscription.
If workspace creation fails for any reason other than already existing, please work with your IT administrator to provide you with the appropriate permissions or to provision the required resources.
**Note:** Creation of a new workspace can take several minutes.
```
# Import the Workspace class and create the workspace.
from azureml.core import Workspace
ws = Workspace.create(name = workspace_name,
subscription_id = subscription_id,
resource_group = resource_group,
location = workspace_region)
ws.get_details()
```
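If you are not sure whether the workspace already exists, a defensive pattern is to look it up first and only create it on failure. This is only a sketch; it assumes the `subscription_id`, `resource_group`, `workspace_name`, and `workspace_region` variables defined above.
```
from azureml.core import Workspace

# Sketch: reuse the workspace if it already exists, otherwise create it.
try:
    ws = Workspace.get(name = workspace_name,
                       subscription_id = subscription_id,
                       resource_group = resource_group)
    print("Found existing workspace.")
except Exception:
    ws = Workspace.create(name = workspace_name,
                          subscription_id = subscription_id,
                          resource_group = resource_group,
                          location = workspace_region)
    print("Created a new workspace.")
ws.get_details()
```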
## Configuring Your Local Environment
You can validate that you have access to the specified workspace and write a configuration file to the default configuration location, `./aml_config/config.json`.
```
from azureml.core import Workspace
ws = Workspace(workspace_name = workspace_name,
subscription_id = subscription_id,
resource_group = resource_group)
# Persist the subscription id, resource group name, and workspace name in aml_config/config.json.
ws.write_config()
```
You can then load the workspace from this config file from any notebook in the current directory.
```
# Load workspace configuration from ./aml_config/config.json file.
my_workspace = Workspace.from_config()
my_workspace.get_details()
```
## Create a Folder to Host All Sample Projects
Finally, create a folder where all the sample projects will be hosted.
```
import os
sample_projects_folder = './sample_projects'
if not os.path.isdir(sample_projects_folder):
os.mkdir(sample_projects_folder)
print('Sample projects will be created in {}.'.format(sample_projects_folder))
```
## Success!
Great, you are ready to move on to the rest of the sample notebooks.
|
github_jupyter
|
!az login
!az account show
az account set -s <subscription-id>
# Register the new resource provider.
!az provider register -n Microsoft.MachineLearningServices
# Check resource provider registration status.
!az provider show -n Microsoft.MachineLearningServices
import azureml.core
print("SDK Version:", azureml.core.VERSION)
subscription_id = "<subscription_id>"
resource_group = "myrg"
workspace_name = "myws"
workspace_region = "eastus2"
# Import the Workspace class and create the workspace.
from azureml.core import Workspace
ws = Workspace.create(name = workspace_name,
subscription_id = subscription_id,
resource_group = resource_group,
location = workspace_region)
ws.get_details()
from azureml.core import Workspace
ws = Workspace(workspace_name = workspace_name,
subscription_id = subscription_id,
resource_group = resource_group)
# Persist the subscription id, resource group name, and workspace name in aml_config/config.json.
ws.write_config()
# Load workspace configuration from ./aml_config/config.json file.
my_workspace = Workspace.from_config()
my_workspace.get_details()
import os
sample_projects_folder = './sample_projects'
if not os.path.isdir(sample_projects_folder):
os.mkdir(sample_projects_folder)
print('Sample projects will be created in {}.'.format(sample_projects_folder))
| 0.448909 | 0.92297 |
```
from NYTAnalysis import *
font = {'family' : 'normal',
'weight':'normal',
'size' : 20}
plt.rc('font', **font)
plt.close()
fig, ax1 = plt.subplots()
plt.title('Rate of headlines including key words: President, Trump, ... \n')
fig.set_size_inches(10,6)
presVyears=load_obj('presVyears')
years=presVyears['Years']
pres=presVyears['President']
clintonsVyears=load_obj('clintonsVyears')
clinton=clintonsVyears['Clinton']
trumpsVyears=load_obj('trumpsVyears')
trump=trumpsVyears['Trump']
obamasVyears=load_obj('obamasVyears')
obama=obamasVyears['Obama']
bushsVyears=load_obj('bushsVyears')
bush=bushsVyears['Bush']
nixonsVyears=load_obj('nixonsVyears')
nixon=nixonsVyears['Nixon']
reagansVyears=load_obj('reagansVyears')
reagan=reagansVyears['reagans']
# congresssVyears=load_obj('congresssVyears')
# congress=congresssVyears['congresss']
# Plots only between administration years
cartersVyears=load_obj('cartersVyears')
ax1.plot(cartersVyears['Years'], cartersVyears['carters'], '.',color='darkgray')
trumansVyears=load_obj('trumansVyears')
ax1.plot(trumansVyears['Years'], trumansVyears['trumans'], '.',color='darkgray')
eisenhowersVyears=load_obj('eisenhowersVyears')
ax1.plot(eisenhowersVyears['Years'], eisenhowersVyears['eisenhowers'], '.',color='darkgray')
kennedysVyears=load_obj('kennedysVyears')
ax1.plot(kennedysVyears['Years'], kennedysVyears['kennedys'], '.',color='darkgray')
johnsonsVyears=load_obj('johnsonsVyears')
ax1.plot(johnsonsVyears['Years'], johnsonsVyears['johnsons'], '.',color='darkgray')
fordsVyears=load_obj('fordsVyears')
ax1.plot(fordsVyears['Years'], fordsVyears['fords'], '.',color='darkgray')
# Full plots below, those plotted above were cut to administration years
t = years
ax1.plot(t, pres, 'k-')
# ax1.plot(t, congress, '-',color='darkgray')
ax1.plot(t, trump, '-',color='orange')
ax1.plot(t, trump, '.',color='orange',markersize=20)
ax1.plot(t, obama, '.',color='blue')
ax1.plot(t, bush, '.',color='red')
ax1.plot(t, clinton, '.',color='aqua')
ax1.plot(t, nixon, '.',color='darkgray')
ax1.plot(t, reagan, '.',color='darkgray')
ax1.set_xlabel('Years')
# Make the y-axis label, ticks and tick labels match the line color.
ax1.set_ylabel('Headline mentions / Total articles', color='k')
ax1.tick_params('y', colors='k')
ax1.set_xlim([1950, 2018])
ax1.set_ylim([0.0005, .07])
fig.tight_layout()
plt.show()
```
## Figure 2
After reducing and munging the 20GB of article metadata from the NYTimes archive API, we produce the following from the 2GB subset that includes publication dates, word counts, and headlines:
(Black line) We observe that the rate at which the word 'president' (case-insensitive) was mentioned in a NYTimes headline decreased from a high of 0.018 in 1955 to a lull of 0.002 from 1990-2013. Only with the start of the presidential primary campaigns, around 2014, do we see an uptick in the rate of 'president' being mentioned in a headline.
<span style="color:orange">(Orange)</span> Remarkably, we observe that articles with headlines mentioning 'Trump' make up 6% of the NYTimes content for 2017.
Compare that to a peak coverage rate of 1.5% for <span style="color:blue">Obama</span>, 1.0% for either <span style="color:red">Bush</span>, and 1.5% for either <span style="color:aqua">Clinton</span>.
<span style="color:DarkGrey">(Dark Grey)</span> Looking at every other president since 1950, we see that the reporting rate for presidents hovers consistently around 1-2.5%. This indicates that Trump is an outlier with respect to NYTimes coverage.
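For readers curious how such a rate series could be reproduced, here is a minimal sketch. It assumes a hypothetical dataframe `articles` with `year` and `headline` columns; the notebook itself loads precomputed series via `load_obj`, so this is illustrative only.
```
import pandas as pd

def mention_rate(articles, term):
    """Fraction of articles per year whose headline mentions `term` (case-insensitive)."""
    hits = articles['headline'].str.contains(term, case=False, na=False)
    # headline mentions per year divided by total articles per year
    return hits.groupby(articles['year']).mean()

# e.g. trump_rate = mention_rate(articles, 'trump')
```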
|
github_jupyter
|
from NYTAnalysis import *
font = {'family' : 'normal',
'weight':'normal',
'size' : 20}
plt.rc('font', **font)
plt.close()
fig, ax1 = plt.subplots()
plt.title('Rate of headlines including key words: President, Trump, ... \n')
fig.set_size_inches(10,6)
presVyears=load_obj('presVyears')
years=presVyears['Years']
pres=presVyears['President']
clintonsVyears=load_obj('clintonsVyears')
clinton=clintonsVyears['Clinton']
trumpsVyears=load_obj('trumpsVyears')
trump=trumpsVyears['Trump']
obamasVyears=load_obj('obamasVyears')
obama=obamasVyears['Obama']
bushsVyears=load_obj('bushsVyears')
bush=bushsVyears['Bush']
nixonsVyears=load_obj('nixonsVyears')
nixon=nixonsVyears['Nixon']
reagansVyears=load_obj('reagansVyears')
reagan=reagansVyears['reagans']
# congresssVyears=load_obj('congresssVyears')
# congress=congresssVyears['congresss']
# Plots only between administration years
cartersVyears=load_obj('cartersVyears')
ax1.plot(cartersVyears['Years'], cartersVyears['carters'], '.',color='darkgray')
trumansVyears=load_obj('trumansVyears')
ax1.plot(trumansVyears['Years'], trumansVyears['trumans'], '.',color='darkgray')
eisenhowersVyears=load_obj('eisenhowersVyears')
ax1.plot(eisenhowersVyears['Years'], eisenhowersVyears['eisenhowers'], '.',color='darkgray')
kennedysVyears=load_obj('kennedysVyears')
ax1.plot(kennedysVyears['Years'], kennedysVyears['kennedys'], '.',color='darkgray')
johnsonsVyears=load_obj('johnsonsVyears')
ax1.plot(johnsonsVyears['Years'], johnsonsVyears['johnsons'], '.',color='darkgray')
fordsVyears=load_obj('fordsVyears')
ax1.plot(fordsVyears['Years'], fordsVyears['fords'], '.',color='darkgray')
# Full plots below, those plotted above were cut to administration years
t = years
ax1.plot(t, pres, 'k-')
# ax1.plot(t, congress, '-',color='darkgray')
ax1.plot(t, trump, '-',color='orange')
ax1.plot(t, trump, '.',color='orange',markersize=20)
ax1.plot(t, obama, '.',color='blue')
ax1.plot(t, bush, '.',color='red')
ax1.plot(t, clinton, '.',color='aqua')
ax1.plot(t, nixon, '.',color='darkgray')
ax1.plot(t, reagan, '.',color='darkgray')
ax1.set_xlabel('Years')
# Make the y-axis label, ticks and tick labels match the line color.
ax1.set_ylabel('Headline mentions / Total articles', color='k')
ax1.tick_params('y', colors='k')
ax1.set_xlim([1950, 2018])
ax1.set_ylim([0.0005, .07])
fig.tight_layout()
plt.show()
| 0.658418 | 0.623764 |
# Requirements (single-day data for July 18)
## 1. Participant counts
As of roughly 20:30 on the evening of July 17, 2021, XX student attempts, XX teachers, and XX principals had completed the test that day; cumulatively, XX student attempts, XX teachers, and XX principals have completed the test.
## 2. Answering time
The allotted test time for both the Intelligent Computing Literacy and Problem Solving Literacy modules is 60 minutes; so far, students who completed the test spent an average of XX minutes answering.
(1) Intelligent Computing Literacy module
In the pre-test, the average answering time for the Intelligent Computing Literacy items was 40-50 minutes, with a small number of students finishing in under 30 minutes or taking more than an hour. On July 17, XX students in the Mianyang area had an actual answering time of 10 minutes or less, XX students of 20 minutes or less, and XX students of 30 minutes or less. The overall answering-time distribution is shown below:
[Pie chart of the answering-time distribution for students who completed the Intelligent Computing Literacy module on July 17, in minutes, one slice per 10-minute bin, each slice labeled with its share, and a single "60 minutes and above" slice for anything over 60 minutes]
As of roughly 20:30 on July 17, a cumulative total of XX students had an actual answering time of 10 minutes or less, XX students of 20 minutes or less, and XX students of 30 minutes or less. The overall answering-time distribution is shown below:
[Pie chart of the cumulative answering-time distribution for the Intelligent Computing Literacy module, in minutes, one slice per 10-minute bin, each slice labeled with its share, and a single "60 minutes and above" slice for anything over 60 minutes]
(2) Problem Solving Literacy module
In the pre-test for the Problem Solving Literacy items, 33% of students took more than 45 minutes, 23% more than 50 minutes, and 9% more than 60 minutes; most students needed around 50 minutes to finish the test. On July 17, XX students in the Mianyang area had an actual answering time of 10 minutes or less, XX students of 20 minutes or less, and XX students of 30 minutes or less. The overall answering-time distribution is shown below:
[Pie chart of the answering-time distribution for students who completed the Problem Solving module on July 17, in minutes, one slice per 10-minute bin, each slice labeled with its share, and a single "60 minutes and above" slice for anything over 60 minutes]
As of roughly 20:30 on July 17, a cumulative total of XX students had an actual answering time of 10 minutes or less, XX students of 20 minutes or less, and XX students of 30 minutes or less. The overall answering-time distribution is shown below:
[Pie chart of the cumulative answering-time distribution for the Problem Solving Literacy module, in minutes, one slice per 10-minute bin, each slice labeled with its share, and a single "60 minutes and above" slice for anything over 60 minutes]
## 3. School distribution of respondents
Schools of students whose answering time was 10 minutes or less on July 17 (more than 10 such students):
School name | Number of students
Schools of students whose answering time was 10 minutes or less, cumulative as of roughly 20:30 on July 17 (more than 50 such students):
School name | Number of students
## 4. Export an Excel file
Add a column with the answering time.
# First, import the third-party libraries
```
import pandas as pd
import json
import numpy as np
import ast
from datetime import datetime
import plotly.graph_objs as go
from plotly.offline import plot
import plotly.offline as offline
import plotly.figure_factory as ff
from pandas.core.indexes import interval
import re
```
# Define a per-row function that computes the answering time
```
# 2021-07-16T19:31:13+08:00
def get_interval_per_row(index, df):
row_data = df.loc[index,:]
start_time = row_data['start_time']
if start_time != start_time:
return -1
start_time = datetime.strptime(str(start_time),"%Y-%m-%dT%H:%M:%S+08:00")
expire_time = row_data['expire_time']
if expire_time != expire_time:
return -1
expire_time = datetime.strptime(str(expire_time),"%Y-%m-%dT%H:%M:%S+08:00")
stop_time = row_data['stop_time']
if stop_time != stop_time:
return -1
stop_time = datetime.strptime(str(stop_time),"%Y-%m-%dT%H:%M:%S+08:00")
total_minu = (stop_time - start_time).seconds / 60.0
return total_minu
```
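As an aside, the same interval can be computed without a Python-level loop; the sketch below assumes the same `start_time`/`stop_time` columns and ISO-8601 format shown above, and marks missing timestamps as NaN rather than -1.
```
# Vectorized alternative (sketch): pandas parses the ISO-8601 timestamps directly.
def add_interval_minutes(df):
    start = pd.to_datetime(df['start_time'], errors='coerce')
    stop = pd.to_datetime(df['stop_time'], errors='coerce')
    minutes = (stop - start).dt.total_seconds() / 60.0
    return df.assign(interval_minutes=minutes)
```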
# Define a function to decode the task_answers field
```
def remove_str_per_row(data_per_row):
frame_list = ast.literal_eval(data_per_row)
frame_dic_list = []
for index in range(len(frame_list)):
temp = json.loads(frame_list[index])
if 'frame' in temp.keys():
if 'data' in temp.keys():
frame_dic_list.append(list(temp['frame']['data'].values()))
else:
frame_dic_list.append(list(temp['frame'].values()))
else:
frame_dic_list.append(temp)
return frame_dic_list
```
# Read the data
```
df_main = pd.read_csv('./data/单日数据-18日-new.csv')
df_main
```
# Check that the functions run correctly
```
dic_list_test = remove_str_per_row(df_main.loc[1, 'task_answers'])
dic_list_test, len(dic_list_test)
time_test_minu = get_interval_per_row(0, df_main)
time_test_minu
```
# Add an answering-time attribute ('interval')
```
time_minu_list = []
drop_index_list = []
for row in range(len(df_main)):
interval_minu = get_interval_per_row(row, df_main)
if interval_minu == -1:
drop_index_list.append(row)
else:
time_minu_list.append(interval_minu)
len(df_main)
len(drop_index_list)
len(time_minu_list)
df_main = df_main.drop(drop_index_list)
if 'interval_minutes' not in df_main.columns:
df_main.insert(len(df_main.columns), 'interval_minutes', time_minu_list)
```
# Split the data by test paper
```
grouped_main = df_main.groupby('contest_id')
df_contest_list = [tup[1] for tup in list(grouped_main)]
df_contest_name_list = [tup[0] for tup in list(grouped_main)]
df_contest_name_list
```
# Grouping results
Intelligent Computing and Problem Solving are at index 0 and index 1 of the list, respectively; the teacher and principal papers follow at indices 2 and 3.
```
df_res_1 = df_contest_list[0] # 智能计算
df_res_2 = df_contest_list[1] # 问题解决
grouped_main['interval_minutes'].mean()
```
# Number of teachers and principals
```
len(df_contest_list[2]), len(df_contest_list[3])
```
# Number of students in Intelligent Computing
```
x1_sum = len(df_contest_list[0])
x1_sum
```
# Number of students in Problem Solving
```
x2_sum = len(df_contest_list[1])
x2_sum
```
# Total number of students
```
x1_sum + x2_sum
```
# Then confirm the teacher and principal records are excluded
```
[tup[0] for tup in list(df_res_1.groupby('tag'))]
[tup[0] for tup in list(df_res_2.groupby('tag'))]
```
# Extract the dataframes used for the final plots
```
df_res_1.columns
x1 = []
x2 = []
for row in range(len(df_res_1)):
interval = df_res_1.iloc[row, 17]
if interval != -1:
x1.append(interval)
for row in range(len(df_res_2)):
interval = df_res_2.iloc[row, 17]
if interval != -1:
x2.append(interval)
layout={"title": "学生用时分布",
"xaxis_title": "学生用时,单位秒",
"yaxis_title": "学生个数",
# x轴坐标倾斜60度
"xaxis": {"tickangle": 60}
}
#数据组
hist_data=[x1,x2]
group_labels=['智能计算','问题解决']
fig=ff.create_distplot(hist_data,group_labels,bin_size=10,histnorm = 'probability')
fig['layout'].update(xaxis = dict(range = [0,100]))
plot(fig,filename='./plot/单日时间分布直方图.html')
offline.iplot(fig)
x1_ary, _ = np.histogram(x1, bins=[0,10,20,30,40,50,60])
x1_list = list(x1_ary)
x1_list.append(x1_sum - x1_ary.sum())
x1_list
x2_ary, _ = np.histogram(x2, bins=[0,10,20,30,40,50,60])
x2_list = list(x2_ary)
x2_list.append(x2_sum - x2_ary.sum())
x2_list
colors = [
'#1f77b4', # muted blue
'#ff7f0e', # safety orange
'#2ca02c', # cooked asparagus green
'#d62728', # brick red
'#9467bd', # muted purple
'#8c564b', # chestnut brown
'#e377c2', # raspberry yogurt pink
'#7f7f7f', # middle gray
'#bcbd22', # curry yellow-green
'#17becf' # blue-teal
]
colors[0:7]
import plotly
plotly.colors.qualitative.Plotly
import plotly as py
import plotly.graph_objs as go
pyplt=py.offline.plot
labels=['0~10分钟','10~20分钟','20~30分钟','30~40分钟','40~50分钟', '50~60分钟', '超过60分钟']
values=x1_list
trace=[go.Pie(labels=labels,values=values)]
layout=go.Layout(
title='智能计算做题时间分布比例图(当日)'
)
fig=go.Figure(data=trace,layout=layout)
fig.update_traces(hoverinfo='label+percent',
# textinfo='value',
textfont_size=20, marker=dict(colors=plotly.colors.qualitative.Plotly[0:7], line=dict(color='#000000', width=2)))
pyplt(fig,filename='plot/单日智能计算时间分布饼图.html')
offline.iplot(fig)
import plotly as py
import plotly.graph_objs as go
pyplt=py.offline.plot
labels=['0~10分钟','10~20分钟','20~30分钟','30~40分钟','40~50分钟', '50~60分钟', '超过60分钟']
values=x2_list
trace=[go.Pie(labels=labels,values=values)]
layout=go.Layout(
title='问题解决做题时间分布比例图(当日)'
)
fig=go.Figure(data=trace,layout=layout)
fig.update_traces(hoverinfo='label+percent',
# textinfo='value',
textfont_size=20, marker=dict(colors=plotly.colors.qualitative.Plotly[0:7], line=dict(color='#000000', width=2)))
pyplt(fig,filename='plot/问题解决时间分布饼图.html')
offline.iplot(fig)
```
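The histogram bins above can also be produced in one step with `pandas.cut`, which keeps the "60 minutes and above" bucket explicit. A sketch, assuming the `x1` list computed above:
```
# Sketch: binning with pandas.cut instead of numpy.histogram
bins = [0, 10, 20, 30, 40, 50, 60, float('inf')]
bin_labels = ['0~10', '10~20', '20~30', '30~40', '40~50', '50~60', '60+']
x1_counts = pd.cut(pd.Series(x1), bins=bins, labels=bin_labels).value_counts().reindex(bin_labels)
print(x1_counts)
```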
# Ranking schools by students who rushed the test
First, flag the students who rushed (answering time of ten minutes or less) by adding a 'lazy' column whose value is 1 when true.
```
# df_main.loc[0, 'interval_minutes']
df_main.columns
df_main.iloc[0,17]
lazy_list = []
for row in range(len(df_main)):
if df_main.iloc[row, 17] <= 10:
lazy_list.append(1)
else:
lazy_list.append(0)
if 'lazy' not in df_main.columns:
df_main.insert(len(df_main.columns), 'lazy', lazy_list)
school_list =[tup[0] for tup in list(df_main.groupby('school'))]
df_school_list = [tup[1] for tup in list(df_main.groupby('school'))]
school_total_list = [len(df) for df in df_school_list]
df_lazy_count = pd.DataFrame(df_main.groupby('school')['lazy'].sum())
df_total_count = pd.DataFrame(df_main.groupby('school')['lazy'].count())
df_lazy_count.insert(len(df_lazy_count.columns), 'total', list(df_total_count.loc[:, 'lazy']))
df_res = df_lazy_count
ritio_list = []
for row in range(len(df_res)):
ritio_list.append(float(df_res.iloc[row, 0]) / float(df_res.iloc[row, 1]))
df_res.insert(len(df_res.columns), 'ritio', ritio_list)
df_res.sort_values(by = 'lazy', ascending=False).to_excel('./output/18日单日学生偷懒状况(按学校分类).xlsx')
df_res.sort_values(by = 'lazy', ascending=False)
```
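The same school-level aggregation can be written as a single groupby. A sketch, assuming `df_main` already carries the `interval_minutes` column added earlier:
```
# Sketch: flag, aggregate, and rank in one pass
lazy_by_school = (
    df_main.assign(lazy=(df_main['interval_minutes'] <= 10).astype(int))
           .groupby('school')['lazy']
           .agg(lazy='sum', total='count')
)
lazy_by_school['ratio'] = lazy_by_school['lazy'] / lazy_by_school['total']
lazy_by_school.sort_values('lazy', ascending=False).head()
```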
|
github_jupyter
|
import pandas as pd
import json
import numpy as np
import ast
from datetime import datetime
import plotly.graph_objs as go
from plotly.offline import plot
import plotly.offline as offline
import plotly.figure_factory as ff
from pandas.core.indexes import interval
import re
# 2021-07-16T19:31:13+08:00
def get_interval_per_row(index, df):
row_data = df.loc[index,:]
start_time = row_data['start_time']
if start_time != start_time:
return -1
start_time = datetime.strptime(str(start_time),"%Y-%m-%dT%H:%M:%S+08:00")
expire_time = row_data['expire_time']
if expire_time != expire_time:
return -1
expire_time = datetime.strptime(str(expire_time),"%Y-%m-%dT%H:%M:%S+08:00")
stop_time = row_data['stop_time']
if stop_time != stop_time:
return -1
stop_time = datetime.strptime(str(stop_time),"%Y-%m-%dT%H:%M:%S+08:00")
total_minu = (stop_time - start_time).seconds / 60.0
return total_minu
def remove_str_per_row(data_per_row):
frame_list = ast.literal_eval(data_per_row)
frame_dic_list = []
for index in range(len(frame_list)):
temp = json.loads(frame_list[index])
if 'frame' in temp.keys():
if 'data' in temp.keys():
frame_dic_list.append(list(temp['frame']['data'].values()))
else:
frame_dic_list.append(list(temp['frame'].values()))
else:
frame_dic_list.append(temp)
return frame_dic_list
df_main = pd.read_csv('./data/单日数据-18日-new.csv')
df_main
dic_list_test = remove_str_per_row(df_main.loc[1, 'task_answers'])
dic_list_test, len(dic_list_test)
time_test_minu = get_interval_per_row(0, df_main)
time_test_minu
time_minu_list = []
drop_index_list = []
for row in range(len(df_main)):
interval_minu = get_interval_per_row(row, df_main)
if interval_minu == -1:
drop_index_list.append(row)
else:
time_minu_list.append(interval_minu)
len(df_main)
len(drop_index_list)
len(time_minu_list)
df_main = df_main.drop(drop_index_list)
if 'interval_minutes' not in df_main.columns:
df_main.insert(len(df_main.columns), 'interval_minutes', time_minu_list)
grouped_main = df_main.groupby('contest_id')
df_contest_list = [tup[1] for tup in list(grouped_main)]
df_contest_name_list = [tup[0] for tup in list(grouped_main)]
df_contest_name_list
df_res_1 = df_contest_list[0] # 智能计算
df_res_2 = df_contest_list[1] # 问题解决
grouped_main['interval_minutes'].mean()
len(df_contest_list[2]), len(df_contest_list[3])
x1_sum = len(df_contest_list[0])
x1_sum
x2_sum = len(df_contest_list[1])
x2_sum
x1_sum + x2_sum
[tup[0] for tup in list(df_res_1.groupby('tag'))]
[tup[0] for tup in list(df_res_2.groupby('tag'))]
df_res_1.columns
x1 = []
x2 = []
for row in range(len(df_res_1)):
interval = df_res_1.iloc[row, 17]
if interval != -1:
x1.append(interval)
for row in range(len(df_res_2)):
interval = df_res_2.iloc[row, 17]
if interval != -1:
x2.append(interval)
layout={"title": "学生用时分布",
"xaxis_title": "学生用时,单位秒",
"yaxis_title": "学生个数",
# x轴坐标倾斜60度
"xaxis": {"tickangle": 60}
}
#数据组
hist_data=[x1,x2]
group_labels=['智能计算','问题解决']
fig=ff.create_distplot(hist_data,group_labels,bin_size=10,histnorm = 'probability')
fig['layout'].update(xaxis = dict(range = [0,100]))
plot(fig,filename='./plot/单日时间分布直方图.html')
offline.iplot(fig)
x1_ary, _ = np.histogram(x1, bins=[0,10,20,30,40,50,60])
x1_list = list(x1_ary)
x1_list.append(x1_sum - x1_ary.sum())
x1_list
x2_ary, _ = np.histogram(x2, bins=[0,10,20,30,40,50,60])
x2_list = list(x2_ary)
x2_list.append(x2_sum - x2_ary.sum())
x2_list
colors = [
'#1f77b4', # muted blue
'#ff7f0e', # safety orange
'#2ca02c', # cooked asparagus green
'#d62728', # brick red
'#9467bd', # muted purple
'#8c564b', # chestnut brown
'#e377c2', # raspberry yogurt pink
'#7f7f7f', # middle gray
'#bcbd22', # curry yellow-green
'#17becf' # blue-teal
]
colors[0:7]
import plotly
plotly.colors.qualitative.Plotly
import plotly as py
import plotly.graph_objs as go
pyplt=py.offline.plot
labels=['0~10分钟','10~20分钟','20~30分钟','30~40分钟','40~50分钟', '50~60分钟', '超过60分钟']
values=x1_list
trace=[go.Pie(labels=labels,values=values)]
layout=go.Layout(
title='智能计算做题时间分布比例图(当日)'
)
fig=go.Figure(data=trace,layout=layout)
fig.update_traces(hoverinfo='label+percent',
# textinfo='value',
textfont_size=20, marker=dict(colors=plotly.colors.qualitative.Plotly[0:7], line=dict(color='#000000', width=2)))
pyplt(fig,filename='plot/单日智能计算时间分布饼图.html')
offline.iplot(fig)
import plotly as py
import plotly.graph_objs as go
pyplt=py.offline.plot
labels=['0~10分钟','10~20分钟','20~30分钟','30~40分钟','40~50分钟', '50~60分钟', '超过60分钟']
values=x2_list
trace=[go.Pie(labels=labels,values=values)]
layout=go.Layout(
title='问题解决做题时间分布比例图(当日)'
)
fig=go.Figure(data=trace,layout=layout)
fig.update_traces(hoverinfo='label+percent',
# textinfo='value',
textfont_size=20, marker=dict(colors=plotly.colors.qualitative.Plotly[0:7], line=dict(color='#000000', width=2)))
pyplt(fig,filename='plot/问题解决时间分布饼图.html')
offline.iplot(fig)
# df_main.loc[0, 'interval_minutes']
df_main.columns
df_main.iloc[0,17]
lazy_list = []
for row in range(len(df_main)):
if df_main.iloc[row, 17] <= 10:
lazy_list.append(1)
else:
lazy_list.append(0)
if 'lazy' not in df_main.columns:
df_main.insert(len(df_main.columns), 'lazy', lazy_list)
school_list =[tup[0] for tup in list(df_main.groupby('school'))]
df_school_list = [tup[1] for tup in list(df_main.groupby('school'))]
school_total_list = [len(df) for df in df_school_list]
df_lazy_count = pd.DataFrame(df_main.groupby('school')['lazy'].sum())
df_total_count = pd.DataFrame(df_main.groupby('school')['lazy'].count())
df_lazy_count.insert(len(df_lazy_count.columns), 'total', list(df_total_count.loc[:, 'lazy']))
df_res = df_lazy_count
ritio_list = []
for row in range(len(df_res)):
ritio_list.append(float(df_res.iloc[row, 0]) / float(df_res.iloc[row, 1]))
df_res.insert(len(df_res.columns), 'ritio', ritio_list)
df_res.sort_values(by = 'lazy', ascending=False).to_excel('./output/18日单日学生偷懒状况(按学校分类).xlsx')
df_res.sort_values(by = 'lazy', ascending=False)
| 0.249722 | 0.71561 |
### Import Module Library
```
import pandas as pd
import numpy as np
from functools import reduce
```
### Reading the CSV File
```
data_csv = pd.read_csv('./Data/data_train_us_trending_youtube.csv')
da = pd.DataFrame(data_csv)
da.head()
```
### Checking the Number of Rows & Columns
```
da.shape
```
## Data Cleaning
### Finding Null or NaN Values Across All Rows
```
da.isnull()
```
### Finding Columns That Contain Null Values
```
da.isnull().any() # Bernilai True berarti ada data yang Null atau Nan
da.isnull().any().any() #bernilai false jika tidak memiliki nilai Nan
da.isnull().sum() # Mencari jumlah data yang Nan
```
### Dropping Unused Columns
```
column_drop = ['ratings_disabled','video_error_or_removed'] # Menghapus kolom ratings_disabled dan video_error_or_remmoved karana tidak digunakan
da.drop(column_drop, inplace = True, axis = 1)
da
```
### Setting the Dataset Index
```
da['video_id'].is_unique
#Mencek apakah kolom video_id unik jika iya maka akan diganti menjadi index data set baru
#Jika hasil False maka tetap menggunakan index dataset pertama
```
### Cleaning Up the Tags Column
```
def clean_tags(tags):
if tags == 'nan':
return 'NaN'
if tags == '[none]':
return 'NaN'
if '|"' in tags:
tags = tags[:tags.find('|"')]
elif '"|"' in tags:
tags = tags[:tags.find('"|"')]
elif '|"' in tags:
tags = tags[:tags.find('|"')]
tags = '#' + tags[:]
tags = list(map(str.lower, tags.split()))
return '#'.join(tags)
da['tags'] = da['tags'].apply(clean_tags)
da.head(60)
da.head()
```
## Transformation
### Grouping and Summing
#### Group by the channel_title column and sum the likes received across all of each channel's YouTube content
```
sum_likes = da.groupby('channel_title')['likes'].apply(sum).rename('total_likes').reset_index()
sum_likes
```
### Merging the Tables
```
da_new = pd.merge(da, sum_likes, how='left')
da_new
```
#### Computing the Percentage of Likes Across All Content
```
da_new['pct_likes'] = da_new['likes'] / da_new['total_likes']
da_new['pct_likes'] = da_new['pct_likes'].apply(lambda x: format(x, '.2%'))
da_new
```
### Using the transform() Function
```
da['total_likes'] = da.groupby('channel_title')['likes'].transform('sum')
da
```
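Because `transform('sum')` returns a result aligned to the original index, the per-row percentage can also be computed without the earlier merge. A sketch using the same columns:
```
# Sketch: a channel-relative percentage computed directly with transform (no merge needed)
pct = da['likes'] / da.groupby('channel_title')['likes'].transform('sum')
pct.apply(lambda x: format(x, '.2%')).head()
```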
#### Finding Rows with Total Likes > 1000
```
like_lebih1k = da_new[da_new.groupby('channel_title')['total_likes'].transform('sum') > 1000]
like_lebih1k
```
## Saving the Cleaned and Transformed Data to a CSV File
```
da_new.to_csv('./Data/hasilPertama.csv', index=True)
da_new1 = pd.read_csv('./Data/hasilPertama.csv')
da_new1
```
## Duplicate Data
```
def load_data():
da_all = pd.read_csv('./Data/hasilPertama.csv')
# Buat subset dengan slicing data
return da_all.loc[:, ['video_id','trending_date','title','channel_title','views','likes','dislikes','total_likes','pct_likes']].dropna()
# Load subset
da_new2 = load_data()
da_new2
```
### Finding Duplicate Rows
```
#Mencari duplikasi 1 kolom
da_new2.title.duplicated()
# Mencari duplikasi seluruh data
da_new2.duplicated()
# Mempertimbangkan kolom tertentu untuk mengidentifikasi duplikasi
da_new2.duplicated(subset=['trending_date','title','channel_title'])
```
### Counting Duplicates and Non-Duplicates
```
da_new2.title.duplicated().sum()
da_new2.duplicated().sum()
da_new2.duplicated(subset=['title','channel_title']).sum()
da_new2.duplicated(subset=['trending_date','title','channel_title']).sum()
(~da_new2.duplicated()).sum() #data non-duplikasi
```
### Extracting Duplicate Rows Using loc
```
# Memungkinkan kita untuk melihat baris yang diidentifikasi oleh duplikasi()
da_new2.loc[da_new2.duplicated(), :]
da_new2
```
### Choosing Which Duplicates to Flag Using keep
```
# `keep` data yang pertama
da_new2.loc[da_new2.duplicated(keep='first'), :]
# 'keep' data yang terakhir
da_new2.loc[da_new2.duplicated(keep='last'), :]
# Opsi ketiga yang bisa kita gunakan keep=False
da_new2.loc[da_new2.duplicated(keep=False), :]
```
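A quick way to see how the three `keep` options differ is to count the flagged rows under each. A small sketch:
```
# keep='first' and keep='last' flag all but one row per duplicate group;
# keep=False flags every member of each group.
for keep in ('first', 'last', False):
    print(keep, da_new2.duplicated(keep=keep).sum())
```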
## Removing Duplicate Rows
```
da_new2
# Perhatikan jumlah baris, awalnya 40949 --> sekarang 40901
da_new3 = da_new2.drop_duplicates()
da_new3
da_new2.drop_duplicates(subset=['trending_date','title','channel_title'])
```
## Saving the Final Cleaned and Transformed Data to a CSV File
```
da_new3.to_csv('./Data/hasilAkhir.csv', index=False)
da_akhir = pd.read_csv('./Data/hasilAkhir.csv')
da_akhir
```
|
github_jupyter
|
import pandas as pd
import numpy as np
from functools import reduce
data_csv = pd.read_csv('./Data/data_train_us_trending_youtube.csv')
da = pd.DataFrame(data_csv)
da.head()
da.shape
da.isnull()
da.isnull().any() # Bernilai True berarti ada data yang Null atau Nan
da.isnull().any().any() #bernilai false jika tidak memiliki nilai Nan
da.isnull().sum() # Mencari jumlah data yang Nan
column_drop = ['ratings_disabled','video_error_or_removed'] # Menghapus kolom ratings_disabled dan video_error_or_remmoved karana tidak digunakan
da.drop(column_drop, inplace = True, axis = 1)
da
da['video_id'].is_unique
#Mencek apakah kolom video_id unik jika iya maka akan diganti menjadi index data set baru
#Jika hasil False maka tetap menggunakan index dataset pertama
def clean_tags(tags):
if tags == 'nan':
return 'NaN'
if tags == '[none]':
return 'NaN'
if '|"' in tags:
tags = tags[:tags.find('|"')]
elif '"|"' in tags:
tags = tags[:tags.find('"|"')]
elif '|"' in tags:
tags = tags[:tags.find('|"')]
tags = '#' + tags[:]
tags = list(map(str.lower, tags.split()))
return '#'.join(tags)
da['tags'] = da['tags'].apply(clean_tags)
da.head(60)
da.head()
sum_likes = da.groupby('channel_title')['likes'].apply(sum).rename('total_likes').reset_index()
sum_likes
da_new = pd.merge(da, sum_likes, how='left')
da_new
da_new['pct_likes'] = da_new['likes'] / da_new['total_likes']
da_new['pct_likes'] = da_new['pct_likes'].apply(lambda x: format(x, '.2%'))
da_new
da['total_likes'] = da.groupby('channel_title')['likes'].transform('sum')
da
like_lebih1k = da_new[da_new.groupby('channel_title')['total_likes'].transform('sum') > 1000]
like_lebih1k
da_new.to_csv('./Data/hasilPertama.csv', index=True)
da_new1 = pd.read_csv('./Data/hasilPertama.csv')
da_new1
def load_data():
da_all = pd.read_csv('./Data/hasilPertama.csv')
# Buat subset dengan slicing data
return da_all.loc[:, ['video_id','trending_date','title','channel_title','views','likes','dislikes','total_likes','pct_likes']].dropna()
# Load subset
da_new2 = load_data()
da_new2
#Mencari duplikasi 1 kolom
da_new2.title.duplicated()
# Mencari duplikasi seluruh data
da_new2.duplicated()
# Mempertimbangkan kolom tertentu untuk mengidentifikasi duplikasi
da_new2.duplicated(subset=['trending_date','title','channel_title'])
da_new2.title.duplicated().sum()
da_new2.duplicated().sum()
da_new2.duplicated(subset=['title','channel_title']).sum()
da_new2.duplicated(subset=['trending_date','title','channel_title']).sum()
(~da_new2.duplicated()).sum() #data non-duplikasi
# Memungkinkan kita untuk melihat baris yang diidentifikasi oleh duplikasi()
da_new2.loc[da_new2.duplicated(), :]
da_new2
# `keep` data yang pertama
da_new2.loc[da_new2.duplicated(keep='first'), :]
# 'keep' data yang terakhir
da_new2.loc[da_new2.duplicated(keep='last'), :]
# Opsi ketiga yang bisa kita gunakan keep=False
da_new2.loc[da_new2.duplicated(keep=False), :]
da_new2
# Perhatikan jumlah baris, awalnya 40949 --> sekarang 40901
da_new3 = da_new2.drop_duplicates()
da_new3
da_new2.drop_duplicates(subset=['trending_date','title','channel_title'])
da_new3.to_csv('./Data/hasilAkhir.csv', index=False)
da_akhir = pd.read_csv('./Data/hasilAkhir.csv')
da_akhir
| 0.256273 | 0.794345 |
<em>We'll start off by importing numpy and the linear algebra module (linalg) from numpy</em>
```
import numpy as np
import numpy.linalg as npla
```
*Let's go back to the Ax = b setup*
```
A = np.array([[ 2. , 7. , 1. , 8. ],
[ 1. , 5.5, 8.5, 5. ],
[ 0. , 1. , 12. , 2.5],
[-1. , -4.5, -4.5, 3.5]])
print("A:", A,'\n')
# Recall we found U and L
# We used Gaussian elimination on the blackboard to triangularize A, giving U
# During Gaussian elimination, we wrote down the multipliers in a lower triangular array,
# and then put ones on the diagonal, giving L
U = np.array([[2,7,1,8],[0,2,8,1],[0,0,8,2],[0,0,0,8]])
L = np.array([[1,0,0,0],[.5,1,0,0],[0,.5,1,0],[-.5,-.5,0,1]])
print(U, "\n\n", L)
# The theorem: Gaussian elimination factors A as the product L times U
print( L @ U)
print()
print(A)
def LUfactorNoPiv(A):
"""Factor a square matrix, A == L @ U (no partial pivoting)
Parameters:
A: the matrix.
Outputs (in order):
L: the lower triangular factor, same dimensions as A, with ones on the diagonal
U: the upper triangular factor, same dimensions as A
"""
# Check the input
m, n = A.shape
assert m == n, 'input matrix A must be square'
# Make a copy of the matrix that we will transform into L and U
LU = A.astype(np.float64).copy()
# Eliminate each column in turn
for piv_col in range(n):
# Update the rest of the matrix
pivot = LU[piv_col, piv_col]
assert pivot != 0., "pivot is zero, can't continue"
for row in range(piv_col + 1, n):
multiplier = LU[row, piv_col] / pivot
LU[row, piv_col] = multiplier
LU[row, (piv_col+1):] -= multiplier * LU[piv_col, (piv_col+1):]
# Separate L and U in the result
#print("LU:\n", LU)
U = np.triu(LU)
L = LU - U + np.eye(n)
return (L, U)
A = np.array([[1, 2, 3], [1,1,1], [-1,1,2]])
L,U = LUfactorNoPiv(A)
print("\nA\n", A, "\n\nL\n", L, "\n\nU\n", U)
```
*Using the fact that Ax = b means LUx = b:*
```
def Lsolve(L, b):
"""Forward solve a unit lower triangular system Ly = b for y
Parameters:
L: the matrix, must be square, lower triangular, with ones on the diagonal
b: the right-hand side vector
Output:
y: the solution vector to L @ y == b
"""
# Check the input
m, n = L.shape
assert m == n, "matrix L must be square"
assert np.all(np.tril(L) == L), "matrix L must be lower triangular"
assert np.all(np.diag(L) == 1), "matrix L must have ones on the diagonal"
# Make a copy of b that we will transform into the solution
y = b.astype(np.float64).copy()
# Forward solve
for col in range(n):
y[col+1:] -= y[col] * L[col+1:, col]
return y
def Usolve(U, y):
"""Backward solve an upper triangular system Ux = y for x
Parameters:
U: the matrix, must be square, upper triangular, with nonzeros on the diagonal
y: the right-hand side vector
Output:
x: the solution vector to U @ x == y
"""
print("\nyou will write Usolve in hw2\n")
return
```
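For reference only (the notebook deliberately leaves `Usolve` as a homework exercise), a standard back-substitution sketch in the same column-oriented style as `Lsolve` might look like this:
```
def Usolve_sketch(U, y):
    """Backward solve Ux = y by column-oriented back substitution (reference sketch)."""
    m, n = U.shape
    assert m == n, "matrix U must be square"
    x = y.astype(np.float64).copy()
    for col in reversed(range(n)):
        x[col] /= U[col, col]           # divide by the diagonal pivot
        x[:col] -= x[col] * U[:col, col]  # update the remaining right-hand side
    return x
```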
*Testing it out...*
```
A = np.array([[ 2. , 7. , 1. , 8. ],
[ 1. , 5.5, 8.5, 5. ],
[ 0. , 1. , 12. , 2.5],
[-1. , -4.5, -4.5, 3.5]])
L, U = LUfactorNoPiv(A)
b = np.array([1, 2, 3, 4])
y = Lsolve(L,b)
print("Ran Lsolve(L,b) and got y:\n", y)
x = Usolve(U,y)
print("Ran Usolve(U,y) and got x:\n", x)
print("\nA@x:", A@x)
print("\nresidual norm:", npla.norm(b - A @ x))
```
*Back to our **LUfactorNoPiv()** function...*
```
# But LU factorization (without pivoting) fails if it encounters a zero pivot
A = np.array([[0, 1], [1, 2]])
L,U = LUfactorNoPiv(A)
def LUfactor(A, pivoting = True):
"""Factor a square matrix with partial pivoting, A[p,:] == L @ U
Parameters:
A: the matrix.
pivoting: whether or not to do partial pivoting
Outputs (in order):
L: the lower triangular factor, same dimensions as A, with ones on the diagonal
U: the upper triangular factor, same dimensions as A
p: the permutation vector that permutes the rows of A by partial pivoting
"""
# Check the input
m, n = A.shape
assert m == n, 'input matrix A must be square'
# Initialize p to be the identity permutation
p = np.array(range(n))
# Make a copy of the matrix that we will transform into L and U
LU = A.astype(np.float64).copy()
# Eliminate each column in turn
for piv_col in range(n):
# Choose the pivot row and swap it into place
if pivoting:
piv_row = piv_col + np.argmax(LU[piv_col:, piv_col])
assert LU[piv_row, piv_col] != 0., "can't find nonzero pivot, matrix is singular"
LU[[piv_col, piv_row], :] = LU[[piv_row, piv_col], :]
p[[piv_col, piv_row]] = p[[piv_row, piv_col]]
# Update the rest of the matrix
pivot = LU[piv_col, piv_col]
assert pivot != 0., "pivot is zero, can't continue"
for row in range(piv_col + 1, n):
multiplier = LU[row, piv_col] / pivot
LU[row, piv_col] = multiplier
LU[row, (piv_col+1):] -= multiplier * LU[piv_col, (piv_col+1):]
# Separate L and U in the result
U = np.triu(LU)
L = LU - U + np.eye(n)
return (L, U, p)
```
*Testing out our **LUfactor()** function with our counter example that failed the previous function...*
```
# Our 2-by-2 counter-example again
print('A:\n', A)
L, U, p = LUfactor(A)
print('\nL:\n', L)
print('\nU:\n', U)
print('\np: ', p)
# Other examples!
A = np.round(20*np.random.rand(5,5))
print('A:\n', A)
L, U, p = LUfactor(A)
print('\nL:\n', L)
print('\nU:\n', U)
print('\np: ', p)
# Do it again with the no-pivot function,
# but use the p matrix we found to our advantage:
p = [1, 0]
A = np.array([[0, 1], [1, 2]])
L, U = LUfactorNoPiv(A[p])
print('\nA[p]:\n', A[p])
print('\nL:\n', L)
print('\nU:\n', U)
# A larger example of LU with partial pivoting
A = np.round(20*np.random.rand(5,5))
print('matrix A:\n', A)
xorig = np.round(10*np.random.rand(5))
print('\noriginal x:', xorig)
b = A @ xorig
print('\nright-hand side b:', b)
# Factor the larger example
L, U, p = LUfactor(A)
print(L,"\n\n",U,"\n\n",p,"\n")
print("norm of difference between L times U and permuted A:", npla.norm( L@U - A[p,:]))
# Solve with the larger example
y = Lsolve(L,b[p])
print("y:", y)
x = Usolve(U,y)
print("\nx:", x)
print("\nresidual norm:", npla.norm(b - A @ x))
```
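Putting the pieces together, a small convenience wrapper (a sketch; it relies on a completed `Usolve`) makes the factor-then-solve pattern explicit:
```
def LUsolve_sketch(A, b):
    """Solve A @ x == b via LU factorization with partial pivoting (sketch)."""
    L, U, p = LUfactor(A)
    y = Lsolve(L, b[p])     # permute b to match the row swaps, then forward solve
    return Usolve(U, y)     # back substitution (to be completed in hw2)
```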
*Cholesky Factorization*
```
# The example I had on the slides in class:
C = np.array([[4,-1],[-1,3]])
# Let's make sure that it IS a symmetrical matrix visually!
print(C,"\n\n",C.T,"\n")
# Run Cholesky on the matrix
L = npla.cholesky(C)
print(L,"\n")
# Check that L is indeed a solution!
print(L @ L.T)
print("\nresidual norm:", npla.norm(C - L @ L.T))
```
*Another one!*
```
# Another Cholesky example!
C = np.array([[7,3,7,3,7],[3,7,3,7,3],[7,3,7,3,7],[3,7,3,7,3],[7,3,7,3,7]])
# Let's make sure that it IS a symmetrical matrix visually!
print(C,"\n\n",C.T,"\n")
# Run Cholesky on the matrix
L = npla.cholesky(C)
print(L,"\n")
# Check that L is indeed a solution!
print(L @ L.T)
print("\nresidual norm:", npla.norm(C - L @ L.T))
```
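As a follow-up, the Cholesky factor can be used to solve a symmetric positive definite system with two triangular solves. This sketch assumes `scipy` is available alongside numpy:
```
import scipy.linalg as sla

C = np.array([[4., -1.], [-1., 3.]])
b = np.array([1., 2.])
L = npla.cholesky(C)                           # C == L @ L.T
y = sla.solve_triangular(L, b, lower=True)     # forward solve  L y = b
x = sla.solve_triangular(L.T, y, lower=False)  # backward solve L.T x = y
print("x:", x, "\nresidual norm:", npla.norm(C @ x - b))
```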
|
github_jupyter
|
import numpy as np
import numpy.linalg as npla
A = np.array([[ 2. , 7. , 1. , 8. ],
[ 1. , 5.5, 8.5, 5. ],
[ 0. , 1. , 12. , 2.5],
[-1. , -4.5, -4.5, 3.5]])
print("A:", A,'\n')
# Recall we found U and L
# We used Gaussian elimination on the blackboard to triangularize A, giving U
# During Gaussian elimination, we wrote down the multipliers in a lower triangular array,
# and then put ones on the diagonal, giving L
U = np.array([[2,7,1,8],[0,2,8,1],[0,0,8,2],[0,0,0,8]])
L = np.array([[1,0,0,0],[.5,1,0,0],[0,.5,1,0],[-.5,-.5,0,1]])
print(U, "\n\n", L)
# The theorem: Gaussian elimination factors A as the product L times U
print( L @ U)
print()
print(A)
def LUfactorNoPiv(A):
"""Factor a square matrix, A == L @ U (no partial pivoting)
Parameters:
A: the matrix.
Outputs (in order):
L: the lower triangular factor, same dimensions as A, with ones on the diagonal
U: the upper triangular factor, same dimensions as A
"""
# Check the input
m, n = A.shape
assert m == n, 'input matrix A must be square'
# Make a copy of the matrix that we will transform into L and U
LU = A.astype(np.float64).copy()
# Eliminate each column in turn
for piv_col in range(n):
# Update the rest of the matrix
pivot = LU[piv_col, piv_col]
assert pivot != 0., "pivot is zero, can't continue"
for row in range(piv_col + 1, n):
multiplier = LU[row, piv_col] / pivot
LU[row, piv_col] = multiplier
LU[row, (piv_col+1):] -= multiplier * LU[piv_col, (piv_col+1):]
# Separate L and U in the result
#print("LU:\n", LU)
U = np.triu(LU)
L = LU - U + np.eye(n)
return (L, U)
A = np.array([[1, 2, 3], [1,1,1], [-1,1,2]])
L,U = LUfactorNoPiv(A)
print("\nA\n", A, "\n\nL\n", L, "\n\nU\n", U)
def Lsolve(L, b):
"""Forward solve a unit lower triangular system Ly = b for y
Parameters:
L: the matrix, must be square, lower triangular, with ones on the diagonal
b: the right-hand side vector
Output:
y: the solution vector to L @ y == b
"""
# Check the input
m, n = L.shape
assert m == n, "matrix L must be square"
assert np.all(np.tril(L) == L), "matrix L must be lower triangular"
assert np.all(np.diag(L) == 1), "matrix L must have ones on the diagonal"
# Make a copy of b that we will transform into the solution
y = b.astype(np.float64).copy()
# Forward solve
for col in range(n):
y[col+1:] -= y[col] * L[col+1:, col]
return y
def Usolve(U, y):
"""Backward solve an upper triangular system Ux = y for x
Parameters:
U: the matrix, must be square, upper triangular, with nonzeros on the diagonal
y: the right-hand side vector
Output:
x: the solution vector to U @ x == y
"""
print("\nyou will write Usolve in hw2\n")
return
A = np.array([[ 2. , 7. , 1. , 8. ],
[ 1. , 5.5, 8.5, 5. ],
[ 0. , 1. , 12. , 2.5],
[-1. , -4.5, -4.5, 3.5]])
L, U = LUfactorNoPiv(A)
b = np.array([1, 2, 3, 4])
y = Lsolve(L,b)
print("Ran Lsolve(L,b) and got y:\n", y)
x = Usolve(U,y)
print("Ran Usolve(U,y) and got x:\n", x)
print("\nA@x:", A@x)
print("\nresidual norm:", npla.norm(b - A @ x))
# But LU factorization (without pivoting) fails if it encounters a zero pivot
A = np.array([[0, 1], [1, 2]])
L,U = LUfactorNoPiv(A)
def LUfactor(A, pivoting = True):
"""Factor a square matrix with partial pivoting, A[p,:] == L @ U
Parameters:
A: the matrix.
pivoting: whether or not to do partial pivoting
Outputs (in order):
L: the lower triangular factor, same dimensions as A, with ones on the diagonal
U: the upper triangular factor, same dimensions as A
p: the permutation vector that permutes the rows of A by partial pivoting
"""
# Check the input
m, n = A.shape
assert m == n, 'input matrix A must be square'
# Initialize p to be the identity permutation
p = np.array(range(n))
# Make a copy of the matrix that we will transform into L and U
LU = A.astype(np.float64).copy()
# Eliminate each column in turn
for piv_col in range(n):
# Choose the pivot row and swap it into place
if pivoting:
piv_row = piv_col + np.argmax(LU[piv_col:, piv_col])
assert LU[piv_row, piv_col] != 0., "can't find nonzero pivot, matrix is singular"
LU[[piv_col, piv_row], :] = LU[[piv_row, piv_col], :]
p[[piv_col, piv_row]] = p[[piv_row, piv_col]]
# Update the rest of the matrix
pivot = LU[piv_col, piv_col]
assert pivot != 0., "pivot is zero, can't continue"
for row in range(piv_col + 1, n):
multiplier = LU[row, piv_col] / pivot
LU[row, piv_col] = multiplier
LU[row, (piv_col+1):] -= multiplier * LU[piv_col, (piv_col+1):]
# Separate L and U in the result
U = np.triu(LU)
L = LU - U + np.eye(n)
return (L, U, p)
# Our 2-by-2 counter-example again
print('A:\n', A)
L, U, p = LUfactor(A)
print('\nL:\n', L)
print('\nU:\n', U)
print('\np: ', p)
# Other examples!
A = np.round(20*np.random.rand(5,5))
print('A:\n', A)
L, U, p = LUfactor(A)
print('\nL:\n', L)
print('\nU:\n', U)
print('\np: ', p)
# Do it again with the no-pivot function,
# but use the p matrix we found to our advantage:
p = [1, 0]
A = np.array([[0, 1], [1, 2]])
L, U = LUfactorNoPiv(A[p])
print('\nA[p]:\n', A[p])
print('\nL:\n', L)
print('\nU:\n', U)
# A larger example of LU with partial pivoting
A = np.round(20*np.random.rand(5,5))
print('matrix A:\n', A)
xorig = np.round(10*np.random.rand(5))
print('\noriginal x:', xorig)
b = A @ xorig
print('\nright-hand side b:', b)
# Factor the larger example
L, U, p = LUfactor(A)
print(L,"\n\n",U,"\n\n",p,"\n")
print("norm of difference between L times U and permuted A:", npla.norm( L@U - A[p,:]))
# Solve with the larger example
y = Lsolve(L,b[p])
print("y:", y)
x = Usolve(U,y)
print("\nx:", x)
print("\nresidual norm:", npla.norm(b - A @ x))
# The example I had on the slides in class:
C = np.array([[4,-1],[-1,3]])
# Let's make sure that it IS a symmetrical matrix visually!
print(C,"\n\n",C.T,"\n")
# Run Cholesky on the matrix
L = npla.cholesky(C)
print(L,"\n")
# Check that L is indeed a solution!
print(L @ L.T)
print("\nresidual norm:", npla.norm(C - L @ L.T))
# Another Cholesky example!
C = np.array([[7,3,7,3,7],[3,7,3,7,3],[7,3,7,3,7],[3,7,3,7,3],[7,3,7,3,7]])
# Let's make sure that it IS a symmetrical matrix visually!
print(C,"\n\n",C.T,"\n")
# Run Cholesky on the matrix
L = npla.cholesky(C)
print(L,"\n")
# Check that L is indeed a solution!
print(L @ L.T)
print("\nresidual norm:", npla.norm(C - L @ L.T))
| 0.582372 | 0.931836 |
```
"|IMPORT PACKAGES|"
import numpy as np
import pandas as pd
import datetime
from math import pi
from bokeh.plotting import show, figure, output_file, save
from bokeh.io import show, output_notebook, curdoc, export_png
from bokeh.models import ColumnDataSource,LinearAxis, Range1d, NumeralTickFormatter, LabelSet, Label, BoxAnnotation, DatetimeTickFormatter, Text, Span
from bokeh.models.tools import HoverTool
from bokeh.models import Arrow, NormalHead, OpenHead, VeeHead
from bokeh.transform import dodge
from datetime import datetime as dt
"|IMPORT DATA|"
path = r'https://github.com/ncachanosky/research/blob/master/Economic%20Series/'
file = r'Resumen%20Estadistico%20-%20Internacional.xlsx?raw=true'
#file = r'Resumen%20Estadistico%20-%20Argentina.xlsx'
IO = path + file
sheet = 'DATA'
data = pd.read_excel(IO, sheet_name = sheet, usecols="A:E,S", skiprows=2, engine='openpyxl') # Be patient...
data = data[data.YEAR == 2018].reset_index()
data = data.rename(columns={"LABOR_SHARE":"LABOR"})
data = data.drop(['index'], axis = 1)
data["LABOR"] = data["LABOR"]*100
"|CHECK DATA|"
data = data.dropna()
data
"|BUILD TRENDLINES|"
"Africa"
x_africa = data.loc[data["REGION"]=="Africa", "EFW"]
y_africa = data.loc[data["REGION"]=="Africa", "LABOR"]
africa = np.polyfit(x_africa, y_africa, 1)
trend_africa = africa[1] + africa[0]*x_africa
"Asia"
x_asia = data.loc[data["REGION"]=="Asia", "EFW"]
y_asia = data.loc[data["REGION"]=="Asia", "LABOR"]
asia = np.polyfit(x_asia, y_asia, 1)
trend_asia = asia[1] + asia[0]*x_asia
"Europe"
x_europe = data.loc[data["REGION"]=="Europe", "EFW"]
y_europe = data.loc[data["REGION"]=="Europe", "LABOR"]
europe = np.polyfit(x_europe, y_europe, 1)
trend_europe = europe[1] + europe[0]*x_europe
"Latin America"
x_latam = data.loc[data["REGION"]=="Latin America", "EFW"]
y_latam = data.loc[data["REGION"]=="Latin America", "LABOR"]
latam = np.polyfit(x_latam, y_latam, 1)
trend_latam = latam[1] + latam[0]*x_latam
"North America"
x_noram = data.loc[data["REGION"]=="North America", "EFW"]
y_noram = data.loc[data["REGION"]=="North America", "LABOR"]
noram = np.polyfit(x_noram, y_noram, 1)
trend_noram = noram[1] + noram[0]*x_noram
"Oceania"
x_oceania = data.loc[data["REGION"]=="Oceania", "EFW"]
y_oceania = data.loc[data["REGION"]=="Oceania", "LABOR"]
oceania = np.polyfit(x_oceania, y_oceania, 1)
trend_oceania = oceania[1] + oceania[0]*x_oceania
"|BUILD PLOT|"
colormap = {'Africa' :'black',
'Latin America':'teal' ,
'North America':'red' ,
'Asia' :'olive' ,
'Europe' :'purple',
'Oceania' :'blue'}
colors = [colormap[i] for i in data['REGION']]
data['COLORS'] = colors
x = data['EFW']
y = data['LABOR']
cds = ColumnDataSource(data)
#BUILD FIGURE 1
p = figure(title = "EL HUB ECONÓMICO | ÍNDICE DE LIBERTAD ECONÓMICA E INGRESO FACTOR TRABAJO (%PBI)",
# x_range = (2,10),
x_axis_label = "(-) Índice de libertad económica (+)",
plot_height = 400,
plot_width = 700)
p1 = p.circle('EFW', 'LABOR', fill_color='COLORS', fill_alpha=0.50, line_color='COLORS', legend_group='REGION', line_alpha=0.50, size=8, source=cds)
p2 = p.circle('EFW', 'COUNTRY', fill_alpha=0, line_alpha=0, size=1, source=cds)
p3 = p.circle('EFW', 'REGION' , fill_alpha=0, line_alpha=0, size=1, source=cds)
p4 = p.line(x_africa , trend_africa , alpha=0.5, color='black')
p5 = p.line(x_asia , trend_asia , alpha=0.5, color='olive')
p6 = p.line(x_europe , trend_europe , alpha=0.5, color='purple')
p7 = p.line(x_latam , trend_latam , alpha=0.5, color='teal')
#p8 = p.line(x_noram , trend_noram , alpha=0.5, color='red')
p9 = p.line(x_oceania, trend_oceania, alpha=0.5, color='blue')
p.add_tools(HoverTool(renderers=[p1], tooltips = [("País","@COUNTRY"),("Regions","@REGION"),("Índice", "@EFW{0.0}"),("Trabajo (%)", "@LABOR{$0,0}")]))
year = Label(x=3.5, y=25, text="2018", text_font_size='20px', text_alpha=0.75, text_align="center", text_baseline="middle")
p.add_layout(year)
p.legend.location = "top_left"
p.legend.orientation = "horizontal"
output_notebook()
show(p)
"|EXPORT .PNG FILE|"
export_png(p, filename="efw_labor_share.png")
"|# CREATE HTML FILE|"
output_file(filename="efw_labor_share.html", title="EFW y TRABAJO")
save(p)
"|CREATE JSON FILE|"
import json
import bokeh.embed
from bokeh.embed import json_item
j = json.dumps(json_item(p, "efw_labor_share"))
with open("07.10_efw_labor_share.json", "w") as fp:
json.dump(j, fp)
```
# Introduction
<hr style="border:2px solid black"> </hr>
**What?** Time and space complexity
# What constitutes an efficient algorithm?
<hr style="border:2px solid black"> </hr>
- Algorithm’s efficiency measures both the time and space (memory) it takes when executed.
- The **best algorithm** = least amount of time + least amount of space.
- **Reality** is that algorithms involve a **tradeoff** between saving space and saving time.
- Admitting there is a tradeoff means that:
    - The best algorithm **depends** on our requirements.
    - If we need to balance both time and memory, we can settle for a compromise between the two.
# Space complexity
<hr style="border:2px solid black"> </hr>
- Space complexity of an algorithm is the amount of space it uses for execution in relation to the size of the input. The keyword here is **in relation to the input size**.
- Assume that adding a single integer to the list takes `c` amount of space and that other initial operations, including creating a new list, take `d` amount of space. Then the space taken can be computed as `c*n + d`, where `n` is our input size.
- The values of the **constants** `c` and `d` are outside of the control of the algorithm and depend on factors such as programming language, hardware specifications, etc.<br>
- `when n -> c*n + d`<br>
- `when n = 10 -> c*10 + d`<br>
- `when n = 100 -> c*100 + d`<br>
```
n = int(input())
nums = []
for i in range(1, n+1):
nums.append(i*i)
print(nums)
```
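To see this linear growth concretely, here is a rough, illustrative measurement (not part of the original article; exact byte counts depend on the Python implementation and its over-allocation strategy) of the memory held by the list of squares for a few input sizes:
```python
import sys

# Rough illustration only: total bytes held by the list of squares.
# Exact numbers vary across interpreters; the point is the ~linear growth in n.
for n in [10, 100, 1000, 10000]:
    nums = [i*i for i in range(1, n+1)]
    total = sys.getsizeof(nums) + sum(sys.getsizeof(v) for v in nums)
    print(n, total)
```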
# Time complexity
<hr style="border:2px solid black"> </hr>
- Time complexity is the number of elementary operations an algorithm performs in relation to the input size. Here, we count the number of operations, instead of time itself, based on the assumption that each operation takes a fixed amount of time to complete.
- The algorithm performs `n` number of operations (n iterations of the loop) to complete its execution.
- As we did earlier, the equation for the time complexity is still `c*n + d`; thus the time complexity of this algorithm is also in the order of `n`.
```
n = int(input())
nums = []
for i in range(1, n+1):
nums.append(i*i)
print(nums)
```
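Similarly, a rough timing check (illustrative only; absolute times depend on the machine) shows the running time growing roughly linearly with the input size:
```python
import timeit

# Illustrative timing of building the list of squares for increasing n.
# Absolute times vary by machine; doubling n roughly doubles the time.
for n in [10_000, 20_000, 40_000]:
    t = timeit.timeit(lambda: [i*i for i in range(1, n+1)], number=10)
    print(n, round(t, 4))
```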
# Asymptotic analysis
<hr style="border:2px solid black"> </hr>
- Do we need the exact value of the equation? No, we don't! Instead, we use the highest order of the variable `n` as a representative of the space complexity.
- This type of analysis is called **asymptotic analysis** and it evaluates how the performance changes as the input size increases.
- In conclusion:
- `c*n + d` has an order of `n` space complexity.
- `c*n^2 + d*n + e` has an order of `n^2` space complexity (a quick numeric illustration of why the lower-order terms can be dropped follows below).
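As a small illustration (with arbitrary constants, not taken from the article), the ratio between the full expression and its leading term approaches 1 as `n` grows, which is why only the highest-order term matters asymptotically:
```python
# Arbitrary constants chosen only for illustration.
c, d, e = 3, 5, 7
for n in [10, 100, 1000, 10000]:
    full = c*n**2 + d*n + e
    leading = c*n**2
    print(n, round(full/leading, 4))   # ratio tends to 1 as n grows
```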
# Best, Worst, and Average Cases
<hr style="border:2px solid black"> </hr>
- However, asymptotic analysis is not only about the worst-case scenario.
- **Usually**, we consider three cases when analyzing an algorithm: best, worst, and average.
- Scenarios:
    - `k` is stored at the 0th index: only one iteration is needed.
    - `k` is NOT stored in the list: it takes `n+1` iterations.
- In linear search, the number of iterations the algorithm takes to complete execution follows this pattern.
- `When k is stored at the 0th index -> 1 iteration`
- `When k is stored at the 1st index -> 2 iterations`
- `When k is stored at the 2nd index -> 3 iterations`
- `When k is stored at the 3rd index -> 4 iterations`
- `: :`
- `When k is stored at the nth index -> n iterations`
- `When k is not in the list -> n+1 iterations`
- This can be computed with this formula:

```
def linear_search(nums, n, k):
"""
Linear search algorithm: a simple for
loop to search if a given integer k is present
in a list named nums of size n.
"""
for i in range(n):
if k == nums[i]:
return i
return -1
import numpy as np
print("List being probed", list(np.arange(1,7,1)))
print("Best case scenario, index found at", linear_search(list(np.arange(1,7,1)), 6, 1))
print("Worst case scenario, index found at", linear_search(list(np.arange(1,7,1)), 6, 7))
```
# Asymptotic notation
<hr style="border:2px solid black"> </hr>
- There exist three asymptotic notations:
- `Ω`(Big-Omega) notation for the lower bound. It's used to represent the **best-case** scenario of an algorithm.
- `O` (Big-O) notation for the upper bound. It's used to represent the **worst-case** scenario of an algorithm.
- `Θ` (Big-Theta) notation denotes an upper and a lower bound of a function. Therefore, it defines both at most and at least boundaries for the values the function can take for a given input. Big-theta notation is used to define the **average case** time and space complexity of an algorithm.
## Ω (Big-Omega) Notation definition
- `Ω(g(n)) = {f(n): there exist positive constants c and n0 such that 0 <= c*g(n) <= f(n) for all n >= n0}`
<br><br>
- Given `g(n) = n^2` and `f(n) = 2n^2 + 4`, then `f(n) = Ω(g(n))` because:
<br><br>
- `For c = 1 and n0 = 1, 0 <= c*g(n) <= f(n) for all n >= n0`
<br><br>
- Now, if we consider `f(n) = 3n + 5`, we can’t find values for constants c and n0 that satisfy the above conditions. Therefore, `f(n) = 3n + 5` doesn’t belong to big-omega of `g(n)`.
## O (Big-O) Notation definition
- `O(g(n)) = {f(n): there exist positive constants c and n0 such that 0 <= f(n) <= c*g(n) for all n >= n0}`
<br><br>
- Given `g(n) = n^2` and `f(n) = 2n^2 + 4`, then `f(n) = O(g(n))` because:
<br><br>
- `For c = 5 and n0 = 2, 0 <= f(n) <= c*g(n) for all n >= n0`
<br><br>
- And if we consider `f(n) = n^3 + 2`, it doesn’t belong to `O(g(n))` because no combination of values for `c` and `n0` satisfies the required condition.
## Θ (Big-Theta) Notation definition
- `Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0}`
<br><br>
- Given `g(n) = n^2` and `f(n) = 2n^2 + 4`, then `f(n) = Θ(g(n))` because:
<br><br>
- `For n0 = 2, c1 = 1, and c2 = 5, 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0` (the short numeric check below illustrates these constants).
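These bounds are easy to probe numerically. Below is a minimal check (an illustration, not part of the original article) that `f(n) = 2n^2 + 4` really is sandwiched between `1*g(n)` and `5*g(n)` for every `n >= 2`:
```python
# Numeric illustration of the Big-Theta example above.
def f(n): return 2*n**2 + 4
def g(n): return n**2

c1, c2, n0 = 1, 5, 2
for n in [n0, 10, 100, 1000]:
    assert c1*g(n) <= f(n) <= c2*g(n)
    print(n, c1*g(n), f(n), c2*g(n))
```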
# References
<hr style="border:2px solid black"> </hr>
- https://livecodestream.dev/post/complete-guide-to-understanding-time-and-space-complexity-of-algorithms/
# Weakly Supervised Recommendation Systems
Experiment steps:
1. **User's Preferences Model**: Leverage the most *explicit* ratings to build a *rate/rank prediction model*. This is a simple *Explicit Matrix Factorization* model.
2. **Generate Weak DataSet**: Use the above model to *predict* for all user/item pairs $(u,i)$ in the *implicit feedback dataset* to build a new *weak explicit dataset* $(u, i, r^*)$ (a minimal sketch of this step is shown right after this list).
3. **Evaluate**: Use the intact test split of the most explicit feedback to evaluate the performance of any model.
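The annotation in step 2 is performed by the notebook's `utils.annotate` helper, whose source is not shown here. As a rough, hypothetical sketch (an assumption about what such a step might look like with Spotlight's API, not the actual implementation), it boils down to replacing the implicit ratings with the explicit model's predictions:
```python
# Hypothetical sketch only: the real logic lives in utils.annotate.
import numpy as np
from spotlight.interactions import Interactions

def annotate_sketch(implicit, explicit_model):
    """Build a weak explicit dataset (u, i, r*) from an implicit one."""
    weak_ratings = explicit_model.predict(implicit.user_ids, implicit.item_ids)
    return Interactions(implicit.user_ids,
                        implicit.item_ids,
                        ratings=weak_ratings.astype(np.float32),
                        num_users=implicit.num_users,
                        num_items=implicit.num_items)
```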
## Explicit Model Experiments
This section contains all the experiments based on the explicit matrix factorization model.
### Explicit Review/Recommend Model
```
import utils
from spotlight.evaluation import rmse_score
dataset_recommend_train, dataset_recommend_test, dataset_recommend_dev, dataset_play, dataset_purchase = utils.parse_steam()
print('Explicit dataset (TEST) contains %s interactions of %s users and %s items'%(
format(len(dataset_recommend_test.ratings), ','),
format(dataset_recommend_test.num_users, ','),
format(dataset_recommend_test.num_items, ',')))
print('Explicit dataset (VALID) contains %s interactions of %s users and %s items'%(
format(len(dataset_recommend_dev.ratings), ','),
format(dataset_recommend_dev.num_users, ','),
format(dataset_recommend_dev.num_items, ',')))
print('Explicit dataset (TRAIN) contains %s interactions of %s users and %s items'%(
format(len(dataset_recommend_train.ratings), ','),
format(dataset_recommend_train.num_users, ','),
format(dataset_recommend_train.num_items, ',')))
print('Implicit dataset (PLAY) contains %s interactions of %s users and %s items'%(
format(len(dataset_play.ratings), ','),
format(dataset_play.num_users, ','),
format(dataset_play.num_items, ',')))
print('Implicit dataset (PURCHASE) contains %s interactions of %s users and %s items'%(
format(len(dataset_purchase.ratings), ','),
format(dataset_purchase.num_users, ','),
format(dataset_purchase.num_items, ',')))
# train the explicit model based on recommend feedback
model = utils.train_explicit(train_interactions=dataset_recommend_train,
valid_interactions=dataset_recommend_dev,
run_name='model_steam_explicit_recommend')
mrr, ndcg, ndcg10, ndcg_5, mmap, success_10, success_5 = utils.evaluate(interactions=dataset_recommend_test,
model=model,
topk=20)
rmse = rmse_score(model=model, test=dataset_recommend_test)
print('-'*20)
print('RMSE: {:.4f}'.format(rmse))
print('MRR: {:.4f}'.format(mrr))
print('nDCG: {:.4f}'.format(ndcg))
print('nDCG@10: {:.4f}'.format(ndcg10))
print('nDCG@5: {:.4f}'.format(ndcg_5))
print('MAP: {:.4f}'.format(mmap))
print('success@10: {:.4f}'.format(success_10))
print('success@5: {:.4f}'.format(success_5))
```
## Remove all valid/test rating values
```
test_interact = set()
for (uid, iid) in zip(dataset_recommend_test.user_ids, dataset_recommend_test.item_ids):
test_interact.add((uid, iid))
for (uid, iid) in zip(dataset_recommend_dev.user_ids, dataset_recommend_dev.item_ids):
test_interact.add((uid, iid))
# clean implicit dataset from test/dev rating
for idx, (uid, iid, r) in enumerate(zip(dataset_play.user_ids, dataset_play.item_ids, dataset_play.ratings)):
if (uid, iid) in test_interact:
dataset_play.ratings[idx] = -1
```
### Explicit Play Model
Leverage the **explicit review/recommend model** trained in the previous section to annotate the **missing values** in the **play** dataset.
```
# annotate the missing values in the play dataset based on the explicit recommend model
dataset_play = utils.annotate(interactions=dataset_play,
model=model,
run_name='dataset_steam_play_explicit_annotated')
# train the explicit model based on recommend feedback
model = utils.train_explicit(train_interactions=dataset_play,
valid_interactions=dataset_recommend_dev,
run_name='model_steam_explicit_play')
# evaluate the new model
mrr, ndcg, ndcg10, ndcg_5, mmap, success_10, success_5 = utils.evaluate(interactions=dataset_recommend_test,
model=model,
topk=20)
rmse = rmse_score(model=model, test=dataset_recommend_test)
print('-'*20)
print('RMSE: {:.4f}'.format(rmse))
print('MRR: {:.4f}'.format(mrr))
print('nDCG: {:.4f}'.format(ndcg))
print('nDCG@10: {:.4f}'.format(ndcg10))
print('nDCG@5: {:.4f}'.format(ndcg_5))
print('MAP: {:.4f}'.format(mmap))
print('success@10: {:.4f}'.format(success_10))
print('success@5: {:.4f}'.format(success_5))
```
## Implicit Model Experiments
This section contains all the experiments based on the implicit matrix factorization model.
### Implicit Review/Recommend Model
```
import utils
from spotlight.evaluation import rmse_score
dataset_recommend_train, dataset_recommend_test, dataset_recommend_dev, dataset_play, dataset_purchase = utils.parse_steam()
print('Explicit dataset (TEST) contains %s interactions of %s users and %s items'%(
format(len(dataset_recommend_test.ratings), ','),
format(dataset_recommend_test.num_users, ','),
format(dataset_recommend_test.num_items, ',')))
print('Explicit dataset (VALID) contains %s interactions of %s users and %s items'%(
format(len(dataset_recommend_dev.ratings), ','),
format(dataset_recommend_dev.num_users, ','),
format(dataset_recommend_dev.num_items, ',')))
print('Explicit dataset (TRAIN) contains %s interactions of %s users and %s items'%(
format(len(dataset_recommend_train.ratings), ','),
format(dataset_recommend_train.num_users, ','),
format(dataset_recommend_train.num_items, ',')))
print('Implicit dataset (PLAY) contains %s interactions of %s users and %s items'%(
format(len(dataset_play.ratings), ','),
format(dataset_play.num_users, ','),
format(dataset_play.num_items, ',')))
print('Implicit dataset (PURCHASE) contains %s interactions of %s users and %s items'%(
format(len(dataset_purchase.ratings), ','),
format(dataset_purchase.num_users, ','),
format(dataset_purchase.num_items, ',')))
# train the explicit model based on recommend feedback
model = utils.train_implicit_negative_sampling(train_interactions=dataset_play,
valid_interactions=dataset_recommend_dev,
run_name='model_steam_implicit')
mrr, ndcg, ndcg10, ndcg_5, mmap, success_10, success_5 = utils.evaluate(interactions=dataset_recommend_test,
model=model,
topk=20)
rmse = rmse_score(model=model, test=dataset_recommend_test)
print('-'*20)
print('RMSE: {:.4f}'.format(rmse))
print('MRR: {:.4f}'.format(mrr))
print('nDCG: {:.4f}'.format(ndcg))
print('nDCG@10: {:.4f}'.format(ndcg10))
print('nDCG@5: {:.4f}'.format(ndcg_5))
print('MAP: {:.4f}'.format(mmap))
print('success@10: {:.4f}'.format(success_10))
print('success@5: {:.4f}'.format(success_5))
```
## Popularity
```
import utils
from popularity import PopularityModel
from spotlight.evaluation import rmse_score
dataset_recommend_train, dataset_recommend_test, dataset_recommend_dev, dataset_play, dataset_purchase = utils.parse_steam()
print('Explicit dataset (TEST) contains %s interactions of %s users and %s items'%(
format(len(dataset_recommend_test.ratings), ','),
format(dataset_recommend_test.num_users, ','),
format(dataset_recommend_test.num_items, ',')))
print('Explicit dataset (VALID) contains %s interactions of %s users and %s items'%(
format(len(dataset_recommend_dev.ratings), ','),
format(dataset_recommend_dev.num_users, ','),
format(dataset_recommend_dev.num_items, ',')))
print('Explicit dataset (TRAIN) contains %s interactions of %s users and %s items'%(
format(len(dataset_recommend_train.ratings), ','),
format(dataset_recommend_train.num_users, ','),
format(dataset_recommend_train.num_items, ',')))
print('Implicit dataset (PLAY) contains %s interactions of %s users and %s items'%(
format(len(dataset_play.ratings), ','),
format(dataset_play.num_users, ','),
format(dataset_play.num_items, ',')))
print('Implicit dataset (PURCHASE) contains %s interactions of %s users and %s items'%(
format(len(dataset_purchase.ratings), ','),
format(dataset_purchase.num_users, ','),
format(dataset_purchase.num_items, ',')))
# train the explicit model based on recommend feedback
model = PopularityModel()
print('fit the model')
model.fit(interactions=dataset_recommend_train)
# evaluate the new model
print('evaluate the model')
mrr, ndcg, ndcg10, ndcg_5, mmap, success_10, success_5 = utils.evaluate(interactions=dataset_recommend_test,
model=model,
topk=20)
# rmse = rmse_score(model=model, test=dataset_rate_test, batch_size=512)
# print('-'*20)
# print('RMSE: {:.4f}'.format(rmse))
print('MRR: {:.4f}'.format(mrr))
print('nDCG: {:.4f}'.format(ndcg))
print('nDCG@10: {:.4f}'.format(ndcg10))
print('nDCG@5: {:.4f}'.format(ndcg_5))
print('MAP: {:.4f}'.format(mmap))
print('success@10: {:.4f}'.format(success_10))
print('success@5: {:.4f}'.format(success_5))
```
###### Material developed for the short course: Introduction to the numerical solution of PDEs, given at ERMAC/2018, April 5-6, 2018, at the Universidade Federal de Lavras, Lavras/MG, Brazil. Author: [Jonas Laerte Ansoni](http://jonasansoni.blogspot.com.br/).
<img src="./figuras/logoemarc.png" width="30%">
# <center> Short course:<font color='blue'> Introduction to the numerical solution of PDEs
In this _IPython notebook_ we begin the study of the numerical solution of PDEs by presenting a simplified model of the dispersion of pollutants in the air. Through this example we introduce the finite difference method to solve the transport equation (the convection-diffusion equation) numerically.
## 2. A model for pollutant dispersion
<div class="alert alert-light" role="alert">
Example adapted from: Nachbin, A.; Tabak, E. Introdução à Modelagem Matemática e Computação Científica II, SBMAC, XX CNMAC/SBMAC, 1997, 114p - https://www.youtube.com/watch?v=iRzUUmEOUO8
</div>
A factory chimney releases smoke containing a toxic product with an initial concentration $\alpha$. A person interested in buying a house at a distance $d$ from the factory consults us about the air quality in the neighborhood.
<img src="./figuras/simpsons.gif" width="38%">
<img src="./figuras/fig1.png" width="75%">
#### <center> Figure 1. Diagram illustrating the pollutant dispersion problem.
Consider the worst possible situation, in which the wind blows toward the house with maximum speed $u$. The __transport__ mechanism, in which the cloud is carried (transported) by the wind without changing shape, is called __advection__. We must also take into account the __diffusion__ mechanism, in which the cloud spreads out while the pollutant concentration decreases.
We have just chosen the physical model, that is, we decided which kinds of mechanisms to take into account in our study. Now we need the mathematical representation of the transport and diffusion mechanisms.
<div class="alert alert-warning" role="alert">
<p><font color='Red'> __Goal__:</font> Compute the final pollutant level (that is, the concentration $c=c(x,t)$) in the neighborhood of the house.</p>
</div>
### 2.1. Simplified model: the transport equation
\begin{equation}
\frac{\partial c}{\partial t} = -u \frac{\partial c}{\partial x}+ k \frac{\partial^2 c}{\partial x^2}
\tag{1}
\end{equation}
<div class="alert alert-danger" role="alert">
<p> Termo advectivo (convectivo) + Temo difusivo: Equação de transporte</p>
</div>
Vamos desprezar inicialmente o termo difusivo: $k \frac{\partial^2 c}{\partial x^2}$. Temos então a equação de advecção (convecção), em que $u>0$ é constante:
\begin{equation}
\frac{\partial c}{\partial t} + u \frac{\partial c}{\partial x} = 0 \Longrightarrow \frac{\partial c}{\partial t} = -u \frac{\partial c}{\partial x}
\tag{2}
\end{equation}
where the term $-u \frac{\partial c}{\partial x}$ is the convective (also called inertial or advective) term.
The equation represents a *wave* propagating with speed $u$ in the $x$ direction. With the initial condition $c(x,0)=c_0(x)$ shown in the figure below, the equation has the exact solution:
\begin{equation}c(x,t)=c_0(x-ut)\tag{3}\end{equation}
<img src="./figuras/fig2.png" width="65%" label="assa">
#### <center> Figure 2. Modeling a pollutant cloud.
<div class="alert alert-warning" role="alert">
<p> Check this!!!</p>
</div>
##### (One can use, for example, the method of characteristics. An excellent explanation can be found in the lectures by Prof. André Nachbin (IMPA) [Link](https://youtu.be/2ugsEkBko-0))
##### *Book: J. C. Strickwerda, 'Finite Difference Schemes and Partial Differential Equations', Chapman & Hall, 1989
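A quick symbolic verification (a sketch using sympy, not part of the original notebook) confirms that $c(x,t)=c_0(x-ut)$ satisfies the advection equation for any differentiable profile $c_0$:
```python
import sympy

x, t, u = sympy.symbols('x t u')
c0 = sympy.Function('c0')            # arbitrary differentiable initial profile
c = c0(x - u*t)                      # candidate solution c(x, t) = c0(x - u t)

residual = sympy.diff(c, t) + u*sympy.diff(c, x)
print(sympy.simplify(residual))      # prints 0: the advection equation holds
```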
<img src="./figuras/fig3.png" width="45%">
#### <center> Figure 3. Characteristic curves for a wave with positive speed.
<div class="alert alert-warning" role="alert">
<h4><font color='red'> _Let us now see how to solve this wave equation numerically, and check the consequences of our choice of numerical methodology._</font></h4>
</div>
### <font color='blue'> 2.2. The finite difference method
For the solution to be computed, we need to discretize the equation in time and in space.
Consider a discrete *space-time* domain, where the coordinates in the horizontal direction represent the advance in time from $t^n$ to $t^{n+1}$, and the coordinates in the vertical direction represent movement in space: consecutive points are $x_{i-1}$, $x_i$, and $x_{i+1}$. This creates a grid in which each point carries both a temporal and a spatial index. Below is a graphical representation of the time-space grid:
\begin{matrix}
t^{n+1} & \rightarrow & \bullet && \bullet && \bullet \\
t^n & \rightarrow & \bullet && \bullet && \bullet \\
& & x_{i-1} && x_i && x_{i+1}
\end{matrix}
For the numerical solution of $c(x,t)$, we use subscripts to denote the spatial position, as in $c_i$, and superscripts to denote the time instant, as in $c^n$. The discrete representation is therefore $c^{n}_{i}$.
\begin{matrix}
& &\bullet & & \bullet & & \bullet \\
& &c^{n+1}_{i-1} & & c^{n+1}_i & & c^{n+1}_{i+1} \\
& &\bullet & & \bullet & & \bullet \\
& &c^n_{i-1} & & c^n_i & & c^n_{i+1} \\
& &\bullet & & \bullet & & \bullet \\
& &c^{n-1}_{i-1} & & c^{n-1}_i & & c^{n-1}_{i+1} \\
\end{matrix}
Another way to describe our discretization is to say that it is obtained with constant steps in time and space, $\Delta t$ and $\Delta x$, as follows:
\begin{eqnarray}
x_i &=& i\, \Delta x \quad \text{and} \quad t^n= n\, \Delta t \nonumber \\
c_i^n &=& c(i\, \Delta x, n\, \Delta t)
\end{eqnarray}
### <font color='red'> _"The Flash"_ derivation of the finite difference methods on the board.
<font color="blue"> Graphically:

#### <center> Figure 4. Points used in the approximations of the first derivative.
Let $f$ be a continuous function.
* Backward difference approximation:
$\frac{\partial f}{\partial x} \approx \frac{f_i - f_{i-1}}{\Delta x}$
* Forward difference approximation:
$\frac{\partial f}{\partial x} \approx \frac{f_{i+1} - f_{i}}{\Delta x}$
* Central difference approximation:
$\frac{\partial f}{\partial x} \approx \frac{f_{i+1} - f_{i-1}}{2\Delta x}$
#### Taylor series expansion
Assume a function $f$ that is continuous on the interval $[a, b]$ of interest and has continuous derivatives up to order $N$ on that interval. Taylor's theorem then allows us to write, for every $x \in [a,b]$,
\begin{equation}
f(x)=f(x_i)+(\Delta x) \frac{\partial f}{\partial x}\big|_i + \frac{(\Delta x)^2}{2!} \frac{\partial ^2 f}{\partial x^2}\big|_i + \frac{(\Delta x)^3}{3!} \frac{\partial ^3 f}{\partial x^3}\big|_i + ...
\tag{4}
\end{equation}
where $\Delta x=x-x_i$.
We want to determine the first derivative of a function $f$ at the point $x_i$. Expanding $f(x_i + \Delta x)$ in a Taylor series around the point $x_i$,
\begin{equation}
f(x_i+\Delta x)=f(x_i)+(\Delta x) \frac{\partial f}{\partial x}\big|_i + \frac{(\Delta x)^2}{2!} \frac{\partial ^2 f}{\partial x^2}\big|_i + \frac{(\Delta x)^3}{3!} \frac{\partial ^3 f}{\partial x^3}\big|_i + ...
\tag{5}
\end{equation}
Isolating the first derivative $\frac{\partial f}{\partial x}\big|_i$, we can write:
\begin{equation}
\frac{\partial f}{\partial x}\big|_i = \frac{f(x_i+\Delta x) - f(x_i)}{\Delta x}+ \left[-\frac{(\Delta x)}{2!} \frac{\partial ^2 f}{\partial x^2}\big|_i - \frac{(\Delta x)^2}{3!} \frac{\partial ^3 f}{\partial x^3}\big|_i - ...\right]
\tag{6}
\end{equation}
Expression 6 shows that the first derivative (__forward differences__) is:
\begin{equation}
\frac{\partial f}{\partial x}\big|_i = \frac{f(x_i+\Delta x) - f(x_i)}{\Delta x} + ELT
\tag{7}
\end{equation}
where
\begin{equation}
ELT = \left[-\frac{(\Delta x)}{2!} \frac{\partial ^2 f}{\partial x^2}\big|_i - \frac{(\Delta x)^2}{3!} \frac{\partial ^3 f}{\partial x^3}\big|_i - ...\right]
\end{equation}
is the _local truncation error_ (ELT). This error appears because we use a finite number of terms of the Taylor series. The ELT measures the difference between the exact value of the derivative and its numerical approximation, and it also indicates how this difference varies as $\Delta x$ is reduced.
Since we consider $0<\Delta x<1$, the dominant term of the ELT is assumed to be the first one, that is, the one with the lowest power of $\Delta x$.
The terms of the ELT (Eq. 7) are then represented by ${\mathcal O}(\Delta x)$ ("order $\Delta x$"). (Note: an expression of the type ${\mathcal O}(\Delta x)$ only indicates how the local truncation error varies with mesh refinement, not the value of the error.)
Carrying out the same process starting from the Taylor expansion of $f(x_i - \Delta x)$ around the point $x_i$:
\begin{equation}
f(x_i-\Delta x)=f(x_i)-(\Delta x) \frac{\partial f}{\partial x}\big|_i + \frac{(\Delta x)^2}{2!} \frac{\partial^2 f}{\partial x^2}\big|_i + {\mathcal O}(\Delta x)^3
\tag{8}
\end{equation}
Isolating the first derivative, we obtain:
\begin{equation}
\frac{\partial f}{\partial x}\big|_i = \frac{f_{i}-f_{i-1}}{\Delta x}+{\mathcal O}(\Delta x)
\tag{9}
\end{equation}
which is another approximation for the first derivative (__backward differences__).
Subtracting expressions 5 and 8 ($f(x_i+\Delta x)-f(x_i-\Delta x)$), so as to eliminate the term containing the second derivative of $f$, we obtain:
\begin{equation}
f(x_i+\Delta x)=f(x_i)+(\Delta x) \frac{\partial f}{\partial x}\big|_i + \frac{(\Delta x)^2}{2!} \frac{\partial^2 f}{\partial x^2}\big|_i + {\mathcal O}(\Delta x)^3
\end{equation}
\begin{equation}
f(x_i-\Delta x)=f(x_i)-(\Delta x) \frac{\partial f}{\partial x}\big|_i + \frac{(\Delta x)^2}{2!} \frac{\partial^2 f}{\partial x^2}\big|_i + {\mathcal O}(\Delta x)^3
\end{equation}
We obtain
\begin{equation}
\frac{\partial f}{\partial x}\big|_i = \frac{f_{i+1}-f_{i-1}}{2 \Delta x}+{\mathcal O}(\Delta x)^2
\tag{10}
\end{equation}
Note that the previous expression uses the points $x_{i-1}$ and $x_{i+1}$ to compute the first derivative of $f$ at the central point $x_i$. This expression is therefore called the __central difference__ approximation. It is also a second-order approximation.
Adding expressions 5 and 8 ($f(x_i+\Delta x)+f(x_i-\Delta x)$) and then isolating the lowest-order derivative term, we obtain the following expression:
\begin{equation}
\frac{\partial^2 f}{\partial x^2}\big|_i = \frac{f_{i+1}-2f_i+f_{i-1}}{(\Delta x)^2}+{\mathcal O}(\Delta x)^2
\tag{11}
\end{equation}
which is an approximation for second-order derivatives.
<div class="alert alert-light" role="alert">
For a step-by-step explanation, watch the lecture
(https://www.youtube.com/watch?v=iz22_37mMkk&list=PL30F4C5ABCE62CB61&index=5)
</div>
<div class="alert alert-success" role="alert">
<h4 class="alert-heading">Em resumo temos até agora:</h4>
<p>Diferenças progressivas:</p>
<p>\begin{equation}
\frac{\partial f}{\partial x}\big|_i = \frac{f_{i+i}-f_i}{(\Delta x)}+{\mathcal O}(\Delta x)
\end{equation}</p>
<p>Diferenças atrasadas:</p>
<p>\begin{equation}
\frac{\partial f}{\partial x}\big|_i = \frac{f_{i}-f_{i-1}}{(\Delta x)}+{\mathcal O}(\Delta x)
\end{equation}</p>
<p>Diferenças centrais:</p>
<p>\begin{equation}
\frac{\partial f}{\partial x}\big|_i = \frac{f_{i+i}-f_{i-1}}{2 \Delta x}+{\mathcal O}(\Delta x)^2
\end{equation}</p>
<p>Diferenças para derivadas de segunda ordem:</p>
<p>\begin{equation}
\frac{\partial^2 f}{\partial x^2}\big|_i = \frac{f_{i+i}-2f_i+f_{i-1}}{(\Delta x)^2}+{\mathcal O}(\Delta x)^2
\end{equation}</p>
</div>
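The orders of accuracy quoted above can be checked numerically. The sketch below (not part of the original notebook) compares the forward, backward, central and second-derivative approximations for $f(x)=\sin(x)$ at $x=1$; halving $\Delta x$ roughly halves the first-order errors and divides the second-order errors by about four:
```python
import numpy

def fd_errors(dx, x=1.0):
    exact1 = numpy.cos(x)                # exact first derivative of sin
    exact2 = -numpy.sin(x)               # exact second derivative of sin
    fwd = (numpy.sin(x+dx) - numpy.sin(x)) / dx
    bwd = (numpy.sin(x) - numpy.sin(x-dx)) / dx
    ctr = (numpy.sin(x+dx) - numpy.sin(x-dx)) / (2*dx)
    sec = (numpy.sin(x+dx) - 2*numpy.sin(x) + numpy.sin(x-dx)) / dx**2
    return [abs(fwd-exact1), abs(bwd-exact1), abs(ctr-exact1), abs(sec-exact2)]

for dx in (0.1, 0.05):
    print(dx, [round(e, 6) for e in fd_errors(dx)])
```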
### 2.3. Discretizing our model equation: a hyperbolic equation
Let us see how to discretize the 1D linear convection equation in both space and time. By definition, the partial derivative with respect to time changes only with time and not with space; its discretized form changes only the index $n$. Likewise, the partial derivative with respect to $x$ changes with space and not with time, and only the index $i$ is affected.
We discretize the spatial coordinate $x$ into points indexed from $i=0$ to $N$, and then step forward in discrete time in intervals of size $\Delta t$.
Taking the derivative with respect to time and discretizing it with forward differences:
\begin{equation}\frac{\partial c}{\partial t}(i\Delta x, n\Delta t)\approx \frac{c^{n+1}_i-c^n_i}{\Delta t}\end{equation}
Approximating the spatial derivative with backward differences:
\begin{equation}
\frac{\partial c}{\partial x}(i\Delta x, n\Delta t)\approx \frac{c^{n}_i-c^n_{i-1}}{\Delta x}\end{equation}
This discretization is called <font color='blue'> **upwind,** </font> in which the approximation of the spatial derivative depends on the direction of propagation of the characteristics (it takes information from the direction the flow comes from).
For the equation at hand $u>0$, so applying the *upwind* discretization above we have:
\begin{equation} \frac{c^{n+1}_i-c^n_i}{\Delta t}+u\frac{c^{n}_i-c^n_{i-1}}{\Delta x}=0 \end{equation}
or
\begin{equation} c^{n+1}_i=c^n_i-u\frac{\Delta t}{\Delta x}(c^{n}_i-c^n_{i-1}) \end{equation}

#### <center> Figure 6. Computational stencil for the "forward-time/backward-space" scheme.
### <span class="badge badge-pill badge-warning">_Question:_</span> <font color='Orange'> Is the method presented explicit or implicit??</font>
### Let's compute!!!
Next we load the libraries we will use to handle arrays and plot graphs. *Let's get a little Python on the road.*
```
import numpy
from matplotlib import pyplot
%matplotlib inline
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
```
As a first exercise, let us solve the one-dimensional linear equation with a *square wave* as the initial condition, described as follows:
\begin{equation}
c(x,0)=\begin{cases}1 & \text{if } 0.5\leq x \leq 1,\\
0 & \text{anywhere else in } (0, 2)
\end{cases}
\end{equation}
The boundary condition is $c=0$ at $x=0$. The spatial domain for the numerical solution is the interval $x\in (0, 2)$.
<img src="./figuras/fig4.png" width="50%" align="center">
#### <center> Figure 5. "Square" wave initial condition.
Now let's define a few variables; we want to build a uniformly spaced grid of points within the spatial domain. In the code below, we define a variable `nx`, the number of spatial grid points, and a variable `dx`, the distance between any pair of adjacent points. We also define the time step `dt` and the total number of time steps `nt`. The wave speed (wind speed) is set to $u=1$.
```
nx = 41
dx = 2/(nx-1)
nt = 25 #25
dt = .025
u = 1 #assume wavespeed of u = 1
x = numpy.linspace(0,2,nx)
```
We need to set up the initial condition of the problem. Here we use the NumPy function `zeros()` to define an array with `nx` elements, all equal to $0$ (there is also a `ones()` function). We can then *modify a chunk* of that array where $c=1$ to obtain the wave, and print the initial array just to check it.
But which values should we change?
The task is to find the indices of `c` for which the square wave starts at $x = 0.5$ and ends at $x = 1$.
For this we use the function `numpy.where`, which returns a list of the indices where the array $x$ meets (or does not meet) some condition.
```python
c = numpy.zeros(nx) #numpy function ones()
lbound = numpy.where(x >= 0.5)
ubound = numpy.where(x <= 1)
print(lbound)
print(ubound)
```
```
c = numpy.zeros(nx) #numpy function ones()
lbound = numpy.where(x >= 0.5)
ubound = numpy.where(x <= 1)
print(lbound)
print(ubound)
```
This gives us two arrays: `lbound`, with the indices where $x \geq 0.5$, and `ubound`, with the indices where $x \leq 1$. To combine the two, we take their intersection with the function `numpy.intersect1d`.
```
bounds = numpy.intersect1d(lbound, ubound)
c[bounds]=1 #setting c = 1 between 0.5 and 1 as per our I.C.s
print(c)
```
In Python we can combine commands. We could write
```Python
c[numpy.intersect1d(numpy.where(x >= 0.5), numpy.where(x <= 1))] = 1
```
but that makes the code a bit harder to read.
Now let's plot the initial condition to check that everything is OK.
```
pyplot.plot(x, c, color='#003366', ls='--', lw=3)
pyplot.ylim(0,1.5);
```
### <font color='orange'> It looks very close to what we expected. But the sides of the square wave do not appear perfectly vertical. Is that right? Think about it for a moment!!!
### Discrete form of the linear convection equation
Now it is time to write some code for the discrete form of the convection equation, using our chosen finite difference scheme.
For every element of our array `c`, we need to perform the operation:
$$c_i^{n+1} = c_i^n - u \frac{\Delta t}{\Delta x}(c_i^n-c_{i-1}^n)$$
We will store the result in a new (temporary) array `cn`, which will hold the solution $c$ for the next time step. We repeat this operation for as many time steps as specified, and then we can see how far the wave has traveled.
So we can think of this as two iterative operations: one in space and one in time (we will learn to do it differently later), and we start by nesting a spatial loop inside the time loop, as shown below. You can see that the finite difference code is a direct expression of the discrete equation:
```
for n in range(1,nt):
cn = c.copy()
for i in range(1,nx):
c[i] = cn[i]-u*dt/dx*(cn[i]-cn[i-1])
```
**Remark 1:** We noted above that the problem needs a boundary condition at $x=0$. Here it was not necessary to impose this condition at every iteration, because the discretization does not change the value of c[0].
**Remark 2:** We will learn later that the code as written above is quite inefficient, and that there are better, more Pythonic ways to write it. But let's keep going.
Now let's inspect our solution array after marching forward in time, with a line plot.
```
pyplot.plot(x, c, color='#003366', ls='--', lw=3)
pyplot.ylim(0,1.5);
```
### <font color='orange'> Our square wave has definitely moved to the right, but it no longer has the shape of a top hat. **What is going on?**
```
import io
import base64
from IPython.display import HTML
video = io.open('./figuras/wave1.mp4', 'r+b').read()
encoded = base64.b64encode(video)
HTML(data='''<video alt="test" controls><source src="data:video/mp4;base64,{0}" type="video/mp4" /> </video>'''.format(encoded.decode('ascii')))
```
### A deeper look
The solution differs from the expected square wave because the discretized equation is an approximation of the continuous differential equation that we want to solve. There are errors: we knew that. But the modified shape of the initial wave is something curious. Maybe it can be improved by making the grid spacing finer. Why don't you try it? Does it help?
### 2.4 Spatial truncation error
Recall the finite difference approximation of the spatial derivative:
\begin{equation}\frac{\partial c}{\partial x}\approx \frac{c(x+\Delta x)-c(x)}{\Delta x}\end{equation}
We obtained it from the definition of the derivative at a point, simply dropping the limit and assuming that $\Delta x$ is very small. But we already learned with Euler's method that this carries an error, called the *truncation error*.
Using a Taylor series expansion for the spatial terms now, we see that the backward difference scheme produces a first-order method in space.
\begin{equation}
\frac{\partial c}{\partial x}(x_i) = \frac{c(x_i)-c(x_{i-1})}{\Delta x} + \frac{\Delta x}{2} \frac{\partial^2 c}{\partial x^2}(x_i) - \frac{\Delta x^2}{6} \frac{\partial^3 c}{\partial x^3}(x_i)+ \cdots
\end{equation}
The dominant term neglected in the finite difference approximation is of $\mathcal{O}(\Delta x)$. We also see that the approximation *converges* to the exact derivative as $\Delta x \rightarrow 0$.
In summary, the chosen *"forward-time/backward-space"* difference scheme is first order in both space and time: the truncation errors are $\mathcal{O}(\Delta t, \Delta x)$.
__We will come back to this!__
### 2.5. Nonlinear convection
Let us now consider the nonlinear convection equation, using the same methods as before. The 1-D convection equation is:
\begin{equation}\frac{\partial c}{\partial t} + c \frac{\partial c}{\partial x} = 0\end{equation}
The only difference from the linear case is that we replace the constant wave speed $u$ by the variable speed $c$. The equation is nonlinear because we now have a product of the solution and one of its derivatives: the product $c\,\partial c / \partial x$. That changes everything!
We use the same discretization we learned for the linear convection case: forward differences in time and backward differences in space. Here is the discretized equation:
\begin{equation}\frac{c_i^{n+1}-c_i^n}{\Delta t} + c_i^n \frac{c_i^n-c_{i-1}^n}{\Delta x} = 0\end{equation}
Solving for the term $c_i^{n+1}$, we obtain the following expression used to advance in time:
\begin{equation}c_i^{n+1} = c_i^n - c_i^n \frac{\Delta t}{\Delta x} (c_i^n - c_{i-1}^n)\end{equation}
Very little needs to change in the code written so far. In fact, we will even use the same square wave initial condition. But let's reinitialize the solution array (called `u` in the code cell below) with the initial values and retype the numerical parameters here, for convenience (we no longer need a separate constant wave speed).
```
##problem parameters
nx = 41
dx = 2/(nx-1)
nt = 10
dt = .02
##initial conditions
u = numpy.ones(nx)
u[numpy.intersect1d(lbound, ubound)]=2
```
How does it look?
```
pyplot.plot(x, u, color='#003366', ls='--', lw=3)
pyplot.ylim(0,2.5);
```
By changing only one line of code in the linear convection solution, we can now obtain the nonlinear solution: the line corresponding to the discrete equation now has `un[i]` multiplying the difference, where before we had only the constant `u`. So you could write something like:
```Python
for n in range(1,nt):
un = u.copy()
for i in range(1,nx):
u[i] = un[i]-un[i]*dt/dx*(un[i]-un[i-1])
```
We will be smarter than that and use NumPy to update all the values on the spatial grid at once. We really don't need to write a line of code that runs *for each* value of the solution on the spatial grid: Python can update them all in one go! Study the code below and compare it with the code above. Here is a useful sketch to illustrate the array operation - also called a "vectorized" operation - for $c_i-c_{i-1}$.

<br>
#### <center>Figure 7. Sketch explaining the vectorized stencil operation. Adapted from ["Indices point between elements"](https://blog.nelhage.com/2015/08/indices-point-between-elements/) by Nelson Elhage.
```
for n in range(1, nt):
un = u.copy()
u[1:] = un[1:]-un[1:]*dt/dx*(un[1:]-un[0:-1])
pyplot.plot(x, u, color='#003366', ls='--', lw=3)
pyplot.ylim(0,2.5);
```
Hmm. That is quite interesting: as in the linear case, we see that we have lost the sharp sides of our initial square wave, but there is more. Now the wave has also lost its symmetry! It seems to be lagging at the back, while the front of the wave is steepening. Is this another form of numerical error, you might ask? No! It's physics!
#### <font color="orange"> Think about it!!!
Think about the effect of having replaced the constant wave speed $u$ by the variable speed given by the solution $c$. It means that different parts of the wave move at different speeds. Sketch an initial wave and think about where the speed is highest and where it is lowest...
## Exercise:
With the solution parameters we initially suggested, the spatial grid had 41 points and the *timestep* was 0.025. Now let's change the number of points in the grid. Write the code corresponding to the linear convection case as a function, so that we can easily examine what happens when we adjust just one variable: the grid size. Test with *nx* = 41, 61, 71, and 85 and see what happens.
```
import numpy
from matplotlib import pyplot
%matplotlib inline
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
def linearconv(nx):
"""Solve the linear convection equation.
Solves the equation d_t c + u d_x c = 0 where
* the wavespeed u is set to 1
    * the domain is x \in [0, 2]
    * 25 timesteps are taken, with \Delta t = 0.025
    * the initial data is the square ("hat") wave
Produces a plot of the results
Parameters
----------
nx : integer
number of internal grid points
Returns
-------
None : none
"""
dx = 2/(nx-1)
nt = 25
dt = .025
u = 1
x = numpy.linspace(0,2,nx)
c = numpy.zeros(nx)
lbound = numpy.where(x >= 0.5)
ubound = numpy.where(x <= 1)
c[numpy.intersect1d(lbound, ubound)]=1
cn = numpy.zeros(nx)
for n in range(nt):
cn = c.copy()
c[1:] = cn[1:] -u*dt/dx*(cn[1:] -cn[0:-1])
c[0] = 0.0
pyplot.plot(x, c, color='#003366', ls='--', lw=3)
pyplot.ylim(-0.5,1.5);
```
### Let's compute!!
```
linearconv(41) #convection using 41 grid points
linearconv(61) #convection using 61 grid points
linearconv(71) #convection using 71 grid points
linearconv(85) #convection using 85 grid points
```
### What happened??
### <font color='red'> Let's look at it in more detail!
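One diagnostic worth computing before digging deeper (a hint added here, not part of the original notebook) is the ratio $\sigma = u\,\Delta t/\Delta x$ for each grid. The forward-time/backward-space scheme is known to behave well only while this ratio stays at or below 1, and refining the grid while keeping $\Delta t$ fixed eventually pushes it past that limit:
```python
# Quick diagnostic: the ratio u*dt/dx for each grid used above.
# Note which grid pushes the ratio above 1.
u, dt = 1, 0.025
for nx in (41, 61, 71, 85):
    dx = 2/(nx-1)
    print(nx, u*dt/dx)
```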
```
from IPython.core.display import HTML
css_file = '../styles/custom.css'
HTML(open(css_file, "r").read())
```
# Grid Search
Let's incorporate grid search into your modeling process. To start, include an import statement for `GridSearchCV` below.
```
import nltk
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger'])
import re
import numpy as np
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.metrics import confusion_matrix
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
def tokenize(text):
detected_urls = re.findall(url_regex, text)
for url in detected_urls:
text = text.replace(url, "urlplaceholder")
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
class StartingVerbExtractor(BaseEstimator, TransformerMixin):
def starting_verb(self, text):
sentence_list = nltk.sent_tokenize(text)
for sentence in sentence_list:
pos_tags = nltk.pos_tag(tokenize(sentence))
first_word, first_tag = pos_tags[0]
if first_tag in ['VB', 'VBP'] or first_word == 'RT':
return True
return False
def fit(self, x, y=None):
return self
def transform(self, X):
X_tagged = pd.Series(X).apply(self.starting_verb)
return pd.DataFrame(X_tagged)
```
### View parameters in pipeline
Before modifying your `build_model` function to include grid search, view the parameters available in your pipeline here.
```
pipeline = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
])),
('starting_verb', StartingVerbExtractor())
])),
('clf', RandomForestClassifier())
])
pipeline.get_params()
```
### Modify your `build_model` function to return a GridSearchCV object.
Try to grid search some parameters in your data transformation steps as well as those for your classifier! Browse the parameters you can search above.
```
def build_model():
pipeline = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
])),
('starting_verb', StartingVerbExtractor())
])),
('clf', RandomForestClassifier())
])
# specify parameters for grid search
parameters = {
'features__text_pipeline__vect__ngram_range': ((1, 1), (1, 2)),
'features__text_pipeline__vect__max_df': (0.5, 0.75, 1.0),
'features__text_pipeline__vect__max_features': (None, 5000, 10000),
'features__text_pipeline__tfidf__use_idf': (True, False),
'clf__n_estimators': [50, 100, 200],
'clf__min_samples_split': [2, 3, 4],
'features__transformer_weights': (
{'text_pipeline': 1, 'starting_verb': 0.5},
{'text_pipeline': 0.5, 'starting_verb': 1},
{'text_pipeline': 0.8, 'starting_verb': 1},
)
}
# create grid search object
cv = GridSearchCV(pipeline, param_grid=parameters)
return cv
```
### Run program to test
Running grid search can take a while, especially if you are searching over a lot of parameters! If you want to bring the runtime down to a few minutes, try commenting out some of your parameters so that you grid search over just 1 or 2 parameters with a small number of values each. Once you know that works, feel free to add more parameters back in and see how well your final model can perform! You can try this out in the next page.
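For instance, a pared-down search such as the one below (the specific values are illustrative choices, not the exercise's required grid) usually finishes quickly; `cv=3` and `n_jobs=-1` cut wall-clock time further by using fewer folds and all available cores.
```
# A minimal sketch of a reduced search space for faster experimentation.
quick_parameters = {
    'features__text_pipeline__vect__ngram_range': ((1, 1), (1, 2)),
    'clf__n_estimators': [50, 100],
}
quick_cv = GridSearchCV(pipeline, param_grid=quick_parameters, cv=3, n_jobs=-1)
```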
```
def load_data():
df = pd.read_csv('corporate_messaging.csv', encoding='latin-1')
df = df[(df["category:confidence"] == 1) & (df['category'] != 'Exclude')]
X = df.text.values
y = df.category.values
return X, y
def display_results(cv, y_test, y_pred):
labels = np.unique(y_pred)
confusion_mat = confusion_matrix(y_test, y_pred, labels=labels)
accuracy = (y_pred == y_test).mean()
print("Labels:", labels)
print("Confusion Matrix:\n", confusion_mat)
print("Accuracy:", accuracy)
print("\nBest Parameters:", cv.best_params_)
def main():
X, y = load_data()
X_train, X_test, y_train, y_test = train_test_split(X, y)
model = build_model()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
display_results(model, y_test, y_pred)
main()
```
## QM9 Dataset exploration
The purpose of this notebook is as follows:
- Explain the [QM9 dataset](http://quantum-machine.org/datasets/): check the labels and visualize molecules to understand what kind of data is stored.
- Explain the internal structure of the QM9 dataset in `chainer_chemistry`: the dataset is handled with `NumpyTupleDataset`.
- Explain how `preprocessor` and `parser` work in `chainer_chemistry`: one concrete example using `GGNNPreprocessor` is explained.
Training a graph convolutional network on this dataset is out of scope for this notebook; please refer to the [documentation tutorial](http://chainer-chemistry.readthedocs.io/en/latest/tutorial.html#) or try `train_qm9.py` in the [QM9 example](https://github.com/pfnet-research/chainer-chemistry/tree/master/examples/qm9) for model training.
[Note]
This notebook was executed on March 1, 2018.
The behavior of the QM9 dataset loader in `chainer_chemistry` might change in the future.
First, load the modules and set the log level.
```
import logging
from rdkit import RDLogger
from chainer_chemistry import datasets
# Disable errors by RDKit occurred in preprocessing QM9 dataset.
lg = RDLogger.logger()
lg.setLevel(RDLogger.CRITICAL)
# show INFO level log from chainer chemistry
logging.basicConfig(level=logging.INFO)
```
The QM9 dataset can be downloaded automatically by Chainer Chemistry.
The original QM9 distribution is a zipped archive in which each molecule's information is stored in its own "xyz" file.
Chainer Chemistry automatically merges this information into a single csv file internally; you can check the path of this csv file with the `get_qm9_filepath` method.
```
dataset_filepath = datasets.get_qm9_filepath()
print('dataset_filepath =', dataset_filepath)
```
The dataset contains several chemical/physical properties. The labels of the QM9 dataset can be listed with the `get_qm9_label_names` method.
```
label_names = datasets.get_qm9_label_names()
print('QM9 label_names =', label_names)
```
More detailed information is given in the `readme.txt` of the QM9 dataset, which can be downloaded from
- [https://figshare.com/articles/Readme_file%3A_Data_description_for__Quantum_chemistry_structures_and_properties_of_134_kilo_molecules_/1057641](https://figshare.com/articles/Readme_file%3A_Data_description_for__Quantum_chemistry_structures_and_properties_of_134_kilo_molecules_/1057641)
Below is the description of each property (label), as written in readme.txt:
<a id='table1'></a>
<blockquote cite="https://figshare.com/articles/Readme_file%3A_Data_description_for__Quantum_chemistry_structures_and_properties_of_134_kilo_molecules_/1057641">
<pre>
I. Property Unit Description
-- -------- ----------- --------------
1 tag - "gdb9"; string constant to ease extraction via grep
2 index - Consecutive, 1-based integer identifier of molecule
3 A GHz Rotational constant A
4 B GHz Rotational constant B
5 C GHz Rotational constant C
6 mu Debye Dipole moment
7 alpha Bohr^3 Isotropic polarizability
8 homo Hartree Energy of Highest occupied molecular orbital (HOMO)
9 lumo Hartree Energy of Lowest unoccupied molecular orbital (LUMO)
10 gap Hartree Gap, difference between LUMO and HOMO
11 r2 Bohr^2 Electronic spatial extent
12 zpve Hartree Zero point vibrational energy
13 U0 Hartree Internal energy at 0 K
14 U Hartree Internal energy at 298.15 K
15 H Hartree Enthalpy at 298.15 K
16 G Hartree Free energy at 298.15 K
17 Cv cal/(mol K) Heat capacity at 298.15 K
</pre>
</blockquote>
### Preprocessing dataset
Dataset extraction depends on the preprocessing method, which is determined by the `preprocessor`.
Here, let's look at an example that uses the `GGNNPreprocessor` for QM9 dataset extraction.
The procedure is as follows:
1. Instantiate the `preprocessor` (here `GGNNPreprocessor` is used).
2. Call the `get_qm9` method with this `preprocessor`.
 - The `labels=None` option extracts all labels; in this case, 15 physical properties are extracted (see the table above).
Note that the `return_smiles` option can be used to obtain the SMILES string of each molecule together with the dataset itself.
```
from chainer_chemistry.dataset.preprocessors.ggnn_preprocessor import \
GGNNPreprocessor
preprocessor = GGNNPreprocessor()
dataset, dataset_smiles = datasets.get_qm9(preprocessor, labels=None, return_smiles=True)
```
## Check extracted dataset
First, let's check the type and size of the dataset.
```
print('dataset information...')
print('dataset', type(dataset), len(dataset))
print('smiles information...')
print('dataset_smiles', type(dataset_smiles), len(dataset_smiles))
```
As you can see, the QM9 dataset consists of 133,885 entries.
The dataset is an instance of `NumpyTupleDataset`, and the features of the i-th entry can be accessed with `dataset[i]`.
When `GGNNPreprocessor` is used, each entry consists of the following features:
1. atom feature: the atomic numbers of the atoms in the molecule.
2. adjacency matrix feature: the adjacency matrices of the molecule.
`GGNNPreprocessor` extracts one adjacency matrix per bond type.
3. label feature: the chemical properties (labels) of the molecule.
Please refer to the [table above](#table1) for details.
Let's look at the 7777th entry as an example.
```
index = 7777
print('index={}, SMILES={}'.format(index, dataset_smiles[index]))
atom, adj, labels = dataset[index]
# This molecule has N=8 atoms.
print('atom', atom.shape, atom)
# adjacency matrix is NxN matrix, where N is number of atoms in the molecule.
# Unlike usual adjacency matrix, diagonal elements are filled with 1, for NFP calculation purpose.
print('adj', adj.shape)
print('adjacency matrix for SINGLE bond type\n', adj[0])
print('adjacency matrix for DOUBLE bond type\n', adj[1])
print('adjacency matrix for TRIPLE bond type\n', adj[2])
print('adjacency matrix for AROMATIC bond type\n', adj[3])
print('labels', labels)
```
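Since the raw label vector is hard to read on its own, a small helper (not part of the original notebook, and assuming the ordering of `get_qm9_label_names()` matches the label vector) can pair each value with its name:
```
# Pair each label value with its name for readability.
for name, value in zip(label_names, labels):
    print('{:8s} {:>14.6f}'.format(name, float(value)))
```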
## Visualizing the molecule
One might want to visualize a molecule given its SMILES string. Here is an example:
```
# This script is referred from http://rdkit.blogspot.jp/2015/02/new-drawing-code.html
# and http://cheminformist.itmol.com/TEST/wp-content/uploads/2015/07/rdkit_moldraw2d_2.html
from __future__ import print_function
from rdkit import Chem
from rdkit.Chem.Draw import IPythonConsole
from IPython.display import SVG
from rdkit.Chem import rdDepictor
from rdkit.Chem.Draw import rdMolDraw2D
def moltosvg(mol,molSize=(450,150),kekulize=True):
mc = Chem.Mol(mol.ToBinary())
if kekulize:
try:
Chem.Kekulize(mc)
except:
mc = Chem.Mol(mol.ToBinary())
if not mc.GetNumConformers():
rdDepictor.Compute2DCoords(mc)
drawer = rdMolDraw2D.MolDraw2DSVG(molSize[0],molSize[1])
drawer.DrawMolecule(mc)
drawer.FinishDrawing()
svg = drawer.GetDrawingText()
return svg
def render_svg(svg):
# It seems that the svg renderer used doesn't quite hit the spec.
# Here are some fixes to make it work in the notebook, although I think
# the underlying issue needs to be resolved at the generation step
return SVG(svg.replace('svg:',''))
smiles = dataset_smiles[index]
mol = Chem.MolFromSmiles(dataset_smiles[index])
print('smiles:', smiles)
svg = moltosvg(mol)
render_svg(svg)
```
[Note] SVG images cannot be displayed on GitHub, but you will see an image of the molecule when you execute this cell in a Jupyter notebook.
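If you need a rendering that also shows up in static viewers, one alternative (a small sketch using RDKit's `Draw.MolToImage`, which returns a PIL image that Jupyter displays inline) is:
```
from rdkit.Chem import Draw
# Returns a PIL image; it renders inline in Jupyter and exports as a raster image.
Draw.MolToImage(mol, size=(300, 300))
```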
## Interactively browse the QM9 dataset
Jupyter provides handy widgets to check and visualize data. Here, the `interact` widget is used to browse interactively through the contents of the QM9 dataset.
```
from ipywidgets import interact
import numpy as np
np.set_printoptions(precision=3, suppress=True)
def show_dataset(index):
print('index={}, SMILES={}'.format(index, dataset_smiles[index]))
atom, adj, labels = dataset[index]
print('atom', atom)
# print('adj', adj)
print('labels', labels)
mol = Chem.MolFromSmiles(dataset_smiles[index])
return render_svg(moltosvg(mol))
interact(show_dataset, index=(0, len(dataset) - 1, 1))
```
## Appendix: how to save the molecule figure
### 1. Save in SVG format
The first method is to simply write the SVG string to a file.
```
import os
dirpath = 'images'
if not os.path.exists(dirpath):
os.mkdir(dirpath)
def save_svg(mol, filepath):
svg = moltosvg(mol)
with open(filepath, "w") as fw:
fw.write(svg)
index = 7777
save_filepath = os.path.join(dirpath, 'mol_{}.svg'.format(index))
print('drawing {}'.format(save_filepath))
mol = Chem.MolFromSmiles(dataset_smiles[index])
save_svg(mol, save_filepath)
```
### 2. Save in PNG format
`rdkit` provides the `Draw.MolToFile` method to render a mol instance and save it in PNG format.
```
from rdkit.Chem import Draw
def save_png(mol, filepath, size=(600, 600)):
Draw.MolToFile(mol, filepath, size=size)
index = 7777
save_filepath = os.path.join(dirpath, 'mol_{}.png'.format(index))
print('drawing {}'.format(save_filepath))
mol = Chem.MolFromSmiles(dataset_smiles[index])
save_png(mol, save_filepath, size=(600, 600))
```
```
keywords = ["alignas", "alignof", "and", "and_eq", "asm", "atomic_cancel", "atomic_commit",
"atomic_noexcept", "auto", "bitand", "bitor", "bool", "break", "case", "catch",
"char", "char8_t", "char16_t", "char32_t", "class", "compl", "concept", "const",
"consteval", "constexpr", "constinit", "const_cast", "continue", "co_await",
"co_return", "co_yield", "decltype", "default", "delete", "do", "double", "dynamic_cast",
"else", "enum", "explicit", "export", "extern", "false", "float", "for", "friend", "goto",
"if", "inline", "int", "long", "mutable", "namespace", "new", "noexcept", "not", "not_eq",
"nullptr", "operator", "or", "or_eq", "private", "protected", "public", "reflexpr",
"register", "reinterpret_cast", "requires", "return", "short", "signed", "sizeof", "static",
"static_assert", "static_cast", "struct", "switch", "synchronized", "template", "this",
"thread_local", "throw", "true", "try", "typedef", "typeid", "typename", "union", "unsigned",
"using", "virtual", "void", "volatile", "wchar_t", "while", "xor", "xor_eq", "NULL"]
puncs = '~`!@#$%^&*()-+={[]}|\\;:\'\"<,>.?/'
puncs = list(puncs)
l_funcs = ['StrNCat', 'getaddrinfo', '_ui64toa', 'fclose', 'pthread_mutex_lock', 'gets_s', 'sleep',
'_ui64tot', 'freopen_s', '_ui64tow', 'send', 'lstrcat', 'HMAC_Update', '__fxstat', 'StrCatBuff',
'_mbscat', '_mbstok_s', '_cprintf_s', 'ldap_search_init_page', 'memmove_s', 'ctime_s', 'vswprintf',
'vswprintf_s', '_snwprintf', '_gmtime_s', '_tccpy', '*RC6*', '_mbslwr_s', 'random',
'__wcstof_internal', '_wcslwr_s', '_ctime32_s', 'wcsncat*', 'MD5_Init', '_ultoa',
'snprintf', 'memset', 'syslog', '_vsnprintf_s', 'HeapAlloc', 'pthread_mutex_destroy',
'ChangeWindowMessageFilter', '_ultot', 'crypt_r', '_strupr_s_l', 'LoadLibraryExA', '_strerror_s',
'LoadLibraryExW', 'wvsprintf', 'MoveFileEx', '_strdate_s', 'SHA1', 'sprintfW', 'StrCatNW',
'_scanf_s_l', 'pthread_attr_init', '_wtmpnam_s', 'snscanf', '_sprintf_s_l', 'dlopen',
'sprintfA', 'timed_mutex', 'OemToCharA', 'ldap_delete_ext', 'sethostid', 'popen', 'OemToCharW',
'_gettws', 'vfork', '_wcsnset_s_l', 'sendmsg', '_mbsncat', 'wvnsprintfA', 'HeapFree', '_wcserror_s',
'realloc', '_snprintf*', 'wcstok', '_strncat*', 'StrNCpy', '_wasctime_s', 'push*', '_lfind_s',
'CC_SHA512', 'ldap_compare_ext_s', 'wcscat_s', 'strdup', '_chsize_s', 'sprintf_s', 'CC_MD4_Init',
'wcsncpy', '_wfreopen_s', '_wcsupr_s', '_searchenv_s', 'ldap_modify_ext_s', '_wsplitpath',
'CC_SHA384_Final', 'MD2', 'RtlCopyMemory', 'lstrcatW', 'MD4', 'MD5', '_wcstok_s_l', '_vsnwprintf_s',
'ldap_modify_s', 'strerror', '_lsearch_s', '_mbsnbcat_s', '_wsplitpath_s', 'MD4_Update', '_mbccpy_s',
'_strncpy_s_l', '_snprintf_s', 'CC_SHA512_Init', 'fwscanf_s', '_snwprintf_s', 'CC_SHA1', 'swprintf',
'fprintf', 'EVP_DigestInit_ex', 'strlen', 'SHA1_Init', 'strncat', '_getws_s', 'CC_MD4_Final',
'wnsprintfW', 'lcong48', 'lrand48', 'write', 'HMAC_Init', '_wfopen_s', 'wmemchr', '_tmakepath',
'wnsprintfA', 'lstrcpynW', 'scanf_s', '_mbsncpy_s_l', '_localtime64_s', 'fstream.open', '_wmakepath',
'Connection.open', '_tccat', 'valloc', 'setgroups', 'unlink', 'fstream.put', 'wsprintfA', '*SHA1*',
'_wsearchenv_s', 'ualstrcpyA', 'CC_MD5_Update', 'strerror_s', 'HeapCreate', 'ualstrcpyW', '__xstat',
'_wmktemp_s', 'StrCatChainW', 'ldap_search_st', '_mbstowcs_s_l', 'ldap_modify_ext', '_mbsset_s',
'strncpy_s', 'move', 'execle', 'StrCat', 'xrealloc', 'wcsncpy_s', '_tcsncpy*', 'execlp',
'RIPEMD160_Final', 'ldap_search_s', 'EnterCriticalSection', '_wctomb_s_l', 'fwrite', '_gmtime64_s',
'sscanf_s', 'wcscat', '_strupr_s', 'wcrtomb_s', 'VirtualLock', 'ldap_add_ext_s', '_mbscpy',
'_localtime32_s', 'lstrcpy', '_wcsncpy*', 'CC_SHA1_Init', '_getts', '_wfopen', '__xstat64',
'strcoll', '_fwscanf_s_l', '_mbslwr_s_l', 'RegOpenKey', 'makepath', 'seed48', 'CC_SHA256',
'sendto', 'execv', 'CalculateDigest', 'memchr', '_mbscpy_s', '_strtime_s', 'ldap_search_ext_s',
'_chmod', 'flock', '__fxstat64', '_vsntprintf', 'CC_SHA256_Init', '_itoa_s', '__wcserror_s',
'_gcvt_s', 'fstream.write', 'sprintf', 'recursive_mutex', 'strrchr', 'gethostbyaddr', '_wcsupr_s_l',
'strcspn', 'MD5_Final', 'asprintf', '_wcstombs_s_l', '_tcstok', 'free', 'MD2_Final', 'asctime_s',
'_alloca', '_wputenv_s', '_wcsset_s', '_wcslwr_s_l', 'SHA1_Update', 'filebuf.sputc', 'filebuf.sputn',
'SQLConnect', 'ldap_compare', 'mbstowcs_s', 'HMAC_Final', 'pthread_condattr_init', '_ultow_s', 'rand',
'ofstream.put', 'CC_SHA224_Final', 'lstrcpynA', 'bcopy', 'system', 'CreateFile*', 'wcscpy_s',
'_mbsnbcpy*', 'open', '_vsnwprintf', 'strncpy', 'getopt_long', 'CC_SHA512_Final', '_vsprintf_s_l',
'scanf', 'mkdir', '_localtime_s', '_snprintf', '_mbccpy_s_l', 'memcmp', 'final', '_ultoa_s',
'lstrcpyW', 'LoadModule', '_swprintf_s_l', 'MD5_Update', '_mbsnset_s_l', '_wstrtime_s', '_strnset_s',
'lstrcpyA', '_mbsnbcpy_s', 'mlock', 'IsBadHugeWritePtr', 'copy', '_mbsnbcpy_s_l', 'wnsprintf',
'wcscpy', 'ShellExecute', 'CC_MD4', '_ultow', '_vsnwprintf_s_l', 'lstrcpyn', 'CC_SHA1_Final',
'vsnprintf', '_mbsnbset_s', '_i64tow', 'SHA256_Init', 'wvnsprintf', 'RegCreateKey', 'strtok_s',
'_wctime32_s', '_i64toa', 'CC_MD5_Final', 'wmemcpy', 'WinExec', 'CreateDirectory*',
'CC_SHA256_Update', '_vsnprintf_s_l', 'jrand48', 'wsprintf', 'ldap_rename_ext_s', 'filebuf.open',
'_wsystem', 'SHA256_Update', '_cwscanf_s', 'wsprintfW', '_sntscanf', '_splitpath', 'fscanf_s',
'strpbrk', 'wcstombs_s', 'wscanf', '_mbsnbcat_s_l', 'strcpynA', 'pthread_cond_init', 'wcsrtombs_s',
'_wsopen_s', 'CharToOemBuffA', 'RIPEMD160_Update', '_tscanf', 'HMAC', 'StrCCpy', 'Connection.connect',
'lstrcatn', '_mbstok', '_mbsncpy', 'CC_SHA384_Update', 'create_directories', 'pthread_mutex_unlock',
'CFile.Open', 'connect', '_vswprintf_s_l', '_snscanf_s_l', 'fputc', '_wscanf_s', '_snprintf_s_l',
'strtok', '_strtok_s_l', 'lstrcatA', 'snwscanf', 'pthread_mutex_init', 'fputs', 'CC_SHA384_Init',
'_putenv_s', 'CharToOemBuffW', 'pthread_mutex_trylock', '__wcstoul_internal', '_memccpy',
'_snwprintf_s_l', '_strncpy*', 'wmemset', 'MD4_Init', '*RC4*', 'strcpyW', '_ecvt_s', 'memcpy_s',
'erand48', 'IsBadHugeReadPtr', 'strcpyA', 'HeapReAlloc', 'memcpy', 'ldap_rename_ext', 'fopen_s',
'srandom', '_cgetws_s', '_makepath', 'SHA256_Final', 'remove', '_mbsupr_s', 'pthread_mutexattr_init',
'__wcstold_internal', 'StrCpy', 'ldap_delete', 'wmemmove_s', '_mkdir', 'strcat', '_cscanf_s_l',
'StrCAdd', 'swprintf_s', '_strnset_s_l', 'close', 'ldap_delete_ext_s', 'ldap_modrdn', 'strchr',
'_gmtime32_s', '_ftcscat', 'lstrcatnA', '_tcsncat', 'OemToChar', 'mutex', 'CharToOem', 'strcpy_s',
'lstrcatnW', '_wscanf_s_l', '__lxstat64', 'memalign', 'MD2_Init', 'StrCatBuffW', 'StrCpyN', 'CC_MD5',
'StrCpyA', 'StrCatBuffA', 'StrCpyW', 'tmpnam_r', '_vsnprintf', 'strcatA', 'StrCpyNW', '_mbsnbset_s_l',
'EVP_DigestInit', '_stscanf', 'CC_MD2', '_tcscat', 'StrCpyNA', 'xmalloc', '_tcslen', '*MD4*',
'vasprintf', 'strxfrm', 'chmod', 'ldap_add_ext', 'alloca', '_snscanf_s', 'IsBadWritePtr', 'swscanf_s',
'wmemcpy_s', '_itoa', '_ui64toa_s', 'EVP_DigestUpdate', '__wcstol_internal', '_itow', 'StrNCatW',
'strncat_s', 'ualstrcpy', 'execvp', '_mbccat', 'EVP_MD_CTX_init', 'assert', 'ofstream.write',
'ldap_add', '_sscanf_s_l', 'drand48', 'CharToOemW', 'swscanf', '_itow_s', 'RIPEMD160_Init',
'CopyMemory', 'initstate', 'getpwuid', 'vsprintf', '_fcvt_s', 'CharToOemA', 'setuid', 'malloc',
'StrCatNA', 'strcat_s', 'srand', 'getwd', '_controlfp_s', 'olestrcpy', '__wcstod_internal',
'_mbsnbcat', 'lstrncat', 'des_*', 'CC_SHA224_Init', 'set*', 'vsprintf_s', 'SHA1_Final', '_umask_s',
'gets', 'setstate', 'wvsprintfW', 'LoadLibraryEx', 'ofstream.open', 'calloc', '_mbstrlen',
'_cgets_s', '_sopen_s', 'IsBadStringPtr', 'wcsncat_s', 'add*', 'nrand48', 'create_directory',
'ldap_search_ext', '_i64toa_s', '_ltoa_s', '_cwscanf_s_l', 'wmemcmp', '__lxstat', 'lstrlen',
'pthread_condattr_destroy', '_ftcscpy', 'wcstok_s', '__xmknod', 'pthread_attr_destroy', 'sethostname',
'_fscanf_s_l', 'StrCatN', 'RegEnumKey', '_tcsncpy', 'strcatW', 'AfxLoadLibrary', 'setenv', 'tmpnam',
'_mbsncat_s_l', '_wstrdate_s', '_wctime64_s', '_i64tow_s', 'CC_MD4_Update', 'ldap_add_s', '_umask',
'CC_SHA1_Update', '_wcsset_s_l', '_mbsupr_s_l', 'strstr', '_tsplitpath', 'memmove', '_tcscpy',
'vsnprintf_s', 'strcmp', 'wvnsprintfW', 'tmpfile', 'ldap_modify', '_mbsncat*', 'mrand48', 'sizeof',
'StrCatA', '_ltow_s', '*desencrypt*', 'StrCatW', '_mbccpy', 'CC_MD2_Init', 'RIPEMD160', 'ldap_search',
'CC_SHA224', 'mbsrtowcs_s', 'update', 'ldap_delete_s', 'getnameinfo', '*RC5*', '_wcsncat_s_l',
'DriverManager.getConnection', 'socket', '_cscanf_s', 'ldap_modrdn_s', '_wopen', 'CC_SHA256_Final',
'_snwprintf*', 'MD2_Update', 'strcpy', '_strncat_s_l', 'CC_MD5_Init', 'mbscpy', 'wmemmove',
'LoadLibraryW', '_mbslen', '*alloc', '_mbsncat_s', 'LoadLibraryA', 'fopen', 'StrLen', 'delete',
'_splitpath_s', 'CreateFileTransacted*', 'MD4_Final', '_open', 'CC_SHA384', 'wcslen', 'wcsncat',
'_mktemp_s', 'pthread_mutexattr_destroy', '_snwscanf_s', '_strset_s', '_wcsncpy_s_l', 'CC_MD2_Final',
'_mbstok_s_l', 'wctomb_s', 'MySQL_Driver.connect', '_snwscanf_s_l', '*_des_*', 'LoadLibrary',
'_swscanf_s_l', 'ldap_compare_s', 'ldap_compare_ext', '_strlwr_s', 'GetEnvironmentVariable',
'cuserid', '_mbscat_s', 'strspn', '_mbsncpy_s', 'ldap_modrdn2', 'LeaveCriticalSection', 'CopyFile',
'getpwd', 'sscanf', 'creat', 'RegSetValue', 'ldap_modrdn2_s', 'CFile.Close', '*SHA_1*',
'pthread_cond_destroy', 'CC_SHA512_Update', '*RC2*', 'StrNCatA', '_mbsnbcpy', '_mbsnset_s',
'crypt', 'excel', '_vstprintf', 'xstrdup', 'wvsprintfA', 'getopt', 'mkstemp', '_wcsnset_s',
'_stprintf', '_sntprintf', 'tmpfile_s', 'OpenDocumentFile', '_mbsset_s_l', '_strset_s_l',
'_strlwr_s_l', 'ifstream.open', 'xcalloc', 'StrNCpyA', '_wctime_s', 'CC_SHA224_Update', '_ctime64_s',
'MoveFile', 'chown', 'StrNCpyW', 'IsBadReadPtr', '_ui64tow_s', 'IsBadCodePtr', 'getc',
'OracleCommand.ExecuteOracleScalar', 'AccessDataSource.Insert', 'IDbDataAdapter.FillSchema',
'IDbDataAdapter.Update', 'GetWindowText*', 'SendMessage', 'SqlCommand.ExecuteNonQuery', 'streambuf.sgetc',
'streambuf.sgetn', 'OracleCommand.ExecuteScalar', 'SqlDataSource.Update', '_Read_s', 'IDataAdapter.Fill',
'_wgetenv', '_RecordsetPtr.Open*', 'AccessDataSource.Delete', 'Recordset.Open*', 'filebuf.sbumpc', 'DDX_*',
'RegGetValue', 'fstream.read*', 'SqlCeCommand.ExecuteResultSet', 'SqlCommand.ExecuteXmlReader', 'main',
'streambuf.sputbackc', 'read', 'm_lpCmdLine', 'CRichEditCtrl.Get*', 'istream.putback',
'SqlCeCommand.ExecuteXmlReader', 'SqlCeCommand.BeginExecuteXmlReader', 'filebuf.sgetn',
'OdbcDataAdapter.Update', 'filebuf.sgetc', 'SQLPutData', 'recvfrom', 'OleDbDataAdapter.FillSchema',
'IDataAdapter.FillSchema', 'CRichEditCtrl.GetLine', 'DbDataAdapter.Update', 'SqlCommand.ExecuteReader',
'istream.get', 'ReceiveFrom', '_main', 'fgetc', 'DbDataAdapter.FillSchema', 'kbhit', 'UpdateCommand.Execute*',
'Statement.execute', 'fgets', 'SelectCommand.Execute*', 'getch', 'OdbcCommand.ExecuteNonQuery',
'CDaoQueryDef.Execute', 'fstream.getline', 'ifstream.getline', 'SqlDataAdapter.FillSchema',
'OleDbCommand.ExecuteReader', 'Statement.execute*', 'SqlCeCommand.BeginExecuteNonQuery',
'OdbcCommand.ExecuteScalar', 'SqlCeDataAdapter.Update', 'sendmessage', 'mysqlpp.DBDriver', 'fstream.peek',
'Receive', 'CDaoRecordset.Open', 'OdbcDataAdapter.FillSchema', '_wgetenv_s', 'OleDbDataAdapter.Update',
'readsome', 'SqlCommand.BeginExecuteXmlReader', 'recv', 'ifstream.peek', '_Main', '_tmain', '_Readsome_s',
'SqlCeCommand.ExecuteReader', 'OleDbCommand.ExecuteNonQuery', 'fstream.get', 'IDbCommand.ExecuteScalar',
'filebuf.sputbackc', 'IDataAdapter.Update', 'streambuf.sbumpc', 'InsertCommand.Execute*', 'RegQueryValue',
'IDbCommand.ExecuteReader', 'SqlPipe.ExecuteAndSend', 'Connection.Execute*', 'getdlgtext', 'ReceiveFromEx',
'SqlDataAdapter.Update', 'RegQueryValueEx', 'SQLExecute', 'pread', 'SqlCommand.BeginExecuteReader', 'AfxWinMain',
'getchar', 'istream.getline', 'SqlCeDataAdapter.Fill', 'OleDbDataReader.ExecuteReader', 'SqlDataSource.Insert',
'istream.peek', 'SendMessageCallback', 'ifstream.read*', 'SqlDataSource.Select', 'SqlCommand.ExecuteScalar',
'SqlDataAdapter.Fill', 'SqlCommand.BeginExecuteNonQuery', 'getche', 'SqlCeCommand.BeginExecuteReader', 'getenv',
'streambuf.snextc', 'Command.Execute*', '_CommandPtr.Execute*', 'SendNotifyMessage', 'OdbcDataAdapter.Fill',
'AccessDataSource.Update', 'fscanf', 'QSqlQuery.execBatch', 'DbDataAdapter.Fill', 'cin',
'DeleteCommand.Execute*', 'QSqlQuery.exec', 'PostMessage', 'ifstream.get', 'filebuf.snextc',
'IDbCommand.ExecuteNonQuery', 'Winmain', 'fread', 'getpass', 'GetDlgItemTextCCheckListBox.GetCheck',
'DISP_PROPERTY_EX', 'pread64', 'Socket.Receive*', 'SACommand.Execute*', 'SQLExecDirect',
'SqlCeDataAdapter.FillSchema', 'DISP_FUNCTION', 'OracleCommand.ExecuteNonQuery', 'CEdit.GetLine',
'OdbcCommand.ExecuteReader', 'CEdit.Get*', 'AccessDataSource.Select', 'OracleCommand.ExecuteReader',
'OCIStmtExecute', 'getenv_s', 'DB2Command.Execute*', 'OracleDataAdapter.FillSchema', 'OracleDataAdapter.Fill',
'CComboBox.Get*', 'SqlCeCommand.ExecuteNonQuery', 'OracleCommand.ExecuteOracleNonQuery', 'mysqlpp.Query',
'istream.read*', 'CListBox.GetText', 'SqlCeCommand.ExecuteScalar', 'ifstream.putback', 'readlink',
'CHtmlEditCtrl.GetDHtmlDocument', 'PostThreadMessage', 'CListCtrl.GetItemText', 'OracleDataAdapter.Update',
'OleDbCommand.ExecuteScalar', 'stdin', 'SqlDataSource.Delete', 'OleDbDataAdapter.Fill', 'fstream.putback',
'IDbDataAdapter.Fill', '_wspawnl', 'fwprintf', 'sem_wait', '_unlink', 'ldap_search_ext_sW', 'signal', 'PQclear',
'PQfinish', 'PQexec', 'PQresultStatus']
import re
import nltk
import warnings
warnings.filterwarnings('ignore')
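# symbolic_tokenize normalizes a C/C++ snippet into an abstracted token stream:
# language keywords, punctuation, and known library/API names (l_funcs) are kept
# as-is, every other identifier is renamed to FUNC<i> (when it is called) or VAR<i>,
# and numeric/string literals are replaced by the NUMBER / STRING placeholders.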
def symbolic_tokenize(code):
tokens = nltk.word_tokenize(code)
c_tokens = []
for t in tokens:
if t.strip() != '':
c_tokens.append(t.strip())
f_count = 1
var_count = 1
symbol_table = {}
final_tokens = []
for idx in range(len(c_tokens)):
t = c_tokens[idx]
if t in keywords:
final_tokens.append(t)
elif t in puncs:
final_tokens.append(t)
elif t in l_funcs:
final_tokens.append(t)
        elif idx + 1 < len(c_tokens) and c_tokens[idx + 1] == '(':  # guard against indexing past the last token
if t in keywords:
final_tokens.append(t + '(')
else:
if t not in symbol_table.keys():
symbol_table[t] = "FUNC" + str(f_count)
f_count += 1
final_tokens.append(symbol_table[t] + '(')
idx += 1
elif t.endswith('('):
t = t[:-1]
if t in keywords:
final_tokens.append(t + '(')
else:
if t not in symbol_table.keys():
symbol_table[t] = "FUNC" + str(f_count)
f_count += 1
final_tokens.append(symbol_table[t] + '(')
elif t.endswith('()'):
t = t[:-2]
if t in keywords:
final_tokens.append(t + '()')
else:
if t not in symbol_table.keys():
symbol_table[t] = "FUNC" + str(f_count)
f_count += 1
final_tokens.append(symbol_table[t] + '()')
elif re.match("^\"*\"$", t) is not None:
final_tokens.append("STRING")
elif re.match("^[0-9]+(\.[0-9]+)?$", t) is not None:
final_tokens.append("NUMBER")
elif re.match("^[0-9]*(\.[0-9]+)$", t) is not None:
final_tokens.append("NUMBER")
else:
if t not in symbol_table.keys():
symbol_table[t] = "VAR" + str(var_count)
var_count += 1
final_tokens.append(symbol_table[t])
return ' '.join(final_tokens)
import json
file_name = '../data/chrome_debian_with_slices_ggnn_similar.json'
data = json.load(open(file_name))
len(data)
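# Build one corpus per representation from the sliced dataset:
#   draper_data - whole-function code with its original tokenization (Draper-style)
#   vd_data     - API-call slices in the VulDeePecker style
#   s_api_data / s_arr_data / s_arith_data / s_ptr_data - SySeVR-style slices for
#       API calls, array usage, arithmetic expressions, and pointer usage
# Each sliced example records the index of its originating function in 'fidx'.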
draper_data = []
vd_data = []
s_api_data = []
s_arr_data = []
s_arith_data = []
s_ptr_data = []
call_num, array_num, arith_num, ptr_num, vd_num = 0, 0, 0, 0, 0
vde, calle, arre, arie, ptre = 0,0,0,0,0
at_least_one = set()
vuld_taken = set()
for didx, data_point in enumerate(data):
if didx % 1000 == 0:
print(didx, call_num, array_num, arith_num, ptr_num, vd_num)
label = data_point['label']
code_lines = data_point['code'].split('\n')
d_tok = data_point['tokenized']
draper_data.append({
'code': data_point['code'],
'label': label,
'tokenized': d_tok
})
# extract vd
if len(data_point['call_slices_vd']) > 0:
vd_num += 1
for sliced_lines in data_point['call_slices_vd']:
sliced_lines = sorted(sliced_lines)
code = []
for l in sliced_lines:
if l < len(code_lines):
code.append(code_lines[l - 1])
code = '\n'.join(code)
tokenized = symbolic_tokenize(code)
if tokenized is not None:
vd_data.append({
'code': code,
'leble': label,
'tokenized': tokenized,
'fidx': didx
})
vuld_taken.add(didx)
# at_least_one.add(didx)
# extract s_api
if len(data_point['call_slices_sy']) > 0:
call_num += 1
for sliced_lines in data_point['call_slices_sy']:
sliced_lines = sorted(sliced_lines)
code = []
for l in sliced_lines:
if l < len(code_lines):
code.append(code_lines[l - 1])
code = '\n'.join(code)
tokenized = symbolic_tokenize(code)
if tokenized is not None:
s_api_data.append({
'code': code,
'leble': label,
'tokenized': tokenized,
'fidx': didx
})
at_least_one.add(didx)
# extract s_array
if len(data_point['array_slices_sy']) > 0:
array_num += 1
for sliced_lines in data_point['array_slices_sy']:
sliced_lines = sorted(sliced_lines)
code = []
for l in sliced_lines:
if l < len(code_lines):
code.append(code_lines[l - 1])
code = '\n'.join(code)
tokenized = symbolic_tokenize(code)
if tokenized is not None:
s_arr_data.append({
'code': code,
'leble': label,
'tokenized': tokenized,
'fidx': didx
})
at_least_one.add(didx)
# extract s_arith
if len(data_point['arith_slices_sy']) > 0:
arith_num += 1
for sliced_lines in data_point['arith_slices_sy']:
sliced_lines = sorted(sliced_lines)
code = []
for l in sliced_lines:
if l < len(code_lines):
code.append(code_lines[l - 1])
code = '\n'.join(code)
tokenized = symbolic_tokenize(code)
if tokenized is not None:
s_arith_data.append({
'code': code,
'leble': label,
'tokenized': tokenized,
'fidx': didx
})
at_least_one.add(didx)
# extract s_ptr
if len(data_point['ptr_slices_sy']) > 0:
ptr_num += 1
for sliced_lines in data_point['ptr_slices_sy']:
sliced_lines = sorted(sliced_lines)
code = []
for l in sliced_lines:
if l < len(code_lines):
code.append(code_lines[l - 1])
code = '\n'.join(code)
tokenized = symbolic_tokenize(code)
if tokenized is not None:
s_ptr_data.append({
'code': code,
'leble': label,
'tokenized': tokenized,
'fidx': didx
})
at_least_one.add(didx)
print('Total: \t%d\nVulDee:\t%d\nSY API:\t%d\nSY Arr:\t%d\nSY Ari:\t%d\nSY Ptr:\t%d' % (
len(data), vd_num, call_num, array_num, arith_num, ptr_num))
print(len(at_least_one))
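# The next two loops measure coverage: first, how many functions are covered by at
# least one SySeVR slice (at_least_one), split into vulnerable (vt) and non-vulnerable
# (nvt) versus uncovered (vnt / nvnt); then the same count is repeated for functions
# covered by a VulDeePecker slice (vuld_taken).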
taken_data_points = []
not_taken_points = []
vt, nvt, vnt, nvnt= 0,0,0,0
for didx, dp in enumerate(data):
label = dp['label']
if didx in at_least_one:
taken_data_points.append(dp)
if label == 1:
vt += 1
else:
nvt += 1
else:
not_taken_points.append(dp)
if label == 1:
vnt += 1
else:
nvnt += 1
pass
print(vt, nvt, vnt, nvnt)
print(len(data), len(at_least_one))
taken_data_points = []
not_taken_points = []
vt, nvt, vnt, nvnt= 0,0,0,0
for didx, dp in enumerate(data):
label = dp['label']
if didx in vuld_taken:
taken_data_points.append(dp)
if label == 1:
vt += 1
else:
nvt += 1
else:
not_taken_points.append(dp)
if label == 1:
vnt += 1
else:
nvnt += 1
pass
print(vt, nvt, vnt, nvnt)
print(len(data), len(at_least_one))
# draper_data = []
# vd_data = []
# s_api_data = []
# s_arr_data = []
# s_arith_data = []
# s_ptr_data = []
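# Write each corpus to its own JSON file under ../data/, one file per representation
# (Draper, VulDeePecker, and the four SySeVR slice types).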
project = 'chrome_debian_ggnn_similar.json'
draper_file = open('../data/draper/' + project, 'w')
json.dump(draper_data, draper_file)
print(len(draper_data))
draper_file.close()
vul_file = open('../data/VulDeePecker/' + project, 'w')
json.dump(vd_data, vul_file)
print(len(vd_data))
vul_file.close()
api_file = open('../data/SySeVR/API_function_call-' + project, 'w')
json.dump(s_api_data, api_file)
print(len(s_api_data))
api_file.close()
arr_file = open('../data/SySeVR/Array_usage-' + project, 'w')
json.dump(s_arr_data, arr_file)
print(len(s_arr_data))
arr_file.close()
arith_file = open('../data/SySeVR/Arithmetic_expression-' + project, 'w')
json.dump(s_arith_data, arith_file)
print(len(s_arith_data))
arith_file.close()
ptr_file = open('../data/SySeVR/Pointer_usage-' + project, 'w')
json.dump(s_ptr_data, ptr_file)
print(len(s_ptr_data))
ptr_file.close()
s_arith_data[0]
len(data)
```
print(len(draper_data))
draper_file.close()
vul_file = open('../data/VulDeePecker/' + project, 'w')
json.dump(vd_data, vul_file)
print(len(vd_data))
vul_file.close()
api_file = open('../data/SySeVR/API_function_call-' + project, 'w')
json.dump(s_api_data, api_file)
print(len(s_api_data))
api_file.close()
arr_file = open('../data/SySeVR/Array_usage-' + project, 'w')
json.dump(s_arr_data, arr_file)
print(len(s_arr_data))
arr_file.close()
arith_file = open('../data/SySeVR/Arithmetic_expression-' + project, 'w')
json.dump(s_arith_data, arith_file)
print(len(s_arith_data))
arith_file.close()
ptr_file = open('../data/SySeVR/Pointer_usage-' + project, 'w')
json.dump(s_ptr_data, ptr_file)
print(len(s_ptr_data))
ptr_file.close()
s_arith_data[0]
len(data)
# Vectorisation tests
Dot product is already tested in DotTest
## Distance
```
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import collections, lines, markers, path, patches
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
from geometry import dot, get_metric
def distance(u, v, geometry="spherical"):
'''
Calculate distance on the manifold between two pts
Inputs: u, v: two vectors, represented as np.arrays
Outputs: distance, a 1-D real number
'''
dotprod = dot(u,v,geometry)
# if np.abs(dotprod) > 1:
# print("distance: {}.{} = {:.3g}".format(u, v, dotprod))
if geometry == "spherical":
return np.arccos(dotprod)
elif geometry == "hyperbolic":
return np.arccosh(-dotprod)
elif geometry == "euclidean":
return np.sqrt(dot(u-v, u-v, geometry))
else:
print("geometry = {} is not a valid option! Try 'spherical' or 'hyperbolic'".format(geometry))
alpha = 0.1
beta = 1.1
a = np.array([[np.sinh(alpha), np.cosh(alpha)]]).T
b = np.array([[np.sinh(beta), np.cosh(beta)]]).T
print(a)
print(b)
distance(a, b, geometry="hyperbolic")
c = np.array([[-1, 0],[0, 1]]).dot(a)
d = np.array([[-1, 0],[0, 1]]).dot(b)
print(c)
print(d)
print(distance(a, c, geometry="hyperbolic"))
print(distance(a, d, geometry="hyperbolic"))
print(distance(b, c, geometry="hyperbolic"))
print(distance(b, d, geometry="hyperbolic"))
ab = np.hstack([a, b])
print(ab)
cd = np.hstack([c, d])
print(cd)
distance(ab, cd, geometry="hyperbolic")
distance(a, cd, geometry="hyperbolic")
def frechet_diff_vectorised(p_eval, points, geometry="spherical"):
'''
Calculates the differential to enable a gradient descent algorithm to find
the Karcher/Fréchet mean of a set of points.
Inputs:
p_eval: Point at which to evaluate the derivative (usually a guess at
the mean). (d+1)-dimensional vector, expressed in ambient space
coordinates.
        points: List of points which the derivative is calculated with respect to.
(d+1)-dimensional vector, expressed in ambient space
coordinates.
geometry: string specifying which metric and distance function to use.
Outputs:
Derivative: (d+1)-dimensional vector, expressed in ambient space
coordinates.
Note: should vectorise to remove loop over points.
'''
metric = get_metric(p_eval.shape[0], geometry)
# update = np.zeros([p_eval.shape[0], 1])
coeffs = -2.*distance(p_eval, points, geometry)
print("numerator = ",coeffs)
if geometry == "spherical":
coeffs /= np.sqrt(1.-dot(p_eval, points, geometry)**2)+ 1.e-10
elif geometry == "hyperbolic":
coeffs /= np.sqrt(dot(p_eval, points, geometry)**2-1.)+ 1.e-10
print("coeffs =",coeffs)
print("points =", points)
print("coeffs*points = ", coeffs*points)
return np.atleast_2d(np.sum(coeffs*points, axis=1)).T
frechet_diff_vectorised(a, cd, geometry="hyperbolic")
def frechet_diff(p_eval, points, geometry="spherical"):
'''
Calculates the differential to enable a gradient descent algorithm to find
the Karcher/Fréchet mean of a set of points.
Inputs:
p_eval: Point at which to evaluate the derivative (usually a guess at
the mean). (d+1)-dimensional vector, expressed in ambient space
coordinates.
        points: List of points which the derivative is calculated with respect to.
(d+1)-dimensional vector, expressed in ambient space
coordinates.
geometry: string specifying which metric and distance function to use.
Outputs:
Derivative: (d+1)-dimensional vector, expressed in ambient space
coordinates.
Note: should vectorise to remove loop over points.
'''
metric = get_metric(p_eval.shape[0], geometry)
update = np.zeros([p_eval.shape[0], 1])
# print("frechet_diff: p_eval = {}, points = {}".format(p_eval, points))
for xi in points:
if np.array_equal(p_eval,xi):
continue
# print("frechet_diff: xi =", xi)
coeff = -2.*distance(p_eval, xi, geometry)
print("numerator =", coeff)
if geometry == "spherical":
coeff /= np.sqrt(1.-dot(p_eval, xi, geometry)**2)+ 1.e-10
elif geometry == "hyperbolic":
coeff /= np.sqrt(dot(p_eval, xi, geometry)**2-1.)
print("frechet_diff: coeff =", coeff)
print("coeffs*point = ",coeff*xi)
# update += coeff*metric.dot(xi)
update += coeff*xi
print("frechet_diff: update = {}".format(update))
return update
frechet_diff(a, [c,d], geometry="hyperbolic")
frechet_diff_vectorised(a, [c,d], geometry="hyperbolic")
```
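To make the test self-checking, a small consistency assertion could be appended. This is a sketch that reuses the `a`, `c`, `d`, and `cd` objects defined above and assumes the `geometry` module is importable; the two gradients should agree up to the tiny epsilon terms added to the denominators.
```
# Sketch: the looped and vectorised Fréchet-mean gradients should match.
loop_grad = frechet_diff(a, [c, d], geometry="hyperbolic")
vec_grad = frechet_diff_vectorised(a, cd, geometry="hyperbolic")
print(np.allclose(loop_grad, vec_grad, atol=1e-6)) # expected: True
```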
<img src="images/usm.jpg" width="480" height="240" align="left"/>
# MAT281 - Lab N°01
## Class objectives
* Reinforce basic Python concepts.
## Contents
* [Problem 01](#p1)
* [Problem 02](#p2)
* [Problem 03](#p3)
* [Problem 04](#p4)
<a id='p1'></a>
## Problem 01
### a) Computing the number $\pi$
In the 17th and 18th centuries, James Gregory and Gottfried Leibniz discovered an infinite series that can be used to compute $\pi$:
$$\pi = 4 \sum_{k=1}^{\infty}\dfrac{(-1)^{k+1}}{2k-1} = 4(1-\dfrac{1}{3}+\dfrac{1}{5}-\dfrac{1}{7} + ...) $$
Write a program that estimates the value of $\pi$ using Leibniz's method, where the input is an integer $n$ indicating how many terms of the sum to use.
* **Example**: *calcular_pi(3)* = 3.466666666666667, *calcular_pi(1000)* = 3.140592653839794
```
def calcular_pi(n:int)->float:
    """
    calcular_pi(n)
    Approximate the value of pi using Leibniz's method.
    Parameters
    ----------
    n : int
        Number of terms.
    Returns
    -------
    output : float
        Approximate value of pi.
    Examples
    --------
    >>> calcular_pi(3)
    3.466666666666667
    >>> calcular_pi(1000)
    3.140592653839794
    """
    pi = 0 # initial value
    for k in range(1,n+1):
        numerador = (-1)**(k+1) # numerator of the k-th term
        denominador = 2*k-1  # denominator of the k-th term
        pi+=numerador/denominador # add the k-th term
    return 4*pi
# Access the documentation
help(calcular_pi)
# example 01
calcular_pi(3)
# example 02
calcular_pi(1000)
```
**Note**:
* Observe that running `calcular_pi(3.0)` raises an error ... why?
* In the labs you are not required to be this meticulous with the documentation.
* First define the code and run the examples; document it properly afterwards.
### b) Computing the number $e$
Euler made several contributions related to $e$, but it was not until 1748, when he published his **Introductio in analysin infinitorum**, that he gave a definitive treatment of the ideas around $e$. There he showed that:
$$e = \sum_{k=0}^{\infty}\dfrac{1}{k!} = \dfrac{1}{0!}+\dfrac{1}{1!}+\dfrac{1}{2!}+\dfrac{1}{3!} + ... $$
Write a program that estimates the value of $e$ using Euler's method, where the input is an integer $n$ indicating how many terms of the sum to use.
* **Example**: *calcular_e(3)* = 2.5, *calcular_e(1000)* = 2.7182818284590455
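A possible solution sketch (not part of the original handout), mirroring the structure of `calcular_pi` above:
```
def calcular_e(n:int)->float:
    """Approximate e using the first n terms of the series sum(1/k!)."""
    e = 0
    factorial = 1          # running value of k!
    for k in range(n):
        if k > 0:
            factorial *= k # build k! incrementally
        e += 1/factorial
    return e

calcular_e(3)    # 2.5
calcular_e(1000) # 2.7182818284590455
```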
<a id='p2'></a>
## Problem 02
Let $\sigma(n)$ be defined as the sum of the proper divisors of $n$ (the numbers smaller than $n$ that divide $n$).
[Amicable numbers](https://en.wikipedia.org/wiki/Amicable_numbers) are positive integers $n_1$ and $n_2$ such that the sum of the proper divisors of one equals the other number and vice versa, that is, $\sigma(n_1)=n_2$ and $\sigma(n_2)=n_1$.
For example, the numbers 220 and 284 are amicable.
* the proper divisors of 220 are 1, 2, 4, 5, 10, 11, 20, 22, 44, 55 and 110; therefore $\sigma(220) = 284$.
* the proper divisors of 284 are 1, 2, 4, 71 and 142; hence $\sigma(284) = 220$.
Implement a function called `amigos` whose input is two natural numbers $n_1$ and $n_2$ and whose output indicates whether the numbers are amicable or not.
* **Example**: *amigos(220,284)* = True, *amigos(6,5)* = False
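One possible implementation sketch (not the official solution):
```
def sigma(n:int)->int:
    """Sum of the proper divisors of n."""
    return sum(d for d in range(1, n) if n % d == 0)

def amigos(n1:int, n2:int)->bool:
    """Return True if n1 and n2 are amicable numbers."""
    return sigma(n1) == n2 and sigma(n2) == n1

amigos(220, 284) # True
amigos(6, 5)     # False
```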
<a id='p3'></a>
## Problem 03
The [Collatz conjecture](https://en.wikipedia.org/wiki/Collatz_conjecture), also known as the $3n+1$ conjecture or Ulam's conjecture (among other names), was stated by the mathematician Lothar Collatz in 1937 and remains unresolved to this day.
Consider the following operation, applicable to any positive integer:
* If the number is even, divide it by 2.
* If the number is odd, multiply it by 3 and add 1.
The conjecture states that we always reach 1 (and therefore the cycle 4, 2, 1) no matter which number we start from.
Implement a function called `collatz` whose input is a positive natural number $N$ and whose output is the sequence of numbers until reaching 1.
* **Example**: *collatz(9)* = [9, 28, 14, 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
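A possible sketch (not the official solution):
```
def collatz(n:int)->list:
    """Return the Collatz sequence starting at n and ending at 1."""
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3*n + 1
        seq.append(n)
    return seq

collatz(9) # [9, 28, 14, 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
```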
<a id='p4'></a>
## Problem 04
[Goldbach's conjecture](https://en.wikipedia.org/wiki/Goldbach%27s_conjecture) is one of the oldest open problems in mathematics. In 1921, G.H. Hardy remarked in his famous address to the Copenhagen Mathematical Society that Goldbach's conjecture is probably not only one of the hardest unsolved problems in number theory, but in all of mathematics. Its statement is the following:
$$\textrm{Every even number greater than 2 can be written as the sum of two primes - Christian Goldbach (1742)}$$
Implement a function called `goldbach` whose input is a positive natural number $N$ and whose output is two primes ($N1$ and $N2$) such that $N1+N2=N$.
* **Example**: goldbach(4) = (2,2), goldbach(6) = (3,3), goldbach(8) = (3,5)
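A possible sketch (not the official solution); `is_prime` is a small helper introduced here only for illustration:
```
def is_prime(n:int)->bool:
    """Trial-division primality test."""
    if n < 2:
        return False
    for d in range(2, int(n**0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach(n:int)->tuple:
    """Return primes (n1, n2) with n1 + n2 = n, for even n > 2."""
    for n1 in range(2, n//2 + 1):
        if is_prime(n1) and is_prime(n - n1):
            return (n1, n - n1)

goldbach(4) # (2, 2)
goldbach(6) # (3, 3)
goldbach(8) # (3, 5)
```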
# <font color='blue'>Data Science Academy - Python Fundamentals - Chapter 3</font>
## Download: http://github.com/dsacademybr
```
# Python language version
from platform import python_version
print('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())
```
## Exercises - Methods and Functions
```
# Exercise 1 - Create a function that prints the sequence of even numbers between 1 and 20 (the function takes no
# parameters), then call the function to list the numbers
def listaPar():
    for i in range(2, 21, 2):
        print(i)
listaPar()
# Exercise 2 - Create a function that receives a string as an argument and returns the same string in upper case.
# Call the function, passing a string as the parameter
def listaString(texto):
    print(texto.upper())
    return
listaString('Rumo à Análise de Dados')
# Exercise 3 - Create a function that receives a list of 4 elements as a parameter, adds 2 elements to the list and
# prints the list
def novaLista(lista):
    print(lista.append(5))
    print(lista.append(6))
lista1 = [1, 2, 3, 4]
novaLista(lista1)
print(lista1)
# Exercise 4 - Create a function that receives one formal argument and a possible list of elements. Call the function
# twice: first with only 1 element and, in the second call, with several elements
def printNum( arg1, *lista ):
    print (arg1)
    for i in lista:
        print (i)
    return;
# Call the function
printNum( 100 )
printNum( 'A', 'B', 'C' )
# Exercise 5 - Create an anonymous function and assign its return value to a variable called soma. The expression takes 2
# numbers as parameters and returns their sum
soma = lambda arg1, arg2: arg1 + arg2
print ("A soma é : ", soma( 452, 298 ))
# Exercise 6 - Run the code below and make sure you understand the difference between global and local variables
total = 0
def soma( arg1, arg2 ):
    total = arg1 + arg2;
    print ("Dentro da função o total é: ", total)
    return total;
soma( 10, 20 );
print ("Fora da função o total é: ", total)
# Exercise 7 - Below you find a list of temperatures in degrees Celsius
# Create an anonymous function that converts each temperature to Fahrenheit
# Hint: to solve this exercise, create your lambda function inside another function
# (which will be studied in the next chapter). That lets you apply your function to every element of the list
# How do you find the formula that converts Celsius to Fahrenheit? Look it up!
Celsius = [39.2, 36.5, 37.3, 37.8]
Fahrenheit = map(lambda x: (float(9)/5)*x + 32, Celsius)
print (list(Fahrenheit))
# Exercise 8
# Create a dictionary and list all of its methods and attributes
dic = {'k1': 'Natal', 'k2': 'Recife'}
dir(dic)
import pandas as pd
pd.__version__
# Exercise 9
# Below you find the import of Pandas, one of the main Python packages for data analysis.
# Look carefully at all the available methods. You will use one of them in the next exercise.
import pandas as pd
dir(pd)
# ************* Challenge ************* (search the Python documentation)
# Exercise 10 - Create a function that receives the file below as an argument and returns a descriptive statistical
# summary of the file. Hint: use Pandas and one of its methods, describe()
# File: "binary.csv"
import pandas as pd
file_name = "binary.csv"
def retornaArq(file_name):
    df = pd.read_csv(file_name)
    return df.describe()
retornaArq(file_name)
```
# End
### Thank you
### Visit the Data Science Academy Blog - <a href="http://blog.dsacademy.com.br">Blog DSA</a>
```
# default_exp models.fnfm
```
# FNFM
> A pytorch implementation of Field-aware Neural Factorization Machine.
```
#hide
from nbdev.showdoc import *
from fastcore.nb_imports import *
from fastcore.test import *
#export
import numpy as np
import torch
from recohut.models.layers.common import FeaturesLinear, MultiLayerPerceptron
#export
class FieldAwareFactorizationMachine(torch.nn.Module):
def __init__(self, field_dims, embed_dim):
super().__init__()
self.num_fields = len(field_dims)
self.embeddings = torch.nn.ModuleList([
torch.nn.Embedding(sum(field_dims), embed_dim) for _ in range(self.num_fields)
])
        self.offsets = np.array((0, *np.cumsum(field_dims)[:-1]), dtype=np.int64)  # np.long was removed in recent NumPy versions
for embedding in self.embeddings:
torch.nn.init.xavier_uniform_(embedding.weight.data)
def forward(self, x):
"""
:param x: Long tensor of size ``(batch_size, num_fields)``
"""
x = x + x.new_tensor(self.offsets).unsqueeze(0)
xs = [self.embeddings[i](x) for i in range(self.num_fields)]
ix = list()
for i in range(self.num_fields - 1):
for j in range(i + 1, self.num_fields):
ix.append(xs[j][:, i] * xs[i][:, j])
ix = torch.stack(ix, dim=1)
return ix
class FNFM(torch.nn.Module):
"""
A pytorch implementation of Field-aware Neural Factorization Machine.
Reference:
L Zhang, et al. Field-aware Neural Factorization Machine for Click-Through Rate Prediction, 2019.
"""
def __init__(self, field_dims, embed_dim, mlp_dims, dropouts):
super().__init__()
self.linear = FeaturesLinear(field_dims)
self.ffm = FieldAwareFactorizationMachine(field_dims, embed_dim)
self.ffm_output_dim = len(field_dims) * (len(field_dims) - 1) // 2 * embed_dim
self.bn = torch.nn.BatchNorm1d(self.ffm_output_dim)
self.dropout = torch.nn.Dropout(dropouts[0])
self.mlp = MultiLayerPerceptron(self.ffm_output_dim, mlp_dims, dropouts[1])
def forward(self, x):
"""
:param x: Long tensor of size ``(batch_size, num_fields)``
"""
cross_term = self.ffm(x).view(-1, self.ffm_output_dim)
cross_term = self.bn(cross_term)
cross_term = self.dropout(cross_term)
x = self.linear(x) + self.mlp(cross_term)
return torch.sigmoid(x.squeeze(1))
```
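A minimal smoke test is sketched below. It is not part of the original notebook and assumes the `recohut` layers imported above behave like their `torchfm` counterparts (i.e., `MultiLayerPerceptron` ends in a single output unit); the field dimensions and batch size are made-up values.
```
# Sketch: instantiate the model on three hypothetical categorical fields.
field_dims = [10, 20, 30]
model = FNFM(field_dims, embed_dim=4, mlp_dims=(16, 16), dropouts=(0.2, 0.2))
x = torch.randint(0, 10, (8, len(field_dims))) # batch of 8, one index per field
y = model(x)
print(y.shape) # expected: torch.Size([8]), probabilities in (0, 1)
```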
> **References:**
- L Zhang, et al. Field-aware Neural Factorization Machine for Click-Through Rate Prediction, 2019.
- https://github.com/rixwew/pytorch-fm/blob/master/torchfm/model/fnfm.py
```
#hide
%reload_ext watermark
%watermark -a "Sparsh A." -m -iv -u -t -d -p recohut
```
**Important**: Click on "*Kernel*" > "*Restart Kernel and Run All*" *after* finishing the exercises in [JupyterLab <img height="12" style="display: inline-block" src="static/link_to_jp.png">](https://jupyterlab.readthedocs.io/en/stable/) (e.g., in the cloud on [MyBinder <img height="12" style="display: inline-block" src="static/link_to_mb.png">](https://mybinder.org/v2/gh/webartifex/intro-to-python/master?urlpath=lab/tree/08_mfr_02_exercises.ipynb)) to ensure that your solution runs top to bottom *without* any errors
# Chapter 8: Map, Filter, & Reduce
## Coding Exercises
The exercises below assume that you have read [Chapter 8 <img height="12" style="display: inline-block" src="static/link_to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/master/08_mfr_00_content.ipynb) in the book.
The `...`'s in the code cells indicate where you need to fill in code snippets. The number of `...`'s within a code cell give you a rough idea of how many lines of code are needed to solve the task. You should not need to create any additional code cells for your final solution. However, you may want to use temporary code cells to try out some ideas.
### Packing & Unpacking with Functions (continued)
**Q1.1**: Copy your solution to **Q2.10** from the [Chapter 7 Exercises <img height="12" style="display: inline-block" src="static/link_to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/master/07_sequences_02_exercises.ipynb#Packing-&-Unpacking-with-Functions) into the code cell below!
```
import collections.abc as abc
def product(*args, ...):
"""Multiply all arguments."""
...
...
...
...
...
...
return ...
```
**Q1.2**: Verify that all test cases below work (i.e., the `assert` statements must *not* raise an `AssertionError`)!
```
assert product(42) == 42
assert product(2, 5, 10) == 100
assert product(2, 5, 10, start=2) == 200
one_hundred = [2, 5, 10]
assert product(one_hundred) == 100
assert product(*one_hundred) == 100
```
**Q1.3**: Verify that `product()` raises a `TypeError` when called without any arguments!
```
product()
```
This implementation of `product()` is convenient to use, in particular, because we can pass it any *collection* object with or without *unpacking* it.
However, `product()` suffers from one last flaw: We cannot pass it a **stream** of data, as modeled, for example, with an *iterator* object that produces elements on a one-by-one basis.
**Q1.4**: Click through the following example!
The [*stream.py* <img height="12" style="display: inline-block" src="static/link_to_gh.png">](https://github.com/webartifex/intro-to-python/blob/master/stream.py) module in the book's repository provides a `make_finite_stream()` function. It is a *factory* function creating objects of type `generator` that we use to model *streaming* data.
```
from stream import make_finite_stream
stream = make_finite_stream()
stream
type(stream)
```
Being a `generator` object, `stream` is also an `Iterator` in the abstract sense.
```
isinstance(stream, abc.Iterator)
```
*Iterators* are good for only *one* thing: Giving us the "next" element in a series of possibly *infinitely* many objects. While the `stream` object is finite (i.e., execute the next code cell until you see a `StopIteration` exception), ...
```
next(stream)
```
... it has *no* concept of a "length:" The built-in [len() <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://docs.python.org/3/library/functions.html#len) function raises a `TypeError`.
```
len(stream)
```
We can use the built-in [list() <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://docs.python.org/3/library/functions.html#func-list) constructor to *materialize* the elements. However, in a real-world scenario, these may *not* fit into our machine's memory!
```
list(stream)
```
To be more realistic, `make_finite_stream()` creates `generator` objects producing a varying number of elements.
```
list(make_finite_stream())
list(make_finite_stream())
list(make_finite_stream())
```
Let's see what happens if we pass an *iterator*, as created by `make_finite_stream()`, instead of a materialized *collection*, like `one_hundred`, to `product()`.
```
product(make_finite_stream())
```
**Q1.5**: What line causes the `TypeError`? What line is really the problem in `product()`? Hint: These may be different lines. Describe what happens on each line in the function's body until the exception is raised!
< your answer >
**Q1.6**: Adapt `product()` one last time to make it work with *iterators* as well!
Hints: This task is as easy as replacing `Collection` with something else. Which of the three behaviors of *collections* do *iterators* also exhibit? You may want to look at the documentation of the built-in [max() <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://docs.python.org/3/library/functions.html#max), [min() <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://docs.python.org/3/library/functions.html#min), and [sum() <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://docs.python.org/3/library/functions.html#sum) functions: What kind of argument do they take?
```
def product(*args, ...):
"""Multiply all arguments."""
...
...
...
...
...
...
return ...
```
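One way the cell above could be completed (a sketch guided by the exercise's own test cases, not the book's sample solution) is to treat any single *iterable* argument as packed data and track whether anything was multiplied at all:
```
import collections.abc as abc

def product(*args, start=1):
    """Multiply all arguments."""
    # A single iterable argument is treated as "packed" data, which covers
    # lists, tuples, and streaming iterators alike.
    if len(args) == 1 and isinstance(args[0], abc.Iterable):
        args = args[0]
    result = start
    empty = True
    for arg in args:
        empty = False
        result *= arg
    if empty:
        raise TypeError("product() expected at least 1 argument, got 0")
    return result
```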
The final version of `product()` behaves like built-ins in edge cases (i.e., `sum()` also raises a `TypeError` when called without arguments), ...
```
product()
```
... works with the arguments passed either separately as *positional* arguments, *packed* together into a single *collection* argument, or *unpacked*, ...
```
product(42)
product(2, 5, 10)
product([2, 5, 10])
product(*[2, 5, 10])
```
... and can handle *streaming* data with *indefinite* "length."
```
product(make_finite_stream())
```
In real-world projects, the data science practitioner must decide if it is worthwhile to make a function usable in various different forms as we do in this exercise. This may be over-engineered.
Yet, two lessons are important to take away:
- It is a good idea to *mimic* the behavior of *built-ins*, and
- make functions capable of working with *streaming* data.
### Removing Outliers in Streaming Data
Let's say we are given a `list` object with random integers like `sample` below, and we want to calculate some basic statistics on them.
```
sample = [
45, 46, 40, 49, 36, 53, 49, 42, 25, 40, 39, 36, 38, 40, 40, 52, 36, 52, 40, 41,
35, 29, 48, 43, 42, 30, 29, 33, 55, 33, 38, 50, 39, 56, 52, 28, 37, 56, 45, 37,
41, 41, 37, 30, 51, 32, 23, 40, 53, 40, 45, 39, 99, 42, 34, 42, 34, 39, 39, 53,
43, 37, 46, 36, 45, 42, 32, 38, 57, 34, 36, 44, 47, 51, 46, 39, 28, 40, 35, 46,
41, 51, 41, 23, 46, 40, 40, 51, 50, 32, 47, 36, 38, 29, 32, 53, 34, 43, 39, 41,
40, 34, 44, 40, 41, 43, 47, 57, 50, 42, 38, 25, 45, 41, 58, 37, 45, 55, 44, 53,
82, 31, 45, 33, 32, 39, 46, 48, 42, 47, 40, 45, 51, 35, 31, 46, 40, 44, 61, 57,
40, 36, 35, 55, 40, 56, 36, 35, 86, 36, 51, 40, 54, 50, 49, 36, 41, 37, 48, 41,
42, 44, 40, 43, 51, 47, 46, 50, 40, 23, 40, 39, 28, 38, 42, 46, 46, 42, 46, 31,
32, 40, 48, 27, 40, 40, 30, 32, 25, 31, 30, 43, 44, 29, 45, 41, 63, 32, 33, 58,
]
len(sample)
```
**Q2.1**: `list` objects are **sequences**. What *four* behaviors do they always come with?
< your answer >
**Q2.2**: Write a function `mean()` that calculates the simple arithmetic mean of a given `sequence` with numbers!
Hints: You can solve this task with [built-in functions <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://docs.python.org/3/library/functions.html) only. A `for`-loop is *not* needed.
```
def mean(sequence):
...
sample_mean = mean(sample)
sample_mean
```
**Q2.3**: Write a function `std()` that calculates the [standard deviation <img height="12" style="display: inline-block" src="static/link_to_wiki.png">](https://en.wikipedia.org/wiki/Standard_deviation) of a `sequence` of numbers! Integrate your `mean()` version from before and the [sqrt() <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://docs.python.org/3/library/math.html#math.sqrt) function from the [math <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://docs.python.org/3/library/math.html) module in the [standard library <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://docs.python.org/3/library/index.html) provided to you below. Make sure `std()` calls `mean()` only *once* internally! Repeated calls to `mean()` would be a waste of computational resources.
Hints: Parts of the code are probably too long to fit within the suggested 79 characters per line. So, use *temporary variables* inside your function. Instead of a `for`-loop, you may want to use a *list comprehension* or, even better, a memoryless *generator expression*.
```
from math import sqrt
def std(sequence):
...
sample_std = std(sample)
sample_std
```
**Q2.4**: Complete `standardize()` below that takes a `sequence` of numbers and returns a `list` object with the **[z-scores <img height="12" style="display: inline-block" src="static/link_to_wiki.png">](https://en.wikipedia.org/wiki/Standard_score)** of these numbers! A z-score is calculated by subtracting the mean and dividing by the standard deviation. Re-use `mean()` and `std()` from before. Again, ensure that `standardize()` calls `mean()` and `std()` only *once*! Further, round all z-scores with the built-in [round() <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://docs.python.org/3/library/functions.html#round) function and pass on the keyword-only argument `digits` to it.
Hint: You may want to use a *list comprehension* instead of a `for`-loop.
```
def standardize(sequence, *, digits=3):
...
z_scores = standardize(sample)
```
The [pprint() <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://docs.python.org/3/library/pprint.html#pprint.pprint) function from the [pprint <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://docs.python.org/3/library/pprint.html) module in the [standard library <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://docs.python.org/3/library/index.html) allows us to "pretty print" long `list` objects compactly.
```
from pprint import pprint
pprint(z_scores, compact=True)
```
We know that `standardize()` works correctly if the resulting z-scores' mean and standard deviation approach `0` and `1` for a long enough `sequence`.
```
mean(z_scores), std(z_scores)
```
Even though `standardize()` calls `mean()` and `std()` only once each, `mean()` is called *twice*! That is so because `std()` internally also re-uses `mean()`!
**Q2.5.1**: Rewrite `std()` to take an optional keyword-only argument `seq_mean`, defaulting to `None`. If provided, `seq_mean` is used instead of the result of calling `mean()`. Otherwise, the latter is called.
Hint: You must check if `seq_mean` is still the default value.
```
def std(sequence, *, seq_mean=None):
...
```
`std()` continues to work as before.
```
sample_std = std(sample)
sample_std
```
**Q2.5.2**: Now, rewrite `standardize()` to pass on the return value of `mean()` to `std()`! In summary, `standardize()` calculates the z-scores for the numbers in the `sequence` with as few computational steps as possible.
```
def standardize(sequence, *, digits=3):
...
z_scores = standardize(sample)
mean(z_scores), std(z_scores)
```
**Q2.6**: With both `sample` and `z_scores` being materialized `list` objects, we can loop over pairs consisting of a number from `sample` and its corresponding z-score. Write a `for`-loop that prints out all the "outliers," as which we define numbers with an absolute z-score above `1.96`. There are *four* of them in the `sample`.
Hint: Use the [abs() <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://docs.python.org/3/library/functions.html#abs) and [zip() <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://docs.python.org/3/library/functions.html#zip) built-ins.
```
...
```
We provide a `stream` module with a `data` object that models an *infinite* **stream** of data (cf., the [*stream.py* <img height="12" style="display: inline-block" src="static/link_to_gh.png">](https://github.com/webartifex/intro-to-python/blob/master/stream.py) file in the repository).
```
from stream import data
data
```
`data` is of type `generator` and has *no* length.
```
type(data)
len(data)
```
Being a `generator`, it is an `Iterator` in the abstract sense ...
```
import collections.abc as abc
isinstance(data, abc.Iterator)
```
... and so the only thing we can do with it is to pass it to the built-in [next() <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://docs.python.org/3/library/functions.html#next) function and go over the numbers it streams one by one.
```
next(data)
```
**Q2.7**: What happens if you call `mean()` with `data` as the argument? What is the problem?
Hints: If you try it out, you may have to press the "Stop" button in the toolbar at the top. Your computer should *not* crash, but you will *have to* restart this Jupyter notebook with "Kernel" > "Restart" and import `data` again.
< your answer >
```
mean(data)
```
**Q2.8**: Write a function `take_sample()` that takes an `iterator` as its argument, like `data`, and creates a *materialized* `list` object out of its first `n` elements, defaulting to `1_000`!
Hints: [next() <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://docs.python.org/3/library/functions.html#next) and the [range() <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://docs.python.org/3/library/functions.html#func-range) built-ins may be helpful. You may want to use a *list comprehension* instead of a `for`-loop and write a one-liner. Audacious students may want to look at [islice() <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://docs.python.org/3/library/itertools.html#itertools.islice) in the [itertools <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://docs.python.org/3/library/itertools.html) module in the [standard library <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://docs.python.org/3/library/index.html).
```
def take_sample(iterator, *, n=1_000):
...
```
We take a `new_sample` from the stream of `data`, and its statistics are similar to the initial `sample`.
```
new_sample = take_sample(data)
len(new_sample)
mean(new_sample)
std(new_sample)
```
**Q2.9**: Convert `standardize()` into a *new* function `standardized()` that implements the *same* logic but works on a possibly *infinite* stream of data, provided as an `iterable`, instead of a *finite* `sequence`.
To calculate a z-score, we need the stream's overall mean and standard deviation, and that is *impossible* to calculate if we do not know how long the stream is, and, in particular, if it is *infinite*. So, `standardized()` first takes a sample from the `iterable` internally, and uses the sample's mean and standard deviation to calculate the z-scores.
Hint: `standardized()` *must* return a `generator` object. So, use a *generator expression* as the return value; unless you know about the `yield` statement already (cf., [reference <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://docs.python.org/3/reference/simple_stmts.html#the-yield-statement)).
```
def standardized(iterable, *, digits=3):
...
```
`standardized()` works almost like `standardize()` except that we use it with [next() <img height="12" style="display: inline-block" src="static/link_to_py.png">](https://docs.python.org/3/library/functions.html#next) to obtain the z-scores one by one.
```
z_scores = standardized(data)
z_scores
type(z_scores)
next(z_scores)
```
**Q2.10.1**: `standardized()` allows us to go over an *infinite* stream of z-scores. What we want to do instead is to loop over the stream's raw numbers and skip the outliers. In the remainder of this exercise, you look at the parts that make up the `skip_outliers()` function below to achieve precisely that.
The first steps in `skip_outliers()` are the same as in `standardized()`: We take a `sample` from the stream of `data` and calculate its statistics.
```
sample = ...
seq_mean = ...
seq_std = ...
```
**Q2.10.2**: Just as in `standardized()`, write a *generator expression* that produces z-scores one by one! However, instead of just generating a z-score, the resulting `generator` object should produce `tuple` objects consisting of a "raw" number from `data` and its z-score.
Hint: Look at the revisited "*Averaging Even Numbers*" example in [Chapter 7 <img height="12" style="display: inline-block" src="static/link_to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/master/07_sequences_00_content.ipynb#Example:-Averaging-all-even-Numbers-in-a-List-%28revisited%29) for some inspiration, which also contains a generator expression producing `tuple` objects.
```
standardizer = (... for ... in data)
```
`standardizer` should produce `tuple` objects.
```
next(standardizer)
```
**Q2.10.3**: Write another generator expression that loops over `standardizer`. It contains an `if`-clause that keeps only numbers with an absolute z-score below the `threshold_z`. If you fancy, use *tuple unpacking*.
```
threshold_z = 1.96
no_outliers = (... for ... in standardizer if ...)
```
`no_outliers` should produce `int` objects.
```
next(no_outliers)
```
**Q2.10.4**: Lastly, put everything together in the `skip_outliers()` function! Make sure you refer to `iterable` inside the function and not the global `data`.
```
def skip_outliers(iterable, *, threshold_z=1.96):
sample = ...
seq_mean = ...
seq_std = ...
standardizer = ...
no_outliers = ...
return no_outliers
```
Now, we can create a `generator` object and loop over the `data` in the stream with outliers skipped. Instead of the default `1.96`, we use a `threshold_z` of only `0.05`: That filters out all numbers except `42`.
```
skipper = skip_outliers(data, threshold_z=0.05)
skipper
type(skipper)
next(skipper)
```
**Q2.11**: You implemented the functions `mean()`, `std()`, `standardize()`, `standardized()`, and `skip_outliers()`. Which of them are **eager**, and which are **lazy**? How do these two concepts relate to **finite** and **infinite** data?
< your answer >
Given a binary matrix A, we want to flip the image horizontally, then invert it, and return the resulting image.
To flip an image horizontally means that each row of the image is reversed. For example, flipping [1, 1, 0] horizontally results in [0, 1, 1].
To invert an image means that each 0 is replaced by 1, and each 1 is replaced by 0. For example, inverting [0, 1, 1] results in [1, 0, 0].
Flip each row left to right, then replace every 0 with 1 and every 1 with 0.
```
# Example 1
input_a = [[1,1,0],
[1,0,1],
[0,0,0]]
output_a = [[1,0,0],
[0,1,0],
[1,1,1]]
# Explanation: First reverse each row
# [[0,1,1],
# [1,0,1],
# [0,0,0]].
# Then, invert the image
# [[1,0,0],
# [0,1,0],
# [1,1,1]]
# Example 2:
input_b = [[1,1,0,0],
[1,0,0,1],
[0,1,1,1],
[1,0,1,0]]
output_b = [[1,1,0,0],
[0,1,1,0],
[0,0,0,1],
[1,0,1,0]]
# Explanation: First reverse each row:
# [[0,0,1,1],
# [1,0,0,1],
# [1,1,1,0],
# [0,1,0,1]].
# Then invert the image
# [[1,1,0,0],
# [0,1,1,0],
# [0,0,0,1],
# [1,0,1,0]]
```
Notes:
1 <= A.length = A[0].length <= 20
0 <= A[i][j] <= 1
#### Example 1
```
print(input_a)
print(input_a[0][::-1])
print(input_a[1][::-1])
print(input_a[2][::-1])
for i in range(len(input_a)):
    input_a[i] = input_a[i][::-1]  # reverse every row (the original reversed only row 0)
input_a
for i in range(len(input_a)):
for j in range(len(input_a)):
if input_a[i][j] == 0:
input_a[i][j] = 1
else:
input_a[i][j] = 0
input_a
```
#### Example 2
```
# Example 2:
input_b = [[1,1,0,0],
[1,0,0,1],
[0,1,1,1],
[1,0,1,0]]
output_b = [[1,1,0,0],
[0,1,1,0],
[0,0,0,1],
[1,0,1,0]]
# Explanation: First reverse each row:
# [[0,0,1,1],
# [1,0,0,1],
# [1,1,1,0],
# [0,1,0,1]]
print(input_b[0][::-1])
print(input_b[1][::-1])
print(input_b[2][::-1])
print(input_b[3][::-1])
for i in range(len(input_b)):
input_b[i] = input_b[i][::-1]
input_b
# Then invert the image
# [[1,1,0,0],
# [0,1,1,0],
# [0,0,0,1],
# [1,0,1,0]]
for i in range(len(input_b)):
for j in range(len(input_b)):
if input_b[i][j] == 0:
input_b[i][j] = 1
else:
input_b[i][j] = 0
input_b
%time
def flipAndInvertImage(input_b):
for k in range(len(input_b)):
input_b[k] = input_b[k][::-1]
for i in range(len(input_b)):
for j in range(len(input_b)):
if input_b[i][j] == 0:
input_b[i][j] = 1
else:
input_b[i][j] = 0
return input_b
%time
print(flipAndInvertImage(input_a))
print(output_a)
%time
print(flipAndInvertImage(input_b))
print(output_b)
```
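For reference, the same transformation can be written more compactly with a nested list comprehension; this sketch also indexes each row by its own length, so it works for non-square inputs too:
```
def flip_and_invert(image):
    # Reverse each row, then replace every bit b with 1 - b.
    return [[1 - b for b in row[::-1]] for row in image]

print(flip_and_invert([[1, 1, 0], [1, 0, 1], [0, 0, 0]]))
# [[1, 0, 0], [0, 1, 0], [1, 1, 1]]
```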
<!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png">
*This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).*
*The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!*
<!--NAVIGATION-->
< [Understanding Data Types in Python](02.01-Understanding-Data-Types.ipynb) | [Contents](Index.ipynb) | [Computation on NumPy Arrays: Universal Functions](02.03-Computation-on-arrays-ufuncs.ipynb) >
<a href="https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/02.02-The-Basics-Of-NumPy-Arrays.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a>
# The Basics of NumPy Arrays
Data manipulation in Python is nearly synonymous with NumPy array manipulation: even newer tools like Pandas ([Chapter 3](03.00-Introduction-to-Pandas.ipynb)) are built around the NumPy array.
This section will present several examples of using NumPy array manipulation to access data and subarrays, and to split, reshape, and join the arrays.
While the types of operations shown here may seem a bit dry and pedantic, they comprise the building blocks of many other examples used throughout the book.
Get to know them well!
We'll cover a few categories of basic array manipulations here:
- *Attributes of arrays*: Determining the size, shape, memory consumption, and data types of arrays
- *Indexing of arrays*: Getting and setting the value of individual array elements
- *Slicing of arrays*: Getting and setting smaller subarrays within a larger array
- *Reshaping of arrays*: Changing the shape of a given array
- *Joining and splitting of arrays*: Combining multiple arrays into one, and splitting one array into many
## NumPy Array Attributes
First let's discuss some useful array attributes.
We'll start by defining three random arrays, a one-dimensional, two-dimensional, and three-dimensional array.
We'll use NumPy's random number generator, which we will *seed* with a set value in order to ensure that the same random arrays are generated each time this code is run:
```
import numpy as np
np.random.seed(0) # seed for reproducibility
x1 = np.random.randint(10, size=6) # One-dimensional array
x2 = np.random.randint(10, size=(3, 4)) # Two-dimensional array
x3 = np.random.randint(10, size=(3, 4, 5)) # Three-dimensional array
```
Each array has attributes ``ndim`` (the number of dimensions), ``shape`` (the size of each dimension), and ``size`` (the total size of the array):
```
print("x3 ndim: ", x3.ndim)
print("x3 shape:", x3.shape)
print("x3 size: ", x3.size)
```
Another useful attribute is the ``dtype``, the data type of the array (which we discussed previously in [Understanding Data Types in Python](02.01-Understanding-Data-Types.ipynb)):
```
print("dtype:", x3.dtype)
```
Other attributes include ``itemsize``, which lists the size (in bytes) of each array element, and ``nbytes``, which lists the total size (in bytes) of the array:
```
print("itemsize:", x3.itemsize, "bytes")
print("nbytes:", x3.nbytes, "bytes")
```
In general, we expect that ``nbytes`` is equal to ``itemsize`` times ``size``.
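A quick check of that relationship for the arrays defined above (an added aside, not part of the original text):
```
print(x3.itemsize * x3.size == x3.nbytes)  # expected: True
```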
## Array Indexing: Accessing Single Elements
If you are familiar with Python's standard list indexing, indexing in NumPy will feel quite familiar.
In a one-dimensional array, the $i^{th}$ value (counting from zero) can be accessed by specifying the desired index in square brackets, just as with Python lists:
```
x1
x1[0]
x1[4]
```
To index from the end of the array, you can use negative indices:
```
x1[-1]
x1[-2]
```
In a multi-dimensional array, items can be accessed using a comma-separated tuple of indices:
```
x2
x2[0, 0]
x2[2, 0]
x2[2, -1]
```
Values can also be modified using any of the above index notation:
```
x2[0, 0] = 12
x2
```
Keep in mind that, unlike Python lists, NumPy arrays have a fixed type.
This means, for example, that if you attempt to insert a floating-point value to an integer array, the value will be silently truncated. Don't be caught unaware by this behavior!
```
x1[0] = 3.14159 # this will be truncated!
x1
```
## Array Slicing: Accessing Subarrays
Just as we can use square brackets to access individual array elements, we can also use them to access subarrays with the *slice* notation, marked by the colon (``:``) character.
The NumPy slicing syntax follows that of the standard Python list; to access a slice of an array ``x``, use this:
``` python
x[start:stop:step]
```
If any of these are unspecified, they default to the values ``start=0``, ``stop=``*``size of dimension``*, ``step=1``.
We'll take a look at accessing sub-arrays in one dimension and in multiple dimensions.
### One-dimensional subarrays
```
x = np.arange(10)
x
x[:5] # first five elements
x[5:] # elements after index 5
x[4:7] # middle sub-array
x[::2] # every other element
x[1::2] # every other element, starting at index 1
```
A potentially confusing case is when the ``step`` value is negative.
In this case, the defaults for ``start`` and ``stop`` are swapped.
This becomes a convenient way to reverse an array:
```
x[::-1] # all elements, reversed
x[5::-2] # reversed every other from index 5
```
### Multi-dimensional subarrays
Multi-dimensional slices work in the same way, with multiple slices separated by commas.
For example:
```
x2
x2[:2, :3] # two rows, three columns
x2[:3, ::2] # all rows, every other column
```
Finally, subarray dimensions can even be reversed together:
```
x2[::-1, ::-1]
```
#### Accessing array rows and columns
One commonly needed routine is accessing of single rows or columns of an array.
This can be done by combining indexing and slicing, using an empty slice marked by a single colon (``:``):
```
print(x2[:, 0]) # first column of x2
print(x2[0, :]) # first row of x2
```
In the case of row access, the empty slice can be omitted for a more compact syntax:
```
print(x2[0]) # equivalent to x2[0, :]
```
### Subarrays as no-copy views
One important–and extremely useful–thing to know about array slices is that they return *views* rather than *copies* of the array data.
This is one area in which NumPy array slicing differs from Python list slicing: in lists, slices will be copies.
Consider our two-dimensional array from before:
```
print(x2)
```
Let's extract a $2 \times 2$ subarray from this:
```
x2_sub = x2[:2, :2]
print(x2_sub)
```
Now if we modify this subarray, we'll see that the original array is changed! Observe:
```
x2_sub[0, 0] = 99
print(x2_sub)
print(x2)
```
This default behavior is actually quite useful: it means that when we work with large datasets, we can access and process pieces of these datasets without the need to copy the underlying data buffer.
### Creating copies of arrays
Despite the nice features of array views, it is sometimes useful to instead explicitly copy the data within an array or a subarray. This can be most easily done with the ``copy()`` method:
```
x2_sub_copy = x2[:2, :2].copy()
print(x2_sub_copy)
```
If we now modify this subarray, the original array is not touched:
```
x2_sub_copy[0, 0] = 42
print(x2_sub_copy)
print(x2)
```
## Reshaping of Arrays
Another useful type of operation is reshaping of arrays.
The most flexible way of doing this is with the ``reshape`` method.
For example, if you want to put the numbers 1 through 9 in a $3 \times 3$ grid, you can do the following:
```
grid = np.arange(1, 10).reshape((3, 3))
print(grid)
```
Note that for this to work, the size of the initial array must match the size of the reshaped array.
Where possible, the ``reshape`` method will use a no-copy view of the initial array, but with non-contiguous memory buffers this is not always the case.
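A quick check of this view behavior (an added aside, not part of the original text):
```
flat = np.arange(6)
mat = flat.reshape((2, 3))  # for a contiguous array this is typically a view
mat[0, 0] = 99
print(flat)                 # the change shows up in the original array
```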
Another common reshaping pattern is the conversion of a one-dimensional array into a two-dimensional row or column matrix.
This can be done with the ``reshape`` method, or more easily done by making use of the ``newaxis`` keyword within a slice operation:
```
x = np.array([1, 2, 3])
# row vector via reshape
x.reshape((1, 3))
# row vector via newaxis
x[np.newaxis, :]
# column vector via reshape
x.reshape((3, 1))
# column vector via newaxis
x[:, np.newaxis]
```
We will see this type of transformation often throughout the remainder of the book.
## Array Concatenation and Splitting
All of the preceding routines worked on single arrays. It's also possible to combine multiple arrays into one, and to conversely split a single array into multiple arrays. We'll take a look at those operations here.
### Concatenation of arrays
Concatenation, or joining of two arrays in NumPy, is primarily accomplished using the routines ``np.concatenate``, ``np.vstack``, and ``np.hstack``.
``np.concatenate`` takes a tuple or list of arrays as its first argument, as we can see here:
```
x = np.array([1, 2, 3])
y = np.array([3, 2, 1])
np.concatenate([x, y])
```
You can also concatenate more than two arrays at once:
```
z = [99, 99, 99]
print(np.concatenate([x, y, z]))
```
It can also be used for two-dimensional arrays:
```
grid = np.array([[1, 2, 3],
[4, 5, 6]])
# concatenate along the first axis
np.concatenate([grid, grid])
# concatenate along the second axis (zero-indexed)
np.concatenate([grid, grid], axis=1)
```
For working with arrays of mixed dimensions, it can be clearer to use the ``np.vstack`` (vertical stack) and ``np.hstack`` (horizontal stack) functions:
```
x = np.array([1, 2, 3])
grid = np.array([[9, 8, 7],
[6, 5, 4]])
# vertically stack the arrays
np.vstack([x, grid])
# horizontally stack the arrays
y = np.array([[99],
[99]])
np.hstack([grid, y])
```
Similarly, ``np.dstack`` will stack arrays along the third axis.
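For example (an added aside, not part of the original text):
```
grid = np.array([[9, 8, 7],
                 [6, 5, 4]])
# stack two copies of grid along a new third axis; the result has shape (2, 3, 2)
np.dstack([grid, grid]).shape
```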
### Splitting of arrays
The opposite of concatenation is splitting, which is implemented by the functions ``np.split``, ``np.hsplit``, and ``np.vsplit``. For each of these, we can pass a list of indices giving the split points:
```
x = [1, 2, 3, 99, 99, 3, 2, 1]
x1, x2, x3 = np.split(x, [3, 5])
print(x1, x2, x3)
```
Notice that *N* split points lead to *N + 1* subarrays.
The related functions ``np.hsplit`` and ``np.vsplit`` are similar:
```
grid = np.arange(16).reshape((4, 4))
grid
upper, lower = np.vsplit(grid, [2])
print(upper)
print(lower)
left, right = np.hsplit(grid, [2])
print(left)
print(right)
```
Similarly, ``np.dsplit`` will split arrays along the third axis.
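For example (an added aside, not part of the original text):
```
cube = np.arange(16).reshape((2, 2, 4))
front, back = np.dsplit(cube, [2])
print(front.shape, back.shape)  # (2, 2, 2) (2, 2, 2)
```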
<!--NAVIGATION-->
< [Understanding Data Types in Python](02.01-Understanding-Data-Types.ipynb) | [Contents](Index.ipynb) | [Computation on NumPy Arrays: Universal Functions](02.03-Computation-on-arrays-ufuncs.ipynb) >
<a href="https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/02.02-The-Basics-Of-NumPy-Arrays.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a>
# Overview
In its current form, this notebook serves as a preliminary proposal for my final project for the DATA 512 (Human-Centered Data Science) course. The final project is meant to be a reproducible, useful, and reader-friendly body of research and analyses. The subject matter, datasets, and investigative questions are left for students to finalize.
This proposal outlines my intention to research the growth and spread of the __prison-industrial complex (PIC)__, a broad term referring to the increasing construction of, investment in, and use of prisons by government and corporate entities. It includes background information, my specific experimental questions and resources, and anticipated issues.
# Motivation & Problem Statement
My interest in the topic comes from exposure to notable authors, activists, and public figures who point to the PIC as a source of continued oppression along racial, economic, or other lines. While such issues have recently bubbled to the top of mainstream discourse, the PIC and its critics are long-standing parts of society in the US, and warrant further debate.
Many public-policy or governmental decisions involve rigorous scientific study to guarantee optimal socio-economic results. An analysis of the PIC's roots (and effects) may reveal useful truths about the operational effectiveness and efficiency of American prisons. Officials whose work involves budgets, public health, or law enforcement could all find value in this work. On a more human-centered, practical level, resulting analyses can enrich public debate around the effectiveness and ethicality of American incarceration. Understanding the PIC can inform everyday people's politics and shape the PIC's future.
In short, my hope is to learn about the growth of the PIC over time, and the corresponding parties that have contributed to or profited from that growth. I want to track the time periods and geographical locations in which the PIC exploded into its current state, but also see who benefitted (or suffered) from that boom.
# Data Selection
To investigate the PIC, there are two major groups by which all necessary data can be categorized:
1. Growth in size (*e.g.*, measuring the number of prisons, or number of imprisoned people, etc.)
2. Growth in scope (*e.g.*, measuring the amount of corporate/governmental investments, budget allocations, costs, etc.)
For the first type of data, I rely on a public-interest analysis project called ["Communicating With Prisoners"](https://www.acrosswalls.org/about/), a collection of primary sources, data, and references dealing with US prisons. I am specifically planning to use their dataset, ["Prisoners by State and Sex, U.S. 1880-2010"](https://www.acrosswalls.org/datasets/prisoners-us-state-sex-panel/) to analyze changes in prison population over time. It contains population counts for federal and state prisons (from 1880-2010), on an annual and state-by-state basis. Although it is difficult to verify, it notes that this data is pulled from past US Census surveys and yearly US Bureau of Justice statistics (from their "Prisoner" series). The authors of the project release all copyrights under the CC0 1.0 Universal Public Domain Dedication license, and encourage free use and distribution thereof. And while it is a suitable corpus for tracking prison size over time, this set does have ethical issues attached. Namely, they are not the direct data source, but compilers. When drawing conclusions, I have to be wary of errors and author biases in accumulating and organizing this un-verified data. Also, these datasets in their raw form can have (however rare) identifying photographs or demographics, so I will need to perform sanitizing/anonymizing operations on the same.
For the second part of the research push, I will use a suite of datasets regarding [Expenditure and Employment Data for the Criminal Justice System](https://www.icpsr.umich.edu/web/NACJD/series/87), hosted by the National Archive of Criminal Justice Data. It is a part of the US Bureau of Justice's "Justice Expenditure and Employment" data collection program. Specifically, it has payroll information, operational costs, and more, on a state-by-state basis. It is a heterogeneous mixture of data files, which effectively span every year from 1971-2003. Per the access website, "the public-use files in this collection are available for access by the general public." The program itself is intended to be fully transparent and accessible, although specific licenses are not mentioned. This data, which will help me track changes in prison "business" over time, has no foreseeable ethical conflicts. It is sanitized by pertinent government bodies, and is only vulnerable to erroneous provision of data in specific states or times.
# Unknowns & Dependencies
There are certainly factors outside of my control which will inhibit timely analysis. Specifically, "free and open" data on this topic is messy and inconsistent. Not all sources of data offer enough years' worth of data, or state-wise splits. Other data on related issues like prison labor, for-profit prison finances, and more are behind a paywall. The data that *is* available is often ill-formatted and scattered across files. As such, it may prove difficult to homogenize Excel/SAS data, clean table headings and values, link the two different "categories" of data, and provide a reproducible analysis. If it turns out that these initial data sources are incompatible or insufficient, that may delay the production of any meaningful end-product. For example, the two datasets could have different granularities for counting populations (e.g., state vs. federal level). Or, the prisons for which expenses are counted in one set may not be the ones for which populations are counted in the other. These are all serious but not insurmountable hurdles for this project.
# Session 6:
## Machine learning for causal inference
## Agenda
1. [A general problem](#A-general-problem)
1. [Inference with Lasso](#Inference-with-Lasso)
1. [Double Machine Learning](#Double-Machine-Learning)
# A general problem
## Estimate treatment effect with selection
Often we are interested in estimating average treatment effect (ATE) of $T$ on $y$ in observational data.
- Matching has for a long time been the defacto standard
- (assuming we measure enough variables to assignment conditionally random)
- Can we somehow solve the problem using machine learning?
## Estimate treatment effect with selection
[Chernozhukov et al. (2018)](http://economics.mit.edu/files/12538) assume that we have the following data generating process
\begin{align}
y=& T\theta_0+g_0(X)+U\\
T=&m_0(X) + V\\
E[U|X,T]=&0\\
E[V|X]=&0\\
\end{align}
Basic model properties:
- The outcome $y$ is confounded by unknown nuisance function $g_0(\cdot)$
- The treatment $T$ suffers from selection on observables, where $m_0$ is an unknown propensity function
- Note: we assume no selection on unobservables (only a "mild" econometric problem); a small simulation of this DGP follows below
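To make the setup concrete, here is a minimal simulation of this data generating process (my own illustration; the choices of $g_0$, $m_0$, $\theta_0$ and the noise scales are arbitrary):
```
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 10
theta0 = 0.5

X = rng.normal(size=(n, p))
g0 = lambda X: np.sin(X[:, 0]) + X[:, 1] ** 2          # unknown nuisance in the outcome
m0 = lambda X: 1 / (1 + np.exp(-(X[:, 0] + X[:, 2])))  # unknown propensity-type function

V = rng.normal(scale=0.5, size=n)
U = rng.normal(size=n)
T = m0(X) + V                 # treatment with selection on observables
y = theta0 * T + g0(X) + U    # confounded outcome
```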
## Estimate treatment effect with selection
There have been multiple ways proposed to solve the problem:
- *Directly* modify ML to allow for estimation of treatment effect
- *Indirectly* modify estimation procedure to incorporate ML
# Inference with Lasso
## Treatment effects in linear models
Suppose we want to estimate a linear model parameter for the causal effect of treatment $T_i$ on $y_i$.
\begin{equation}y_i=\alpha T_i+x_i\beta+r_{yi}+\zeta_i\end{equation}
- We follow notation in [Belloni et al., 2015](https://doi.org/10.1257/jep.28.2.29)
- We let $r_{yi}$ be an approximation error (we don't know the functional form)
## Treatment effects in linear models (2)
How to select model, i.e. subset of $x$?
- Classic econometrics:
- Use **OLS** and include covariates based on theory or inference
- Problem: how to delete covariates systematically? Adjust for multiple hypothesis testing?
- Machine learning:
- Use **LASSO** to perform covariate selection
- Note - estimates are biased towards zero!
- Problem: we may omit potentially relevant variables!
- LASSO excludes possible confounders if they have little predictive power for $y_i$.
- Excluded variables may still have an effect through $T_i$, e.g. covariates correlated with treatment.
## Fixing the LASSO
A simple solution suggested by [Belloni et al. (2015)](https://doi.org/10.1257/jep.28.2.29) is to use Post-LASSO to correct for bias (sketched in code below):
- Step 1: estimate two LASSO models
- a) Regress $y_i$ on $x_i$
- b) Regress $T_i$ on $x_i$
- Step 2: run OLS using only variables that were kept in either LASSOs
What about inference?
- We need further assumptions on sparsity.
- See [Chernozhukov et al. (2015)](https://doi.org/10.1257/aer.p20151022) online appendix for details.
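A rough sketch of the double-selection steps with scikit-learn, reusing the simulated `X`, `y`, `T` from the earlier slide (an illustration, not the authors' implementation):
```
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

# Step 1: LASSO of y on x and of T on x; keep covariates selected in either
keep_y = np.abs(LassoCV(cv=5).fit(X, y).coef_) > 1e-8
keep_t = np.abs(LassoCV(cv=5).fit(X, T).coef_) > 1e-8
keep = keep_y | keep_t

# Step 2: OLS of y on T and the union of selected covariates
Z = np.column_stack([T, X[:, keep]])
alpha_hat = LinearRegression().fit(Z, y).coef_[0]
print("Post-LASSO estimate of the treatment effect:", alpha_hat)
```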
## Fixing the LASSO (2)
The LASSO picks the correct variables even if there are more variables than observations!
- In very high dimensions this can break down: the double selection may return more variables than the OLS step can estimate
# Double Machine Learning
## Linear DML
[Belloni et al. (2015)](https://doi.org/10.1257/jep.28.2.29), [Chernozhukov et al. (2015)](https://doi.org/10.1257/aer.p20151022) write down the two "prediction equations":
\begin{align}
y_i=&\alpha T_i+x_i^{\prime}\theta_y+r_{yi}+\zeta_i\\
T_{i}=&x_{i}^{\prime} \theta_{t}+r_{t i}+v_{i}
\end{align}
The two equations can be combined into a single structural equation (substitute $T_i$ into $y_i$):
\begin{align}y_{i}=&x_{i}^{\prime}\left(\alpha \theta_{t}+\theta_{y}\right)+\left(\alpha r_{t i}+r_{y i}\right)+\left(\alpha v_{i}+\zeta_{i}\right)\\=&x_{i}^{\prime} \pi+r_{c i}+\varepsilon_{i}\end{align}
## Linear DML (2)
[Chernozhukov et al. (2015)](https://doi.org/10.1257/aer.p20151022) state the following algorithm (sketched in code below):
1. run the two LASSO equations (as in POST-LASSO) and obtain residuals
- $\hat{\rho}_i^y$ from $y_i$ on $x_i$
- $\hat{\rho}_i^d$ from $T_i$ on $x_i$
1. run a regression of $\hat{\rho}_i^y$ on $\hat{\rho}_i^d$
What is the intuition?
- Similar to Frisch-Waugh-Lovell where we partial out effects.
- We partial out the effect of $x_i$ on both $y_i$ and on $T_i$ separately
- Innovation: We make double selection of variables using LASSO
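The same partialling-out idea as a sketch, again assuming the simulated `X`, `y`, `T` from above:
```
from sklearn.linear_model import LassoCV, LinearRegression

# residuals from the two LASSO prediction equations
rho_y = y - LassoCV(cv=5).fit(X, y).predict(X)
rho_t = T - LassoCV(cv=5).fit(X, T).predict(X)

# regress the y-residuals on the T-residuals (Frisch-Waugh-Lovell style)
alpha_hat = LinearRegression().fit(rho_t.reshape(-1, 1), rho_y).coef_[0]
print("Partialling-out estimate:", alpha_hat)
```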
## Linear DML (3)
[Belloni et al. (2015)](https://doi.org/10.1257/jep.28.2.29) simulate the performance of post-selection estimators
<center><img src='beh2014_fig1.JPG' alt="Drawing" style="width: 800px;"/></center>
## Naive solution
What happens when we use a machine learning estimator to directly estimate $\theta_0$ and $g_0(\cdot)$ in $y=T\theta_0+g_0(X)$ to control for confounders?
- We use an auxiliary subsample $I^c$ to compute $\hat{g}_0(\cdot)$ using a possibly non-linear model.
- Assume the subsample is half of the sample size.
$$\hat{\theta}_0=\frac{\frac{1}{n}\sum_{i\in I}T_i(y_i-\hat{g}_0(X_i))}{\frac{1}{n}\sum_{i\in I}T_i^2}$$
## Naive solution
We decompose the estimator into a scaled estimation error
$$\sqrt{n}(\hat{\theta}_0-\theta_0)=
\frac{\frac{1}{\sqrt{n}}\sum_{i\in I}T_iU_i}{\frac{1}{n}\sum_{i\in I}T_i^2}+\frac{\frac{1}{\sqrt{n}}\sum_{i\in I}T_i(g_0(X_i)-\hat{g}_0(X_i))}{\frac{1}{n}\sum_{i\in I}T_i^2}
$$
What could be problematic here?
## Naive problem
The issue is that $\hat{g}_0$ will be systematically biased because we curb overfitting, e.g. through regularization.
- Same problem arises for tree-based and neural network models.
- Estimator will have bias term that diverges and is not centered:
$$\frac{\frac{1}{\sqrt{n}}\sum_{i\in I}T_i(g_0(X_i)-\hat{g}_0(X_i))}{E[T_i^2]}
$$
## Orthogonalization
Suppose we also estimate $\hat{m}_0(\cdot)$ on the auxiliary sample $I^c$. We can then make the following estimate:
$$\check{\theta}_0=\frac{\frac{1}{n}\sum_{i\in I}\hat{V}_i(y_i-\hat{g}_0(X_i))}{\frac{1}{n}\sum_{i\in I}\hat{V}_iT_i}$$
where we use the estimated treatment residual $\hat{V}_i=T_i-\hat{m}_0(X_i)$.
## Orthogonalization
We decompose the estimator into a scaled estimation error
$$\sqrt{n}(\check{\theta}_0-\theta_0)=
\frac{\frac{1}{\sqrt{n}}\sum_{i\in I}T_iU_i}{\frac{1}{n}\sum_{i\in I}T_i^2}+\frac{\frac{1}{\sqrt{n}}\sum_{i\in I}(m_0(X_i)-\hat{m}_0(X_i))(g_0(X_i)-\hat{g}_0(X_i))}{\frac{1}{n}\sum_{i\in I}T_i^2}
$$
This solves the problem as the product of estimation errors vanishes.
## Orthogonalization
The first major contribution of [Chernozhukov et al. (2018)](http://economics.mit.edu/files/12538) is to show that, in general, the second (double-debiasing) procedure leads to consistent estimates and can be used to estimate average treatment effects.
- The proof depends on sample splitting - using an independent auxiliary sample for estimating $\hat{m}_0,\hat{g}_0$.
## Implementation details
Problem - what to do with the auxiliary sample?
To gain efficient estimates, [Chernozhukov et al. (2018)](http://economics.mit.edu/files/12538) propose:
- We rotate samples using **cross-fitting**: first use one part as the auxiliary sample, then the other. Like cross-validation in supervised ML.
- This is the second major contribution (a sketch follows below).
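A simplified cross-fitting sketch for the partially linear model, with random forests for both nuisance functions (my own illustration; it averages fold-wise estimates and is not the paper's exact estimator):
```
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

theta_folds = []
for aux_idx, main_idx in KFold(n_splits=2, shuffle=True, random_state=0).split(X):
    # fit the nuisance functions on the auxiliary fold
    g_hat = RandomForestRegressor(random_state=0).fit(X[aux_idx], y[aux_idx])  # outcome nuisance, E[y|X]
    m_hat = RandomForestRegressor(random_state=0).fit(X[aux_idx], T[aux_idx])  # propensity-type nuisance

    # orthogonalized estimate on the main fold
    V_hat = T[main_idx] - m_hat.predict(X[main_idx])
    num = np.mean(V_hat * (y[main_idx] - g_hat.predict(X[main_idx])))
    den = np.mean(V_hat * T[main_idx])
    theta_folds.append(num / den)

print("Cross-fitted DML estimate:", np.mean(theta_folds))
```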
## Implementation details
How do we estimate $\hat{m}_0,\hat{g}_0$? This can be done using cross-validation on auxiliary sample $I^c$. Available estimators:
- linear/logistic models, including regularized
- tree-based, including random forests and boosted trees
- neural networks
- kernel models, including support vector machines
## Extensions
The DML approach is extended in the paper to:
- compute Local Average Treatment Effects (LATE)
- estimation with Instrumental Variables (IV)
## The end
```
import tensorflow as tf
print(tf.__version__)
# additional imports
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Input, Conv2D, Dense, Flatten, Dropout
from tensorflow.keras.models import Model
# Load in the data
fashion_mnist = tf.keras.datasets.fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
print("x_train.shape:", x_train.shape)
# the data is only 2D!
# convolution expects height x width x color
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
print(x_train.shape)
# number of classes
K = len(set(y_train))
print("number of classes:", K)
# Build the model using the functional API
i = Input(shape=x_train[0].shape)
x = Conv2D(32, (3, 3), strides=2, activation='relu')(i)
x = Conv2D(64, (3, 3), strides=2, activation='relu')(x)
x = Conv2D(128, (3, 3), strides=2, activation='relu')(x)
x = Flatten()(x)
x = Dropout(0.2)(x)
x = Dense(512, activation='relu')(x)
x = Dropout(0.2)(x)
x = Dense(K, activation='softmax')(x)
model = Model(i, x)
# Compile and fit
# Note: make sure you are using the GPU for this!
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
r = model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=15)
# Plot loss per iteration
import matplotlib.pyplot as plt
plt.plot(r.history['loss'], label='loss')
plt.plot(r.history['val_loss'], label='val_loss')
plt.legend()
# Plot accuracy per iteration
plt.plot(r.history['accuracy'], label='acc')
plt.plot(r.history['val_accuracy'], label='val_acc')
plt.legend()
# Plot confusion matrix
from sklearn.metrics import confusion_matrix
import itertools
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
p_test = model.predict(x_test).argmax(axis=1)
cm = confusion_matrix(y_test, p_test)
plot_confusion_matrix(cm, list(range(10)))
# Label mapping
labels = '''T-shirt/top
Trouser
Pullover
Dress
Coat
Sandal
Shirt
Sneaker
Bag
Ankle boot'''.split("\n")
# Show some misclassified examples
misclassified_idx = np.where(p_test != y_test)[0]
i = np.random.choice(misclassified_idx)
plt.imshow(x_test[i].reshape(28,28), cmap='gray')
plt.title("True label: %s Predicted: %s" % (labels[y_test[i]], labels[p_test[i]]));
```
# Libraries and datasets
```
import pandas as pd
import numpy as np
import math
import random
import matplotlib.pyplot as plt
import re
births = pd.read_csv('../data/raw/Cleaned_data_set.csv')
births_nou = births.loc[births['admit_NICU'] != 'U']
colnames = np.array(births.columns)
colnames
```
# Visualizations
### Mother's BMI
```
births['mother_bmi_recode'] = pd.cut(births['mothers_bmi'],[10.0,19.0, 25.0,30.0,35.0,40.0,90.0], labels = ['u','h','o','ob1','ob2','ob3'])
```
## Mother's Age
```
births['mothers_age_groups2']= pd.cut(births['mothers_age'], [0,14,19,24,29,34,39,44,49,100],
labels = ['<15', '15-19', '20-24','25-29','30-34','35-39','40-44','45-49','50-100'])
bmi_plt = births.loc[births['admit_NICU'] != 'U'].groupby(['mothers_age_groups2'])['admit_NICU'].value_counts().unstack()
bmi_plt.plot(kind = 'bar', logy = True, stacked = True, color = ['#ff7f00','#1f77b4'])
pct_bplot(births,'mothers_age_groups2')
```
## Prior-termination births
```
pter_plt = births_nou.groupby(['prior_terminations'])['admit_NICU'].value_counts().unstack()
pter_plt.plot(kind = 'bar', logy = True, stacked = True, color = ['#ff7f00','#1f77b4'])
pct_bplot(births_nou,'prior_terminations')
lst = births[['pre_preg_hypten', 'gest_hypten', 'gest_diab', 'pre_preg_diab']].count()
# bmi_plt = births.loc[births['admit_NICU'] != 'U'].groupby(['mothers_age_groups2'])['admit_NICU'].value_counts().unstack()
# bmi_plt.plot(kind = 'bar', logy = True, stacked = True, color = ['#ff7f00','#1f77b4'])
# pct_bplot(births,'mothers_age_groups2')
lst.plot(kind = 'bar')
def make_pctdf(dataframe = births_nou, group = str, target = 'admit_NICU', columns_titles = ['Y','N']):
    df1 = dataframe.loc[dataframe[group] == 'Y'].groupby([group])[[target]].count()
    df2 = dataframe.loc[dataframe[group] == 'Y'].groupby([group])[target].value_counts().unstack()
    df2 = df2.reindex(columns=columns_titles)
    df3 = pd.merge(df1,df2, left_index = True, right_index = True)
    pct_df = pd.DataFrame(list(map(lambda x: df3[x]/df3[target] * 100, df3.columns[1:])))
    return pct_df.T
cnames_list = ['gest_diab','pre_preg_diab', 'gest_hypten', 'pre_preg_hypten']
test = pd.concat(map( lambda cname: make_pctdf( births_nou, cname ), cnames_list ))
print(test)
test.plot(kind = 'bar', stacked = True, legend = False)
test1 = make_pctdf(births_nou,'gest_diab')
test2 = make_pctdf(births_nou, 'pre_preg_diab')
test3 = pd.concat([test1,test2], axis =0)
print(test3)
#test3.plot(kind = 'bar', stacked = True, legend = False)
# colors_list = ['#5cb85c','#5bc0de','#d9534f']
# Change this line to plot percentages instead of absolute values
ax = (test3.div(test3.sum(1), axis=0)).plot(kind='bar',figsize=(15,4),width = 0.8,stacked=True)#,color = colors_list,edgecolor=None)
plt.legend(labels=test3.columns,fontsize= 14)
plt.title("Percentage of Respondents' Interest in Data Science Areas",fontsize= 16)
plt.xticks(fontsize=14)
for spine in plt.gca().spines.values():
spine.set_visible(False)
plt.yticks([])
plt.axhline(y=.0914, color='r', linestyle='-', label = '9%')
# Add this loop to add the annotations
for p in ax.patches:
width, height = p.get_width(), p.get_height()
x, y = p.get_xy()
ax.annotate('{:.0%}'.format(height), (p.get_x()+.5*width, p.get_y() + height - 0.1), ha = 'center')
test_plt = test.plot(kind = 'bar', stacked = True, legend = False)
plt.axhline(y=9.14, color='r', linestyle='-', label = '9%')
test_plt.set_xticklabels(('Gestational Diabetes', "Pre-pregancy Diabetes", 'Gestational Hypertension', 'Pre-pregnancy Hypertension'))
test_plt.set_title("Mother's health factors resulting NICU admittance rates")
# ax.set(title='USA births by day of year (1969-1988)',ylabel='average daily births')
# viol_plot = viol_counts[:10].plot(kind='bar')
# viol_plot.set_ylabel('No. of Violations')
# viol_plot.set_title('Audit Results')
# viol_plot.set_xticks(ind+width)
# viol_plot.set_xticklabels( ('A', 'B','C') )
def pct_bplot(dataframe, group = str, target = 'admit_NICU', columns_titles = ['Y','N'] ):
df1 = dataframe.groupby([group])[[target]].count()
df2 = dataframe.groupby([group])[target].value_counts().unstack()
df2 = df2.reindex(columns=columns_titles)
df3 = pd.merge(df1,df2, left_index = True, right_index = True)
pct_df = pd.DataFrame(list(map(lambda x: df3[x]/df3[target] * 100, df3.columns[1:])))
pct_df = pct_df.T
pct_df.plot(kind = 'bar', stacked = True)
colnames
```
```
import numpy as np
import csv
%matplotlib inline
import pandas as pd
import re
from datetime import datetime
id_to_region = { 12: 1, 88: 1, 87: 1, 209: 1, 45: 1, 231: 1, 261: 1, 13: 1, 158: 2, 249: 2, 113: 2, 114: 2, 79: 2, 4: 2, 232: 2, 148: 2, 144: 2, 211: 2, 125: 2, 246: 3, 50: 3, 48: 3, 68: 3, 90: 3, 186: 3, 100: 3, 230: 3, 163: 3, 161: 3, 164: 3, 234: 3, 107: 3, 170: 3, 162: 3, 229: 3, 233: 3, 137: 3, 224: 3, 143: 4, 142: 4, 239: 4, 238: 4, 151: 4, 24: 4, 75: 5, 236: 5, 263: 5, 262: 5, 140: 5, 141: 5, 237: 5, 166: 6, 41: 6, 74: 6, 42: 6, 152: 6, 116: 6, 244: 7, 120: 7, 243: 7, 127: 7, 128: 7,
}
valid_ids = id_to_region.keys()
columns = ['weekday', 'hour', 'region', 'count', 'total_amount_sum', 'duration_min', 'duration_sum', 'duration_max']
def filter_pickup(data_pickup):
# Filter all trips that start and end in the above regions
# Create a copy
    data_pickup_filtered = data_pickup.copy()
# Initialize the list to record which region the origin is located
in_which_region_list = []
# Loop through each row
for i in range(0, len(data_pickup)):
in_which_region = -1 # Initialize with -1
if data_pickup["PULocationID"][i] in valid_ids and data_pickup["PULocationID"][i] == data_pickup["DOLocationID"][i]:
in_which_region = id_to_region[data_pickup["PULocationID"][i]]
in_which_region_list.append(in_which_region)
data_pickup_filtered['region'] = in_which_region_list
# Keep only rows that fall inside one of the regions above
data_pickup_filtered = data_pickup_filtered[data_pickup_filtered.region != -1]
# Reset the index
data_pickup_filtered = data_pickup_filtered.dropna(how='any').reset_index(drop=True)
return data_pickup_filtered
def process_datetime(data_datetime):
weekday_list = []
hour_list = []
duration_list = []
count = [1] * len(data_datetime)
for i in range(len(data_datetime)):
start = pd.to_datetime(data_datetime["tpep_pickup_datetime"][i])
end = pd.to_datetime(data_datetime["tpep_dropoff_datetime"][i])
weekday_list.append(start.weekday())
hour_list.append(start.hour)
duration_list.append(int((end-start).total_seconds()))
data_datetime["weekday"] = weekday_list
data_datetime["hour"] = hour_list
data_datetime["duration"] = duration_list
data_datetime["count"] = count
return data_datetime
def compute_average(to_compute):
df = to_compute.copy()
# Drop columns that are not needed for the aggregation
df = df.drop(
[
"PULocationID",
"DOLocationID",
"tpep_pickup_datetime",
"tpep_dropoff_datetime",
"VendorID",
"passenger_count",
"RatecodeID",
"store_and_fwd_flag",
"payment_type",
"fare_amount",
"extra",
"mta_tax",
"tip_amount",
"tolls_amount",
"improvement_surcharge",
],
axis=1
)
# groupby
df = df.groupby(["weekday","hour","region"], as_index=False).agg(
{"count": "count", "total_amount": "sum", "duration": ["min", "sum", "max"]}
)
# rename the columns
df.columns = columns
return df
def process_pipeline(df, chunk, i):
chunk = filter_pickup(chunk)
chunk = process_datetime(chunk)
chunk = compute_average(chunk)
# combine chunk to df
if i == 0:
df = chunk
else:
df = pd.concat([df, chunk], ignore_index=True)
df = df.groupby(["weekday","hour","region"], as_index=False).agg(
{
"count": "sum",
"total_amount_sum": "sum",
"duration_min": "min",
"duration_sum": "sum",
"duration_max": "max",
}
)
return df
chunksize = 10 ** 5
filename = "data/2018_Yellow_Taxi_Trip_Data.csv"
output = "data/2018_Taxi_Processed.csv"
current_size = 0
i = 0
df = pd.DataFrame(columns=columns)
for chunk in pd.read_csv(filename, chunksize=chunksize):
current_size += len(chunk)
# reset indices
chunk = chunk.dropna(how='any').reset_index(drop=True)
print("Iterations run: %s, rows read: %s" % (i, current_size))
df = process_pipeline(df, chunk, i)
i += 1
# save to file
print("saving to csv file %s" % output)
df.to_csv(output)
output = ("data/2018_Taxi_Aggregated.csv")
df = pd.read_csv("data/2018_Taxi_Processed.csv")
"""
We need to calculate the following:
Min, Average, Max number of trips (Cmin, C, Cmax) per hour per region
Average duration of trip (D) per hour per region
Average Fare of trip (F) per hour per region
"""
df["min_trips"] = 3600/(df.duration_sum/df["count"]+180)
```
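The docstring in the last cell lists three quantities per hour and region, but the code stops after `min_trips`. Below is a hedged continuation sketch, not taken from the original notebook: `avg_duration` and `avg_fare` follow directly from the aggregated columns, while the `max_trips` formula is an assumption that simply mirrors the 180-second turnaround convention used for `min_trips`.
```
df["avg_duration"] = df["duration_sum"] / df["count"]   # D: mean trip duration (seconds)
df["avg_fare"] = df["total_amount_sum"] / df["count"]   # F: mean fare per trip
df["max_trips"] = 3600 / (df["duration_min"] + 180)     # assumed Cmax: shortest trip plus 180 s turnaround
df.to_csv(output, index=False)                          # persist the aggregated table
```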
# Example Document
This is an example notebook to try out the ["Notebook as PDF"](https://github.com/betatim/notebook-as-pdf) extension. It contains a few plots from the excellent [matplotlib gallery](https://matplotlib.org/3.1.1/gallery/index.html).
To try out the extension click "File -> Download as -> PDF via HTML". This will convert this notebook into a PDF. This extension has three new features compared to the official "save as PDF" extension:
* it produces a PDF with the smallest number of page breaks,
* the original notebook is attached to the PDF; and
* this extension does not require LaTeX.
The created PDF will have as few pages as possible, in many cases only one. This is useful if you are exporting your notebook to a PDF for sharing with others who will view it on a screen.
To make it easier to reproduce the contents of the PDF at a later date, the original notebook is attached to the PDF. Not all PDF viewers know how to deal with attachments, which means you need to use Acrobat Reader or pdf.js to get the attachment out of the PDF. Preview on macOS does not know how to display or give you access to PDF attachments.
```
import numpy as np
import matplotlib.pyplot as plt
# Fixing random state for reproducibility
np.random.seed(19680801)
# Compute pie slices
N = 20
theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
radii = 10 * np.random.rand(N)
width = np.pi / 4 * np.random.rand(N)
colors = plt.cm.viridis(radii / 10.)
ax = plt.subplot(111, projection='polar')
ax.bar(theta, radii, width=width, bottom=0.0, color=colors, alpha=0.5)
```
Below we show some more lines that go up and down. These lines are noisy because we use a random number generator to create them. Fantastic, isn't it?
```
x = np.linspace(0, 10)
# Fixing random state for reproducibility
np.random.seed(19680801)
fig, ax = plt.subplots()
ax.plot(x, np.sin(x) + x + np.random.randn(50))
ax.plot(x, np.sin(x) + 0.5 * x + np.random.randn(50))
ax.plot(x, np.sin(x) + 2 * x + np.random.randn(50))
ax.plot(x, np.sin(x) - 0.5 * x + np.random.randn(50))
ax.plot(x, np.sin(x) - 2 * x + np.random.randn(50))
ax.plot(x, np.sin(x) + np.random.randn(50));
```
## Markdown Images
This image is a image inserted via Markdown image tag:

## How about math?
How is math handled and displayed? Force is mass times acceleration: $F = ma$.
What is the gravity of all this?
$$ F = G \frac{m_1 m_2}{d^2} $$
# Pneumonia X-Ray Classification.
> Predicting whether a Chest X-Ray has Pneumonia or is Normal.
- toc: false
- branch: master
- badges: true
- comments: true
- categories: [beginner]
- image: images/header-pneumonia.jpg

# Introduction
Recently, I have become very interested in the overlap between Deep Learning and Biology and decided to start learning about it. I came across an interesting challenge, where I try to build a binary classification Computer Vision model that predicts whether a chest X-ray shows Pneumonia or not. I also learned a nifty approach to deal with a problem that is common in Medical Datasets, which I will show you here.
I am going to be using [fastai](https://github.com/fastai/fastai) and [PyTorch](https://github.com/pytorch/pytorch) for this tutorial. I want to extend my thanks to the author of this [dataset](https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia) from Kaggle that we are going to be using today.
```
#hide
!pip install -Uqq fastbook
import fastbook
fastbook.setup_book()
#hide
!mkdir ~/.kaggle
!mkdir data
!cp kaggle.json ~/.kaggle/
!chmod 600 ~/.kaggle/kaggle.json
#hide
!kaggle datasets download -d paultimothymooney/chest-xray-pneumonia
#hide
!unzip *zip -d data && rm -rf *zip
#hide
from fastbook import *
```
Let's get the packages that we will need:
```
from fastai.vision.all import *
import matplotlib.pyplot as plt
import seaborn as sns
```
What does our data look like?
```
path = Path('data/chest_xray')
#hide
Path.BASE_PATH = path
path.ls()
```
It is already separated for us into the relevant folders. Awesome! Let's check inside one of them:
```
(path/'train').ls()
```
The folders are also separated into their respective classes. How many images do we have per category?
```
train = get_image_files(path/'train')
val = get_image_files(path/'val')
test = get_image_files(path/'test')
print(f"Train: {len(train)}, Valid: {len(val)}, Test: {len(test)}")
```
Our validation set has only 16 images! That won't be a good measure of how our model is performing, but we will tackle that later on.
Let us check the distribution of images between the two classes:
```
normal = get_image_files(path/'train'/'NORMAL')
pneumonia = get_image_files(path/'train'/'PNEUMONIA')
print(f"Normal Images: {len(normal)}. Pneumonia Images: {len(pneumonia)}")
data = [['Normal', len(normal)], ['Pneumonia', len(pneumonia)]]
df = pd.DataFrame(data, columns=['Class', 'Count'])
sns.barplot(x=df['Class'], y=df['Count']);
```
Remember the problem common to Medical Datasets I was talking about? We see that our dataset is imbalanced: our negative class (Normal) is about a third the size of our positive class. This is a problem. How do we solve it?
First, we will utilize some Data Augmentations. This artificially grows our dataset by applying some transforms to the images.
Second, we will use some lessons that I read from a wonderful paper: [Mateusz Buda, Atsuto Maki, and Maciej A Mazurowski. A systematic study of the class
imbalance problem in convolutional neural networks](https://arxiv.org/abs/1710.05381) that studies the problem of class imbalances and offers a way to solve it. I highly recommend reading the paper.
However, we need to build a Baseline model that we can later improve on.
For the Data Augmentations, we have to be careful to pick the ones that make sense for our X-Ray data. I picked Rotate and Zoom. If you think about it, transforms like flipping the image won't be useful since our body parts are in specific locations, e.g. our liver is on the right, and flipping the X-Ray would take it to the opposite side.
I also utilize a nifty trick called Presizing from the fastai team. The basic idea behind this approach is this: we first resize the image to a size bigger than what we want for the final image. For instance, here, I resize the image to 460x460 first, then later resize it to 224x224 and, at the same time, apply all the augmentations at once. That is the most important point: applying the final resize and the transforms at the same time, preferably as a batch transform on the GPU. This yields a higher-quality image than, let's say, applying them one by one, which may degrade the data. To learn more about presizing, check out this [notebook](https://github.com/fastai/fastbook/blob/master/05_pet_breeds.ipynb).
```
augs = [RandomResizedCropGPU(size=224, min_scale=0.75), Rotate(), Zoom()]
dblock = DataBlock(
blocks=(ImageBlock, CategoryBlock),
get_items=get_image_files,
get_y=parent_label,
splitter=GrandparentSplitter(train_name='train', valid_name='val'),
item_tfms=Resize(460),
batch_tfms=augs
)
```
Let us collect the data in a dataloaders object and show one batch.
```
dls = dblock.dataloaders(path)
dls.show_batch()
```
We are going to utilize transfer learning on the resnet18 architecture. Our metrics to guide us are going to be error rate and accuracy.
```
learn = cnn_learner(dls, resnet18, metrics=[error_rate, accuracy])
```
We use another great tool by fastai that helps us find a good learning rate. Something around 1x10<sup>-2</sup> will work okay according to the plot (the bottom scale is logarithmic).
```
learn.lr_find()
```
Let me explain what happens in the next few cells. In Transfer Learning, we need to retain the knowledge learned by the pretrained model. So we freeze all the earlier layers, chop off the last classification layer and replace it with a layer that has random weights and the correct number of outputs, two in this case (this is done by default in fastai when creating the learner through the cnn_learner method).
So, first we train the final layer (the one with random weights) for 3 epochs with the one-cycle training policy. Then we unfreeze the whole model, find a new suitable learning rate (because we are now updating all the weights) and train for a further 3 epochs.
```
learn.fit_one_cycle(3, lr_max=1e-2)
learn.unfreeze()
learn.lr_find()
```
This plot looks different from the other one, since we are now updating all the weights, not just the final random ones, and the first layers don't need much adjustment. 4x10<sup>-6</sup> is the suggested learning rate.
Let's train the whole model with the new learning rate:
```
learn.fit_one_cycle(3, lr_max=4.4e-6)
```
87.5% accuracy. Not bad for a start, but we will try ways to improve it.
Let us see how our model is doing by inspecting the Confusion Matrix.
```
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()
```
Four 'NORMAL' images are being classified as 'PNEUMONIA'. Could this be because our model doesn't have enough examples of the 'NORMAL' class to learn from? Let us investigate.
# Solving the Imbalance Problem
The solution to the problem, as with many solutions to problems in Deep Learning, is simple and something that can be implemented easily. Quoting from their conclusions in the paper:
* The method of addressing class imbalance that emerged as
dominant in almost all analyzed scenarios was **oversampling**.
* Oversampling should be applied to the level that completely eliminates the imbalance.
* Oversampling does not cause overfitting of convolutional neural networks.
Basically, Oversampling artificially enlarges the minority class by replicating it a number of times. The paper recommends replicating it until the imbalance is completely eliminated; therefore, our new oversampled 'NORMAL' class is going to be the original images repeated three times. And we don't have to worry about our model overfitting either!
```
os_normal = get_image_files(path/'train'/'NORMAL') * 3
pneumonia = get_image_files(path/'train'/'PNEUMONIA')
print(f"Normal Images: {len(os_normal)}. Pneumonia Images: {len(pneumonia)}")
data = [['Normal', len(os_normal)], ['Pneumonia', len(pneumonia)]]
os_df = pd.DataFrame(data, columns=['Class', 'Count'])
sns.barplot(x=os_df['Class'], y=os_df['Count']);
```
After the Oversampling, the distribution between the classes is almost at par. Now our dataset is balanced and we can train a new model on this balanced data.
Now we need a new way to split our dataset when loading it into a DataLoader. Our new Oversampled Path is going to be the Oversampled 'NORMAL' class, the original 'PNEUMONIA' class and the validation data.
Then we create two variables, train_idx and val_idx, that hold the indexes of the images in their respective splits, train or validation.
```
os_path = os_normal + pneumonia + val
train_idx = [i for i, fname in enumerate(os_path) if 'train' in str(fname)]
val_idx = [i for i, fname in enumerate(os_path) if 'val' in str(fname)]
L(train_idx), L(val_idx)
```
Now we have 7898 images in the training set instead of the original 5216, and we still have 16 validation images. We load them up in a dataloaders object, which our learner expects, find the new optimal learning rate and train the model:
```
dblock = DataBlock(
blocks=(ImageBlock, CategoryBlock),
get_items=get_image_files,
get_y=parent_label,
splitter=lambda x: [train_idx, val_idx],
item_tfms=Resize(460),
batch_tfms=augs
)
dls = dblock.dataloaders(path)
learn = cnn_learner(dls, resnet18, metrics=[error_rate, accuracy])
learn.lr_find()
learn.fit_one_cycle(3, lr_max=2.5e-2)
```
After just three epochs, we get 100% accuracy on the Validation Set. The Oversampling Solution worked well for us.
However, as I mentioned before, we only have 16 images in the Validation Set, so it's not a good measure of how well our model generalizes.
So I combined the Validation and Test Sets into one, and used that as my Validation Set to check how well my model generalizes.
```
merged_path = os_normal + pneumonia + val + test
train_idx = [i for i, fname in enumerate(merged_path) if 'train' in str(fname)]
val_idx = [i for i, fname in enumerate(merged_path) if 'train' not in str(fname)]
L(train_idx), L(val_idx)
```
We now have 640 images as our validation. How does our model perform with this new data?
```
#hide
dblock = DataBlock(
blocks=(ImageBlock, CategoryBlock),
get_items=get_image_files,
get_y=parent_label,
splitter=lambda x: [train_idx, val_idx],
item_tfms=Resize(460),
batch_tfms=augs
)
#hide
dls = dblock.dataloaders(path)
learn = cnn_learner(dls, resnet18, metrics=[error_rate, accuracy])
learn.lr_find()
#hide
learn.fit_one_cycle(3, lr_max=1e-2)
#hide
learn.unfreeze()
#hide
learn.lr_find()
learn.fit_one_cycle(5, lr_max=1e-4)
```
99.5% accuracy after 5 epochs looks good; it seems our model generalizes well.
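As a quick illustration (a hedged sketch, not part of the original training run), classifying a single image with the trained learner only needs `learn.predict`; `test` and `PILImage` come from the code above:
```
sample = test[0]  # any image path from the dataset
pred_class, pred_idx, probs = learn.predict(PILImage.create(sample))
print(f"{sample.name}: {pred_class} ({float(probs[pred_idx]):.2%} confidence)")
```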
See you next time!
<a href="https://colab.research.google.com/github/jbgh2/speech-denoising-wavenet/blob/master/Unet_playground.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Notebook to experiment with code
```
on_colab = True
try:
from google.colab import drive
except:
on_colab = False
print("Running on colab:", on_colab)
import os
import pathlib
if on_colab:
drive.mount('/content/gdrive') #, force_remount=True
BASE_DIR = "/content/gdrive/My Drive/Colab Notebooks"
unet_path = os.path.join(BASE_DIR, "DenoiseUNet")
else:
BASE_DIR = os.path.join(os.getcwd())
unet_path = BASE_DIR
print("Base dir:", BASE_DIR)
from zipfile import ZipFile
import soundfile as sf
import io
for s in ['clean', 'noisy']:
for c in ['train', 'test']:
data_dir = os.path.join(unet_path, f"data/NSDTSEA/{s}_{c}set_wav")
output_file = os.path.join(unet_path, f"data/NSDTSEA/{s}_{c}set_wav.zip")
#print(f"Zipping {data_dir} to {output_file}")
#!zip -qdgds 10m -r "{output_file}" "{data_dir}"
with ZipFile(output_file, "r") as zf:
filenames = zf.namelist()
print(f"{len(filenames)} files in {output_file}")
for f in filenames[:10]:
print(f"Load {f} with SoundFile")
sound_bytes = zf.read(f)
if len(sound_bytes) > 0:
data, sr = sf.read(io.BytesIO(sound_bytes))
print(len(data), sr)
else:
print("Ignoring. Empty file or directory")
#Test available GPU memory
show_gpu_mem = True
if on_colab and show_gpu_mem:
# memory footprint support libraries/code
!ln -sf /opt/bin/nvidia-smi /usr/bin/nvidia-smi
!pip install gputil psutil humanize
import psutil
import humanize
import os
import GPUtil as GPU
GPUs = GPU.getGPUs()
# XXX: only one GPU on Colab and isn’t guaranteed
gpu = GPUs[0]
def printm():
process = psutil.Process(os.getpid())
print("Gen RAM Free: " + humanize.naturalsize( psutil.virtual_memory().available ), " | Proc size: " + humanize.naturalsize( process.memory_info().rss))
print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil*100, gpu.memoryTotal))
printm()
from contextlib import contextmanager
import os
@contextmanager
def cd(newdir):
prevdir = os.getcwd()
os.chdir(os.path.expanduser(newdir))
try:
yield
finally:
os.chdir(prevdir)
if on_colab:
print("Update repo")
!git -C "{unet_path}" pull
print("Install requirements")
req_path = os.path.join(unet_path, "requirements.txt")
print(req_path)
!pip install -r "{req_path}"
if on_colab:
!pip install tensorflow-gpu==2.0.0
import tensorflow as tf
print("Versions:", tf.version.VERSION, tf.version.GIT_VERSION)
print("GPU availablilty:", tf.test.is_gpu_available())
print("Eager execution:", tf.executing_eagerly())
gpu_devices = tf.config.experimental.list_physical_devices('GPU')
print("GPU devices:", gpu_devices)
if len(gpu_devices) > 0:
tf.config.experimental.set_memory_growth(gpu_devices[0], True)
print("Memory growth:", tf.config.experimental.get_memory_growth(gpu_devices[0]))
#Quick test
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
if on_colab:
print(f"Adding {unet_path} to sys.path")
import sys
sys.path.append(unet_path)
from main import *
do_training = True
if do_training:
command_line_args = [
"--mode", "training",
"--config", os.path.join(unet_path, "sessions/002/config.json"),
"--batch_size", "4",
"--print_model_summary", "True",
"--use_zipped_dataset"
]
else:
command_line_args = [
"--mode", "inference",
"--config", os.path.join(unet_path, "sessions/001/config.json"),
#"--target_field_length", "16001",
"--batch_size", "4",
"--noisy_input_path", os.path.join(unet_path, "data/NSDTSEA/noisy_testset_wav"),
"--clean_input_path", os.path.join(unet_path, "data/NSDTSEA/clean_testset_wav"),
"--print_model_summary", "True"
]
set_system_settings()
cla = get_command_line_arguments(command_line_args)
config = load_config(cla.config)
config
with cd(unet_path):
print("Working dir:", os.getcwd())
if cla.mode == 'training':
training(config, cla)
elif cla.mode == 'inference':
inference(config, cla)
```
```
# author: Keerthi Sravan Ravi
# date: 21/12/2017
import os
import numpy as np
import matplotlib.pyplot as plt
import keras
import keras.backend as kb
from keras.models import Sequential, load_model
from keras.layers import Dense, Conv2D, Conv2DTranspose, Flatten, Reshape, BatchNormalization, MaxPool2D
from keras.optimizers import adam
from keras.initializers import Constant, glorot_uniform
from keras.utils import plot_model
import sklearn.model_selection
import scipy.misc
%matplotlib inline
```
## Data loading and saving methods
---
#### 1. load_data_from_disk():
- Load spiral k-space trajectories (x data) from folder `imagenet_64_kspace`
- Load reconstructed images (y data) from folder `imagenet_64_orig`
#### 2. save_data_as_npy(arr_path_dict)
For faster access to data, we save the data as `.npy` files.
#### 3. load_data_from_npy(*paths)
If we have already saved the data as `.npy` files, we load those instead of calling `load_data_from_disk()`.
#### 4. gen_train_test_data(x, y)
Split data into training and testing sets, in the following proportions:
- Train: 95%
- Test: 5%
```
def load_data_from_disk(root_path):
"""
Load x and y data from 'imagenet_64_orig' and 'imagenet_64_kspace'.
Typically, these two folders should be in the same folder as this code.
"""
# Find data from 'imagenet_64_orig' and 'imagenet_64_kspace'
src_imagenet_orig = os.path.join(root_path, 'imagenet_64_orig')
src_imagenet_kspace = os.path.join(root_path, 'imagenet_64_kspace')
all_folders = ['animals','artefact','flora','fungus','geo','people', 'sports1']
print('Folders to load from: {}'.format(all_folders))
x, y = [], []
for curr_folder in all_folders:
print('Currently in folder \'{}\''.format(curr_folder))
img_folder = os.path.join(src_imagenet_orig, curr_folder)
kspace_folder = os.path.join(src_imagenet_kspace, curr_folder)
num_files = len(os.listdir(kspace_folder))
for i in range(1, num_files):
img_file = str(i) + '.jpg'
kspace_file = str(i) + '.npy'
img_path = os.path.join(img_folder, img_file)
kspace_path = os.path.join(kspace_folder, kspace_file)
img = scipy.misc.imread(img_path)
img_size = img.size
# Skip loading null image
# If more than 90% of the image is white pixels, then it is a null image
null_pixels = len(np.where(img >= 255)[0])
if null_pixels >= 0.9 * img_size:
continue
kspace = np.load(kspace_path)
# Normalize k-space data
kspace = 127 * kspace/np.amax(np.abs(kspace))
kspace_real = np.real(kspace)
kspace_imag = np.imag(kspace)
# Convert to int16 to save memory
kspace = np.stack((kspace_real, kspace_imag), axis=2).astype('int16')
x.append(kspace)
y.append(img)
print('Done')
return x, y
def save_data_as_npy(arr_path_dict):
"""
Save data to .npy files.
Parameters:
arr_path_dict : dict
Key-value pairs of the save paths and the array to be saved
"""
for path, arr in arr_path_dict.items():
np.save(path, arr)
def load_data_from_npy(*paths):
"""
Load data from .npy files.
Parameters:
paths : list
List of paths to load .npy arrays from.
Returns:
An array of the .npy files that were loaded from the specified paths.
"""
arrs = []
for _path in paths:
print('Loading {}'.format(_path))
arrs.append(np.load(_path))
print('Done.')
return arrs
def gen_train_test_data(x, y):
"""
Generate 95% training and 5% testing data.
Parameters:
x : ndarray
X data
y : ndarray
Y data
Returns:
A tuple of x_train, x_test, y_train and y_test
"""
x = x.reshape(x.shape[0], -1)
y = y.reshape(y.shape[0], -1)
x_tr, x_ts, y_tr, y_ts = sklearn.model_selection.train_test_split(np.array(x), np.array(y), train_size=0.95)
print('Training dataset contains {} images'.format(len(x_tr)))
print('Testing dataset contains {} images'.format(len(y_ts)))
return x_tr, x_ts, y_tr, y_ts
# Define the callback for normalizing losses
class NormalizeLosses_Callback(keras.callbacks.Callback):
def __init__(self):
self.max_loss = 0
self.all_losses = []
def on_epoch_end(self, epoch, logs={}):
# We want to record the maximum loss across epochs in this run to normalize losses
loss = logs.get('loss')
self.max_loss = loss if loss > self.max_loss else self.max_loss
self.all_losses.append(loss / self.max_loss)  # append in place; list.append returns None
return
```
## Create, save and load Keras models
---
#### 1. get_keras_model(return_modified_model=False)
Returns the Keras Sequential model (pass `return_modified_model=True` for the variant with an extra convolution block).
#### 2. save_keras_model_to_disk(model, path)
Save Keras model to disk.
#### 3. load_keras_model_from_disk(path)
Load Keras model from path
```
def get_keras_model(return_modified_model=False):
"""
Parameters:
return_modified_model : boolean
Boolean flag to indicate whether the model returned is original or modified.
Returns:
Keras Sequential model with the following layers, in order:
- Dense (64 * 64) X 3
- Reshape
- { Conv2D(5, 64)
- BatchNormalization } X 3 (4 if return_modified_model is True)
- Conv2DTranspose(5, 64)
- BatchNormalization
- Flatten
"""
# Clear current Keras session, and create a Sequential model
kb.clear_session()
model = Sequential()
# Constant bias initializer (value = 0.1), and
# Glorot uniform kernel initializer
bias_initializer = Constant(value=0.1)
kernel = glorot_uniform()
size = 64 * 64
# Fully connected layers 1, 2 and 3
model.add(Dense(size, use_bias=True, activation='relu', bias_initializer=bias_initializer,
input_shape=(32480,)))
model.add(Dense(size, use_bias=True, activation='relu', bias_initializer=bias_initializer))
model.add(Dense(size, use_bias=True, activation='relu', bias_initializer=bias_initializer))
# Reshape the outputs so far to prepare for convolutional processing
model.add(keras.layers.Reshape((64, 64, 1)))
# Convolution layers 1, 2 and 3 (and 4 if return_modified_model=True)
filter_size = 5
num_filters = 64
model.add(Conv2D(num_filters, filter_size, kernel_initializer=kernel, activation='relu',
use_bias=True, bias_initializer=bias_initializer, padding='same'))
model.add(BatchNormalization(center=True, scale=True))
model.add(Conv2D(num_filters, filter_size, kernel_initializer=kernel, activation='relu',
use_bias=True, bias_initializer=bias_initializer, padding='same'))
model.add(BatchNormalization(center=True, scale=True))
model.add(Conv2D(num_filters, filter_size, kernel_initializer=kernel, activation='relu',
use_bias=True, bias_initializer=bias_initializer, padding='same'))
model.add(BatchNormalization(center=True, scale=True))
if return_modified_model:
model.add(Conv2D(num_filters, filter_size, kernel_initializer=kernel, activation='relu',
use_bias=True, bias_initializer=bias_initializer, padding='same'))
model.add(BatchNormalization(center=True, scale=True))
# Deconvolution layer 1
num_deconv_filters = 1
model.add(Conv2DTranspose(num_deconv_filters, filter_size, kernel_initializer=kernel, activation='relu',
use_bias=True, bias_initializer=bias_initializer, padding='same'))
model.add(BatchNormalization(center=True, scale=True))
# Flatten the outputs
model.add(Flatten())
# Define optimizer, instantiate loss-callback, and compile
model.compile(optimizer=adam(lr=5e-3), metrics=['acc'], loss='mean_squared_error')
return model
def save_keras_model_to_disk(model, path):
"""
Save Keras model to disk.
Parameters:
model : Sequential
Keras Sequential model to save.
path : str
Path to save Keras model to.
"""
save_path = os.path.join(path, 'keras_model.h5')
model.save(save_path)
def load_keras_model_from_disk(path):
"""
Load Keras model from disk.
Parameters:
path : str
Path to load Keras model from.
"""
return load_model(path)
```
## Plot predictions
---
Run the network on, say, 12 k-space data samples (from `x_testing`). Plot the reconstructions versus the ground truth (from `y_testing`).
```
def plot_predictions_vs_truth(model, x_ts, y_ts):
"""
Plot 12 predictions (x_testing) and ground truths (y_testing).
Parameters:
model : Sequential
A Keras Sequential model used for predictions.
x_ts : ndarray
X testing data
y_ts : ndarray
Y testing data
"""
# K-space samples: 58 shots of 280 samples each, split into real and imaginary parts
kspace_size = 280 * 58 * 2
# Number of predictions
num_predictions = 12
# Random index of sample to predict
starting_index = np.random.randint(len(x_ts) - num_predictions)
plot_index = 1
plt.figure(num=1, figsize=(14, 6))
plt.suptitle('Reconstructions', fontsize=16)
for x in x_ts[starting_index : starting_index + num_predictions]:
x = x.reshape(-1, kspace_size)
y_pred = model.predict(x)
y_pred = y_pred.reshape((64, 64))
# Plot 2 rows, 6 columns
plt.subplot(2, 6, plot_index)
plt.imshow(y_pred)
plot_index+=1
img_flat_shape = (64, 64)
plot_index = 1
plt.figure(num=2, figsize=(14, 6))
plt.suptitle('Ground truths', fontsize=16)
for y in y_ts[starting_index : starting_index + num_predictions]:
y = y.reshape(img_flat_shape)
# Plot 2 rows, 6 columns
plt.subplot(2, 6, plot_index)
plt.imshow(y)
plot_index+=1
```
## Running the network
---
Now, run everything that we've made!
```
# Load data and generate training and testing sets
# x, y = load_data_from_npy('x.npy', 'y.npy')
x, y = load_data_from_disk('<insert-path-here>')
x_tr, x_ts, y_tr, y_ts = gen_train_test_data(x, y)
# Instantiate a normalized-loss callback, and get a Keras model
loss_callback = NormalizeLosses_Callback()
model = get_keras_model()
print(model.summary())
# Fit the model, passing the normalized-loss callback we just created
model.fit(x_tr, y_tr, batch_size=32, epochs=1, validation_data=(x_ts, y_ts), callbacks=[loss_callback])
# Plot 12 predictions and their ground truths
plot_predictions_vs_truth(model, x_ts, y_ts)
```
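The save/load helpers defined earlier are never exercised in this run cell. Here is a hedged usage sketch; the file paths are placeholders, not paths from the original project:
```
# Placeholder paths -- adapt to your environment.
save_data_as_npy({'x.npy': x_tr, 'y.npy': y_tr})       # cache arrays for faster reloads
save_keras_model_to_disk(model, '.')                    # writes ./keras_model.h5
model = load_keras_model_from_disk('keras_model.h5')    # restore the trained model later
```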
```
import local_models.local_models
import numpy as np
import matplotlib.pyplot as plt
import sklearn.linear_model
import sklearn.cluster
from importlib import reload
from ml_battery.utils import cmap
import matplotlib as mpl
import sklearn.datasets
import sklearn.decomposition
import logging
import ml_battery.log
import time
import os
import local_models.loggin
import local_models.TLS_models
import local_models.algorithms
import local_models.linear_projections
import sklearn.gaussian_process as gp
import patched_gpr
from gpr_utils import *
logger = logging.getLogger(__name__)
np.random.seed(1)
reload(local_models.local_models)
#reload(local_models.loggin)
#reload(local_models.TLS_models)
np.warnings.filterwarnings('ignore')
mpl.rcParams['figure.figsize'] = [16.0, 8.0]
mpl.rcParams['font.size'] = int(mpl.rcParams['figure.figsize'][1]*3)
import cycler
CB_color_cycle = ['#377eb8', '#ff7f00', '#4daf4a',
'#f781bf', '#a65628', '#984ea3',
'#999999', '#e41a1c', '#dede00']
mpl.rcParams['axes.prop_cycle'] = cycler.cycler('color', CB_color_cycle)
RUN = 4
HZ = 1
project_dir = "../data/local_gpr_contrived_variance_{:02d}".format(RUN)
os.makedirs(project_dir, exist_ok=1)
```
## Variable Noise
```
X = np.linspace(1,1000,1000).reshape((-1,1))
index = local_models.local_models.ConstantDistanceSortedIndex(X.flatten())
y = 10*np.sin(X/50)
y[:450,0] += np.random.normal(0,1,450)
y[450:550,0] += np.random.normal(0,5,100)
y[550:,0] += np.random.normal(0,1,450)
change_points = np.array([450,550])
bandwidth = 120
lm_kernel = local_models.local_models.TriCubeKernel(bandwidth=bandwidth)
plt.plot(X,y)
plt.show()
kernel = np.sum((
#gp.kernels.ConstantKernel(constant_value=1.0, constant_value_bounds=[0.001,100]),
np.prod((
gp.kernels.ConstantKernel(constant_value=10, constant_value_bounds=[1e-10,1e10]),
gp.kernels.RBF(length_scale=10., length_scale_bounds=[1e-10,1e10])
)),
gp.kernels.WhiteKernel(noise_level=10, noise_level_bounds=[1e-10,1e10])
))
exemplar_regressor = GPR(kernel=kernel, normalize_y=True, n_restarts_optimizer=400, alpha=0)
exemplar_rng = (bandwidth, 3*bandwidth-2)
exemplar_X = X[slice(*exemplar_rng)]
exemplar_y = y[slice(*exemplar_rng)]
exemplar_regressor.fit(
exemplar_X,
exemplar_y,
sample_weight = lm_kernel(np.abs(exemplar_X - np.mean(exemplar_X)))[:,0])
np.exp(exemplar_regressor.kernel_.theta)
plt.plot(exemplar_X,exemplar_y)
plt.plot(exemplar_X,exemplar_regressor.predict(exemplar_X),c='r')
plt.savefig(os.path.join(project_dir, "exemplar_variable_noise_b{:07.02f}_rng{}.png".format(bandwidth, str(exemplar_rng))))
FRESH=True
dat_dir = os.path.join(project_dir, "dat_variable_noise_01")
os.makedirs(dat_dir, exist_ok=1)
cvs = 2.**np.arange(-2,16)
rbfs = 2.**np.arange(-3,10)*HZ
for cv in cvs:
for rbf in rbfs:
kernel = np.sum((
np.prod((
gp.kernels.ConstantKernel(constant_value=cv, constant_value_bounds="fixed"),
gp.kernels.RBF(length_scale=rbf, length_scale_bounds="fixed")
)),
gp.kernels.WhiteKernel(noise_level=cv, noise_level_bounds=[1e-10,1e10])
))
regressor = GPR(kernel=kernel, normalize_y=True, n_restarts_optimizer=0, alpha=0)
gpr_models = local_models.local_models.LocalModels(regressor)
print(cv, rbf)
gpr_models.fit(X, y, index=index)
gpr_params = gpr_models.transform(X,
r=lm_kernel.support_radius()-1, weighted=True, kernel=lm_kernel,
neighbor_beta0s=False, batch_size=int(X.shape[0]/10))
filename = os.path.join(project_dir, "c{:10.02f}_r{:05.02f}_k{}.png".format(kernel.k1.k1.constant_value, kernel.k1.k2.length_scale, lm_kernel))
plt_gpr_params(X/HZ, y,
X/HZ, gpr_params,
chg_ptses=[change_points/HZ],
filename=filename, kernel=kernel, display=True)
#illustrative_cv_rbf_pairs = [(32,8),(64,2),(64,512),(.25,.12),(.25,2)]
illustrative_cv_rbf_pairs = [(64,512)]
gpr_paramses = []
for cv, rbf in illustrative_cv_rbf_pairs:
kernel = np.sum((
np.prod((
gp.kernels.ConstantKernel(constant_value=cv, constant_value_bounds=[1e-10,1e10]),
gp.kernels.RBF(length_scale=rbf, length_scale_bounds=[1e-10,1e10])
)),
gp.kernels.WhiteKernel(noise_level=cv, noise_level_bounds=[1e-10,1e10])
))
regressor = GPRNeighborFixedMixt(kernel=kernel, normalize_y=True, n_restarts_optimizer=0, alpha=0)
gpr_models = local_models.local_models.LocalModels(regressor)
print(cv, rbf)
gpr_models.fit(X, y, index=index)
gpr_params = gpr_models.transform(X,
r=lm_kernel.support_radius()-1, weighted=True, kernel=lm_kernel,
neighbor_beta0s=True, batch_size=int(X.shape[0]/10))
gpr_paramses.append(gpr_params)
gpr_paramses = np.stack(gpr_paramses, axis=2)
gpr_paramses.shape
filename = os.path.join(project_dir, "illustrative_pairs.png".format(kernel.k1.k1.constant_value, kernel.k1.k2.length_scale, lm_kernel))
plt_gpr_params(X/HZ, y,
X/HZ, gpr_paramses,
chg_ptses=[change_points/HZ],
filename=filename, kernel=kernel, display=True)
```
```
# Widget related imports
import ipywidgets as widgets
from IPython.display import display, clear_output, Javascript
from traitlets import Unicode
# nbconvert related imports
from nbconvert import get_export_names, export_by_name
from nbconvert.writers import FilesWriter
from nbformat import read, NO_CONVERT
from nbconvert.utils.exceptions import ConversionException
```
This notebook shows a really roundabout way to get the name of the notebook file using widgets. The true purpose of this demo is to demonstrate how Javascript and Python widget models are related by `id`.
Create a text Widget without displaying it. The widget will be used to store the notebook's name which is otherwise only available in the front-end.
```
notebook_name = widgets.Text()
```
Get the current notebook's name by pushing JavaScript to the browser that sets the notebook name in a string widget.
```
js = """IPython.notebook.kernel.widget_manager.get_model('%s').then(function(model) {
model.set('value', IPython.notebook.notebook_name);
model.save();
});
""" % notebook_name.model_id
display(Javascript(data=js))
filename = notebook_name.value
filename
```
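Note (an added caveat, not part of the original demo): the value is pushed back from the front-end asynchronously, so `filename` can come up empty if it is read too soon. Re-reading the widget value in a later cell should pick it up.
```
# Re-read the widget value once the front-end has synced it back to the kernel
filename = notebook_name.value
filename
```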
Create the widget that will allow the user to Export the current notebook.
```
exporter_names = widgets.Dropdown(options=get_export_names(), value='html')
export_button = widgets.Button(description="Export")
download_link = widgets.HTML(visible=False)
```
Export the notebook when the export button is clicked.
```
file_writer = FilesWriter()
def export(name, nb):
# Get a unique key for the notebook and set it in the resources object.
notebook_name = name[:name.rfind('.')]
resources = {}
resources['unique_key'] = notebook_name
resources['output_files_dir'] = '%s_files' % notebook_name
# Try to export
try:
output, resources = export_by_name(exporter_names.value, nb)
except ConversionException as e:
download_link.value = "<br>Could not export notebook!"
else:
write_results = file_writer.write(output, resources, notebook_name=notebook_name)
download_link.value = "<br>Results: <a href='files/{filename}'><i>\"{filename}\"</i></a>".format(filename=write_results)
download_link.visible = True
def handle_export(widget):
with open(filename, 'r') as f:
export(filename, read(f, NO_CONVERT))
export_button.on_click(handle_export)
```
Display the controls.
```
display(exporter_names, export_button, download_link)
```
```
from IPython.core.display import HTML, Image
css_file = 'style.css'
HTML(open(css_file, 'r').read())
from sympy import init_printing, Matrix, symbols
from IPython.display import Image
from warnings import filterwarnings
init_printing(use_latex = 'mathjax')
filterwarnings('ignore')
```
# Projection matrices and least squares

## Least squares
* Consider from the previous lecture the three data points in the plane
$$ ({t}_{i},{y}_{i}) =(1,1), (2,2),(3,2) $$
* From this we need to construct a straight line
* This could be helpful in, say, statistics (remember, though, that in statistics we might have to get rid of statistical outliers)
* Nonetheless (see the image above) we note that we have a straight line in slope-intercept form
$$ {y}={C}+{Dt} $$
* On the line at *t* values of 1, 2, and 3 we will have
$$ {y}_{1}={C}+{D}=1 \\ {y}_{2}={C}+{2D}=2 \\ {y}_{3}={C}+{3D}=2 $$
* The actual *y* values at these *t* values are 1, 2, and 2, though
* We are thus including an error of
$$ \delta{y} \\ { \left( { e }_{ 1 } \right) }^{ 2 }={ \left[ \left( C+D \right) -1 \right] }^{ 2 }\\ { \left( { e }_{ 2 } \right) }^{ 2 }={ \left[ \left( C+2D \right) -2 \right] }^{ 2 }\\ { \left( { e }_{ 3 } \right) }^{ 2 }={ \left[ \left( C+3D \right) -2 \right] }^{ 2 } $$
* Since some are positive and some are negative (actual values below or above the line), we simply determine the square (which will always be positive)
* Adding the (three in our example here) squares gives the total error (which is actually just the square of the distance between the line and the actual *y* values)
* The line will be the best fit when this error sum is at a minimum (hence *least squares*)
* We can do this with calculus or with linear algebra
* For calculus we take the partial derivatives of both unknowns and set to zero
* For linear algebra we project orthogonally onto the columnspace (hence minimizing the error)
* Note that the solution **b** does not exist in the columnspace (it is not a linear combination of the columns)
### Calculus method
* We'll create a function *f*(C,D), take the partial derivative with respect to each variable, and set it to zero
* We will then have two equations with two unknowns to solve (which is easy enough to do manually or by simple linear algebra and row reduction)
```
C, D = symbols('C D')
e1_squared = ((C + D) - 1) ** 2
e2_squared = ((C + 2 * D) - 2) ** 2
e3_squared = ((C + 3 * D) - 2) ** 2
f = e1_squared + e2_squared + e3_squared
f
f.expand() # Expanding the expression
```
* Doing the partial derivatives will be
$$ f\left( C,D \right) =3{ C }^{ 2 }+12CD-10C+14{ D }^{ 2 }-22D+9\\ \frac { \partial f }{ \partial C } =6C+12D-10=0\\ \frac { \partial f }{ \partial D } =12C+28D-22=0 $$
```
f.diff(C) # Taking the partial derivative with respect to C
f.diff(D) # Taking the partial derivative with respect to D
```
* Setting both equal to zero (and creating a simple augmented matrix) we get
$$ 6C+12D-10=0\\ 12C+28D-22=0\\ \therefore \quad 6C+12D=10\\ \therefore \quad 12C+28D=22 $$
```
A_augm = Matrix([[6, 12, 10], [12, 28, 22]])
A_augm
A_augm.rref() # Doing a Gauss-Jordan elimination to reduced row echelon form
```
* We now have a solution
$$ {y}=\frac{2}{3} + \frac{1}{2}{t}$$
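As a quick sanity check (a small addition, not part of the original lecture), we can substitute this solution back into both partial derivatives and confirm that each evaluates to zero:
```
from sympy import Rational
# Substitute C = 2/3 and D = 1/2 into the two partial derivatives; both should be 0
sol = {C: Rational(2, 3), D: Rational(1, 2)}
f.diff(C).subs(sol), f.diff(D).subs(sol)
```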
### Linear algebra
* We note that we can construct the following
$$ {C}+{1D}={1} \\ {C}+{2D}={2} \\ {C}+{3D}={2} \\ {C}\begin{bmatrix} 1 \\ 1\\ 1 \end{bmatrix}+{D}\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}=\begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix} \\ A\underline { x } =\underline { b } \\ \begin{bmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 3 \end{bmatrix}\begin{bmatrix} C \\ D \end{bmatrix}=\begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix} $$
* **b** is not in the columnspace of A and we have to do orthogonal projection
$$ { A }^{ T }A\hat { x } ={ A }^{ T }\underline { b } \\ \hat { x } ={ \left( { A }^{ T }A \right) }^{ -1 }{ A }^{ T }\underline { b } $$
```
A = Matrix([[1, 1], [1, 2], [1, 3]])
b = Matrix([1, 2, 2])
A, b # Showing the two matrices
x_hat = (A.transpose() * A).inv() * A.transpose() * b
x_hat
```
* Again, we get the same values for C and D
* Remember the following
$$ \underline{b} = \underline{p}+\underline{e} $$
* **p** and **e** are perpendicular
* Indeed **p** is in the columnspace of A and **e** is perpendicular to the columnspace (or any vector in the columnspace), as the short check below illustrates
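A short illustration of this claim (added here, not part of the original notes): compute **p** and **e** from the values above and verify that $ {A}^{T}\underline{e}=0 $
```
# p is the projection of b onto the columnspace of A; e is the residual
p = A * x_hat
e = b - p
A.transpose() * e  # should be the zero vector
```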
## Example problem
### Example problem 1
* Find the quadratic (second order polynomial) equation through the origin, with the following data points: (1,1), (2,5) and (-1,-2)
#### Solution
* Let's just think about a quadratic equation in *y* and *t*
$$ {y}={c}_{1} +{C}{t}+{D}{t}^{2} $$
* Through the origin (0,0) means *y* = 0 and *t* = 0, thus we have
$$ {0}={c}_{1} +{C}{0}+{D}{0}^{2} \\ {c}_{1}=0 \\ {y}={C}{t}+{D}{t}^{2} $$
* This gives us three equation for our three data points
$$ C\left( 1 \right) +D{ \left( 1 \right) }^{ 2 }=1\\ C\left( 2 \right) +D{ \left( 2 \right) }^{ 2 }=5\\ C\left( -1 \right) +D{ \left( -1 \right) }^{ 2 }=-2\\ C\begin{bmatrix} 1 \\ 2 \\ -1 \end{bmatrix}+D\begin{bmatrix} 1 \\ 4 \\ 1 \end{bmatrix}=\begin{bmatrix} 1 \\ 5 \\ -2 \end{bmatrix}\\ A=\begin{bmatrix} 1 & 1 \\ 2 & 4 \\ -1 & 1 \end{bmatrix}\\ \underline { x } =\begin{bmatrix} C \\ D \end{bmatrix}\\ \underline { b } =\begin{bmatrix} 1 \\ 5 \\ -2 \end{bmatrix} $$
* Clearly **b** is not in the columnspace of A and we have to project orthogonally onto the columnspace using
$$ \hat { x } ={ \left( { A }^{ T }A \right) }^{ -1 }{ A }^{ T }\underline { b } $$
```
A = Matrix([[1, 1], [2, 4], [-1, 1]])
b = Matrix([1, 5, -2])
x_hat = (A.transpose() * A).inv() * A.transpose() * b
x_hat
```
* Here's a simple plot of the equation
```
import matplotlib.pyplot as plt # The graph plotting module
import numpy as np # The numerical mathematics module
%matplotlib inline
x = np.linspace(-2, 3, 100) # Creating 100 x-values
y = (41 / 22) * x + (5 / 22) * x ** 2 # From the equation above
plt.figure(figsize = (8, 6)) # Creating a plot of the indicated size
plt.plot(x, y, 'b-') # Plot the equation above, in essence 100 little plots using small segments of blue lines
plt.plot(1, 1, 'ro') # Plot the point in a red dot
plt.plot(2, 5, 'ro')
plt.plot(-1, -2, 'ro')
plt.plot(0, 0, 'gs') # Plot the origin as a green square
plt.show(); # Create the plot
```
# 2. Data Tables in Python
```
import pandas as pd
df = pd.DataFrame({
'StudentID' : [264422,264423,264444,264445,264446], # StudentID column and its data
'FirstName' : ['Steven','Alex','Bill','Mark','Bob'], # FirstName column and its data
'EnrolYear' : [2010,2010,2011,2011,2013], # Enrolment year column and its data
'Math' : [100,90,90,40,60], # Mathematics score column and its data
'English' : [60,70,80,80,60] # English score column and its data
})
df
```
#### Practice:
Create another data table called "df2" that contains the height of some of the students. It should contain the following data:
```
df2 = pd.DataFrame({
'StudentID' : [264422,264423,264444,264445],
'Height' : [160,155,175,175],
})
df2
df['FirstName']
df.FirstName
df[df['FirstName'] == 'Alex']
```
#### Practice :
Show the details of the students who enrolled in the year 2011.
```
df[df.EnrolYear==2011]
filt = (df.FirstName != 'Bob') & (df.StudentID > 264423)
df[filt]
# Add new column
df['Total'] = df['Math'] + df['English']
df
```
#### Practice :
Add a new column showing the average mark of "Math" and "English" marks for each student. (Average = Total/number of subjects)
```
df['AvgGrades']=(df.English+df.Math)/2
#you can use the newly generated column, Total, as well.
# df['AvgGrades'] = df.Total/2
df
```
#### Practice :
Practice 5: Write a filter to select students who have scored more than 150 in total and who have achieved a subject score (in Maths or English) of 90 or more.
```
df[(df.Total>150) & ((df.English>=90) |( df.Math>=90))]
```
In our current table we have two columns "Math" and "English", both of which contain student marks. We can split the information for each student into different rows using the "melt" command, as follows:
```
df = pd.melt(df, id_vars=['EnrolYear','FirstName','StudentID'],value_vars=['Math','English'],var_name='Subject')
df
df.rename(columns = {'value':'Score'}, inplace = True)
df
df3 = pd.merge(df, df2, on=['StudentID'])
df3
```
#### Practice:
What difference can you see after merging the two datasets?
Print out the merged table. Based on the output, explain what you understand about the parameter on=['StudentID'].
```
print(df)
print(df2)
```
Student ID 264446 does not exist in df3 because that ID does not exist in one of the data frames (df2).
#### Practice :
Looking closely at the data in the resulting DataFrame, is there a student missing after the merge? How can you display all students?
```
df4 = pd.merge(df, df2, on=['StudentID'],how='left')
df4
```
### Reading CSV and Excel files into DataFrames
```
ufo_reports = pd.read_csv('uforeports.csv')
ufo_reports.head() # print the first 5 records
```
#### Practice :
How can we display the last 5 records? How can we display the last record only?
```
ufo_reports.tail(5)
ufo_reports.tail(1)
# read excel
ufo_reports_xls = pd.read_excel('uforeports_excel.xls', sheet_name='uforeports')
print(ufo_reports.Time.dtypes)
print(ufo_reports_xls.Time.dtypes)
```
Notice that the Time columns in the two DataFrames are indeed different. In the table read in from the CSV file, the Time column is just an object, while for the XLS file it was given a datetime format. We can change the format so they are consistent:
```
ufo_reports.Time = pd.to_datetime(ufo_reports.Time) # change to datetime format
print (ufo_reports.Time.dtypes) # print its datatype now
ufo_reports.head() # check whether it looks the same as the other dataframe
```
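Alternatively (a hedged alternative, not part of the original worksheet), the conversion can be done while reading the file by passing `parse_dates` to `read_csv`:
```
# Parse the Time column as datetime at read time instead of converting afterwards
ufo_reports = pd.read_csv('uforeports.csv', parse_dates=['Time'])
ufo_reports.Time.dtypes
```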
#### Practice:
Write a statement to read in titanic data (titanic.csv) into a pandas DataFrame called ‘titanic’. Then print out the first few rows.
```
titanic=pd.read_csv('titanic.csv')
titanic.head(5)
print(titanic.shape)
```
How many rows and columns are there?
891 rows and 15 columns.
```
titanic.dtypes
titanic.describe()
titanic.age.value_counts().head()
```
# Group By
```
titanic['age'].mean()
titanic['age'].max()
titanic['age'].min()
titanic['age'].sum() # total years
titanic['age'].std() # standard deviation
titanic['age'].median() # half of the people were older (/younger) than ...
titanic['age'].mode() # most common age
sex_class = titanic.groupby(['sex','class'])['age']
sex_class.mean()
```
#### Practice:
Interpret the output, e.g. what do you notice about the average age of the Titanic's passengers in regard to their classes? How about the relationship between age and gender?
For both genders, the mean age increases with higher-ranked classes. Males have a higher mean age than females in all classes.
#### Practice:
Use other aggregation functions and groupings to determine which class had the oldest and youngest passengers and which gender had the largest amount of variation in age (standard deviation).
```
sex_class.min()
sex_class.max()
sex_class.std()
#If you want to repeat the group by without gender, you can use the following code and apply the above functions again
sex_class1 = titanic.groupby(['class'])['age']
```
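Another way to tackle this practice in one step (an illustrative sketch, not from the original worksheet) is to pass several aggregation functions to `agg` at once:
```
# min, max, mean and standard deviation of age per (sex, class) in a single table
titanic.groupby(['sex', 'class'])['age'].agg(['min', 'max', 'mean', 'std'])
```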
# 2. Mapping Previous Test Results
### Calum Macdonald
## Create mapping indices to reorder Tia's results in h5 test set ordering
```
import sys
import os
import time
import math
import random
import pdb
import h5py
import numpy as np
from progressbar import *
from tqdm.notebook import tnrange
# Dictionary mapping the ordinal labels to particle types
LABEL_DICT = {0:"gamma", 1:"e", 2:"mu"}
# Fix the colour scheme for each particle type
COLOR_DICT = {"gamma":"red", "e":"blue", "mu":"green"}
```
### Load Tia's data
```
fprs = []
tprs = []
thresholds = []
run_id = "/resnet_results"
dump_dir = "/data/WatChMaL/data"
dump_file = "/test_validation_iteration_dump.npz"
softmax_index_dict = {value:key for key, value in LABEL_DICT.items()}
test_dump_path = dump_dir + run_id + dump_file
test_dump_np = np.load(test_dump_path, allow_pickle=True)
res_predictedlabels = np.concatenate(list([batch_array for batch_array in test_dump_np['predicted_labels']]))
res_softmaxes = np.concatenate(list([batch_array for batch_array in test_dump_np['softmax']]))
res_labels = np.concatenate(list([batch_array for batch_array in test_dump_np['labels']]))
res_energies = np.concatenate(list([batch_array for batch_array in test_dump_np['energies']]))
res_rootfiles = np.concatenate(list([batch_array for batch_array in test_dump_np['rootfiles']]))
res_eventids = np.concatenate(list([batch_array for batch_array in test_dump_np['eventids']]))
#res_positions = test_dump_np['positions'].reshape(-1)
res_angles = np.concatenate(list([batch_array for batch_array in test_dump_np['angles']]))
```
### Load h5 data
```
# Import test events from h5 file
filtered_index = "/fast_scratch/WatChMaL/data/IWCD_fulltank_300_pe_idxs.npz"
filtered_indices = np.load(filtered_index, allow_pickle=True)
test_filtered_indices = filtered_indices['test_idxs']
print(test_filtered_indices.shape)
original_data_path = "/data/WatChMaL/data/IWCDmPMT_4pi_fulltank_9M.h5"
f = h5py.File(original_data_path, "r")
hdf5_event_data = (f["event_data"])
original_eventdata = np.memmap(original_data_path, mode="r", shape=hdf5_event_data.shape,
offset=hdf5_event_data.id.get_offset(), dtype=hdf5_event_data.dtype)
original_eventids = np.array(f['event_ids'])
original_rootfiles = np.array(f['root_files'])
original_energies = np.array(f['energies'])
original_positions = np.array(f['positions'])
original_angles = np.array(f['angles'])
original_labels = np.array(f['labels'])
#filtered_eventdata = original_eventdata[test_filtered_indices]
filtered_eventids = original_eventids[test_filtered_indices]
filtered_rootfiles = original_rootfiles[test_filtered_indices]
filtered_energies = original_energies[test_filtered_indices]
filtered_positions = original_positions[test_filtered_indices]
filtered_angles = original_angles[test_filtered_indices]
filtered_labels = original_labels[test_filtered_indices]
```
### Create mapping indices
```
mapping_indices = np.array([])
for i in tnrange(filtered_rootfiles.shape[0]):
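    # Match on event id first, then keep the entry whose root file also matches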
id_index = np.where(res_eventids == filtered_eventids[i])[0]
mapping_indices = np.append(mapping_indices, id_index[np.where((res_rootfiles[id_index] == filtered_rootfiles[i]))[0]])
np.savez(os.path.join(os.getcwd(),'map_indices_previous_resnet'), mapping_indices)
```
### Check Them
```
# Note: the leading '/' makes os.path.join return the absolute path '/Index_Storage/...', not a path under the current directory
mapping_indices = np.load(os.path.join(os.getcwd(),'/Index_Storage/map_indices_previous_resnet.npz'),allow_pickle=True)['arr_0']
ordered_rn_eventids = res_eventids[mapping_indices.astype(int)]
ordered_rn_rootfiles = res_rootfiles[mapping_indices.astype(int)]
for i in tnrange(mapping_indices.shape[0]):
assert filtered_eventids[i] == ordered_rn_eventids[i]
assert filtered_rootfiles[i] == ordered_rn_rootfiles[i]
print('Success! We have mapping indices to reorder Tia\'s results.')
```
# Description
Generates manubot tables for PhenomeXcan and eMERGE associations given an LV name (which is the only parameter that needs to be specified in the Settings section below).
# Modules loading
```
%load_ext autoreload
%autoreload 2
import re
from pathlib import Path
import pandas as pd
from entity import Trait
import conf
```
# Settings
```
LV_NAME = "LV598"
assert (
conf.MANUSCRIPT["BASE_DIR"] is not None
), "The manuscript directory was not configured"
OUTPUT_FILE_PATH = conf.MANUSCRIPT["CONTENT_DIR"] / "50.00.supplementary_material.md"
display(OUTPUT_FILE_PATH)
assert OUTPUT_FILE_PATH.exists()
# result_set is either phenomexcan or emerge
LV_FILE_MARK_TEMPLATE = "<!-- {lv}:{result_set}_traits_assocs:{position} -->"
TABLE_CAPTION = "Table: Significant trait associations of {lv_name} in {result_set_name}. {table_id}"
TABLE_CAPTION_ID = "#tbl:sup:{result_set}_assocs:{lv_name_lower_case}"
RESULT_SET_NAMES = {
"phenomexcan": "PhenomeXcan",
"emerge": "eMERGE",
}
```
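For reference (an added illustration, not part of the original notebook), these are the placeholder markers that the template above produces and that the save steps below search for in the manuscript file:
```
# Illustrative only: the start/end markers for the PhenomeXcan table of LV598
print(LV_FILE_MARK_TEMPLATE.format(lv=LV_NAME, result_set="phenomexcan", position="start"))
print(LV_FILE_MARK_TEMPLATE.format(lv=LV_NAME, result_set="phenomexcan", position="end"))
# <!-- LV598:phenomexcan_traits_assocs:start -->
# <!-- LV598:phenomexcan_traits_assocs:end -->
```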
# Load data
## PhenomeXcan LV-trait associations
```
input_filepath = Path(conf.RESULTS["GLS"] / "gls_phenotypes-combined-phenomexcan.pkl")
display(input_filepath)
phenomexcan_lv_trait_assocs = pd.read_pickle(input_filepath)
phenomexcan_lv_trait_assocs.shape
phenomexcan_lv_trait_assocs.head()
```
## eMERGE LV-trait associations
```
input_filepath = Path(conf.RESULTS["GLS"] / "gls_phenotypes-combined-emerge.pkl")
display(input_filepath)
emerge_lv_trait_assocs = pd.read_pickle(input_filepath)
emerge_lv_trait_assocs.shape
emerge_lv_trait_assocs.head()
```
## eMERGE traits info
```
input_filepath = conf.EMERGE["DESC_FILE_WITH_SAMPLE_SIZE"]
display(input_filepath)
emerge_traits_info = pd.read_csv(
input_filepath,
sep="\t",
dtype={"phecode": str},
usecols=[
"phecode",
"phenotype",
"category",
"eMERGE_III_EUR_case",
"eMERGE_III_EUR_control",
],
)
emerge_traits_info = emerge_traits_info.set_index("phecode")
emerge_traits_info = emerge_traits_info.rename(
columns={
"eMERGE_III_EUR_case": "eur_n_cases",
"eMERGE_III_EUR_control": "eur_n_controls",
}
)
emerge_traits_info.shape
emerge_traits_info.head()
assert emerge_traits_info.index.is_unique
```
# Trait associations
## PhenomeXcan
```
from traits import SHORT_TRAIT_NAMES
result_set = "phenomexcan"
def get_trait_objs(phenotype_full_code):
if Trait.is_efo_label(phenotype_full_code):
traits = Trait.get_traits_from_efo(phenotype_full_code)
else:
traits = [Trait.get_trait(full_code=phenotype_full_code)]
# sort by sample size
return sorted(traits, key=lambda x: x.n_cases / x.n, reverse=True)
def get_trait_description(phenotype_full_code):
traits = get_trait_objs(phenotype_full_code)
desc = traits[0].description
if desc in SHORT_TRAIT_NAMES:
return SHORT_TRAIT_NAMES[desc]
return desc
def get_trait_n(phenotype_full_code):
traits = get_trait_objs(phenotype_full_code)
return traits[0].n
def get_trait_n_cases(phenotype_full_code):
traits = get_trait_objs(phenotype_full_code)
return traits[0].n_cases
def num_to_int_str(num):
if pd.isnull(num):
return ""
return f"{num:,.0f}"
def get_part_clust(row):
return f"{row.part_k} / {row.cluster_id}"
lv_assocs = phenomexcan_lv_trait_assocs[
(phenomexcan_lv_trait_assocs["lv"] == LV_NAME)
& (phenomexcan_lv_trait_assocs["fdr"] < 0.05)
].sort_values("fdr")
with pd.option_context(
"display.max_rows", None, "display.max_columns", None, "display.max_colwidth", None
):
display(lv_assocs)
lv_assocs = lv_assocs.assign(
phenotype_desc=lv_assocs["phenotype"].apply(get_trait_description)
)
lv_assocs = lv_assocs.assign(n=lv_assocs["phenotype"].apply(get_trait_n))
lv_assocs = lv_assocs.assign(n_cases=lv_assocs["phenotype"].apply(get_trait_n_cases))
lv_assocs = lv_assocs.assign(coef=lv_assocs["coef"].apply(lambda x: f"{x:.3f}"))
lv_assocs = lv_assocs.assign(
fdr=lv_assocs["fdr"].apply(lambda x: f"{x:.2e}".replace("-", "‑"))
)
lv_assocs = lv_assocs.assign(n=lv_assocs["n"].apply(num_to_int_str))
lv_assocs = lv_assocs.assign(n_cases=lv_assocs["n_cases"].apply(num_to_int_str))
lv_assocs = lv_assocs.assign(part_clust=lv_assocs.apply(get_part_clust, axis=1))
lv_assocs = lv_assocs.drop(columns=["phenotype"])
lv_assocs.shape
lv_assocs = lv_assocs[["phenotype_desc", "n", "n_cases", "part_clust", "fdr"]]
lv_assocs = lv_assocs.rename(
columns={
"part_clust": "Partition / cluster",
"lv": "Latent variable (LV)",
# "coef": r"$\beta$",
"fdr": "FDR",
"phenotype_desc": "Trait description",
"n": "Sample size",
"n_cases": "Cases",
}
)
with pd.option_context(
"display.max_rows", None, "display.max_columns", None, "display.max_colwidth", None
):
display(lv_assocs)
```
### Fill empty
```
if lv_assocs.shape[0] == 0:
lv_assocs.loc[0, "Trait description"] = "No significant associations"
lv_assocs = lv_assocs.fillna("")
```
### Save
```
# start
lv_file_mark_start = LV_FILE_MARK_TEMPLATE.format(
result_set=result_set, lv=LV_NAME, position="start"
)
display(lv_file_mark_start)
# end
lv_file_mark_end = LV_FILE_MARK_TEMPLATE.format(
result_set=result_set, lv=LV_NAME, position="end"
)
display(lv_file_mark_end)
new_content = lv_assocs.to_markdown(index=False, disable_numparse=True)
# add table caption
table_caption = TABLE_CAPTION.format(
lv_name=LV_NAME,
result_set_name=RESULT_SET_NAMES[result_set],
table_id="{"
+ TABLE_CAPTION_ID.format(result_set=result_set, lv_name_lower_case=LV_NAME.lower())
+ "}",
)
display(table_caption)
new_content += "\n\n" + table_caption
full_new_content = (
lv_file_mark_start + "\n" + new_content.strip() + "\n" + lv_file_mark_end
)
with open(OUTPUT_FILE_PATH, "r", encoding="utf8") as f:
file_content = f.read()
new_file_content = re.sub(
lv_file_mark_start + ".*?" + lv_file_mark_end,
full_new_content,
file_content,
flags=re.DOTALL,
)
with open(OUTPUT_FILE_PATH, "w", encoding="utf8") as f:
f.write(new_file_content) # .replace("\beta", r"\beta"))
```
## eMERGE
```
result_set = "emerge"
lv_assocs = emerge_lv_trait_assocs[
(emerge_lv_trait_assocs["lv"] == LV_NAME) & (emerge_lv_trait_assocs["fdr"] < 0.05)
].sort_values("fdr")
with pd.option_context(
"display.max_rows", None, "display.max_columns", None, "display.max_colwidth", None
):
display(lv_assocs)
lv_assocs = lv_assocs.assign(
phenotype_desc=lv_assocs["phenotype"].apply(
lambda x: emerge_traits_info.loc[x, "phenotype"]
)
)
lv_assocs = lv_assocs.assign(
n=lv_assocs["phenotype"].apply(
lambda x: emerge_traits_info.loc[x, ["eur_n_cases", "eur_n_controls"]].sum()
)
)
lv_assocs = lv_assocs.assign(
n_cases=lv_assocs["phenotype"].apply(
lambda x: emerge_traits_info.loc[x, "eur_n_cases"]
)
)
lv_assocs = lv_assocs.assign(coef=lv_assocs["coef"].apply(lambda x: f"{x:.3f}"))
lv_assocs = lv_assocs.assign(
fdr=lv_assocs["fdr"].apply(lambda x: f"{x:.2e}".replace("-", "‑"))
)
lv_assocs = lv_assocs.assign(n=lv_assocs["n"].apply(num_to_int_str))
lv_assocs = lv_assocs.assign(n_cases=lv_assocs["n_cases"].apply(num_to_int_str))
lv_assocs = lv_assocs.rename(columns={"phenotype": "phecode"})
lv_assocs.shape
lv_assocs = lv_assocs[["phecode", "phenotype_desc", "n", "n_cases", "fdr"]]
lv_assocs = lv_assocs.rename(
columns={
"lv": "Latent variable (LV)",
# "coef": r"$\beta$",
"fdr": "FDR",
"phecode": "Phecode",
"phenotype_desc": "Trait description",
"n": "Sample size",
"n_cases": "Cases",
}
)
with pd.option_context(
"display.max_rows", None, "display.max_columns", None, "display.max_colwidth", None
):
display(lv_assocs)
```
### Fill empty
```
if lv_assocs.shape[0] == 0:
lv_assocs.loc[0, "Phecode"] = "No significant associations"
lv_assocs = lv_assocs.fillna("")
```
### Save
```
# start
lv_file_mark_start = LV_FILE_MARK_TEMPLATE.format(
result_set=result_set, lv=LV_NAME, position="start"
)
display(lv_file_mark_start)
# end
lv_file_mark_end = LV_FILE_MARK_TEMPLATE.format(
result_set=result_set, lv=LV_NAME, position="end"
)
display(lv_file_mark_end)
new_content = lv_assocs.to_markdown(index=False, disable_numparse=True)
# add table caption
table_caption = TABLE_CAPTION.format(
lv_name=LV_NAME,
result_set_name=RESULT_SET_NAMES[result_set],
table_id="{"
+ TABLE_CAPTION_ID.format(result_set=result_set, lv_name_lower_case=LV_NAME.lower())
+ "}",
)
display(table_caption)
new_content += "\n\n" + table_caption
full_new_content = (
lv_file_mark_start + "\n" + new_content.strip() + "\n" + lv_file_mark_end
)
with open(OUTPUT_FILE_PATH, "r", encoding="utf8") as f:
file_content = f.read()
new_file_content = re.sub(
lv_file_mark_start + ".*?" + lv_file_mark_end,
full_new_content,
file_content,
flags=re.DOTALL,
)
with open(OUTPUT_FILE_PATH, "w", encoding="utf8") as f:
f.write(new_file_content) # .replace("\beta", r"\beta"))
```
# Character based RNN language model
(c) Deniz Yuret, 2019. Based on http://karpathy.github.io/2015/05/21/rnn-effectiveness.
* Objectives: Learn to define and train a character based language model and generate text from it. Minibatch blocks of text. Keep a persistent RNN state between updates. Train a Shakespeare generator and a Julia programmer using the same type of model.
* Prerequisites: [RNN basics](60.rnn.ipynb), [Iterators](25.iterators.ipynb)
* New functions:
[converge](http://denizyuret.github.io/Knet.jl/latest/reference/#Knet.converge)
```
# Set display width, load packages, import symbols
ENV["COLUMNS"]=72
using Pkg; haskey(Pkg.installed(),"Knet") || Pkg.add("Knet")
using Statistics: mean
using Base.Iterators: cycle
using IterTools: takenth
using Knet: Knet, AutoGrad, Data, param, param0, mat, RNN, dropout, value, nll, adam, minibatch, progress!, converge
```
## Define the model
```
struct Embed; w; end
Embed(vocab::Int,embed::Int)=Embed(param(embed,vocab))
(e::Embed)(x) = e.w[:,x] # (B,T)->(X,B,T)->rnn->(H,B,T)
struct Linear; w; b; end
Linear(input::Int, output::Int)=Linear(param(output,input), param0(output))
(l::Linear)(x) = l.w * mat(x,dims=1) .+ l.b # (H,B,T)->(H,B*T)->(V,B*T)
# Let's define a chain of layers
struct Chain
layers
Chain(layers...) = new(layers)
end
(c::Chain)(x) = (for l in c.layers; x = l(x); end; x)
(c::Chain)(x,y) = nll(c(x),y)
(c::Chain)(d::Data) = mean(c(x,y) for (x,y) in d)
# The h=0,c=0 options to RNN enable a persistent state between iterations
CharLM(vocab::Int,embed::Int,hidden::Int; o...) =
Chain(Embed(vocab,embed), RNN(embed,hidden;h=0,c=0,o...), Linear(hidden,vocab))
```
## Train and test utilities
```
# For running experiments
function trainresults(file,maker,chars)
if (print("Train from scratch? "); readline()[1]=='y')
model = maker()
a = adam(model,cycle(dtrn))
b = (exp(model(dtst)) for _ in takenth(a,100))
c = converge(b, alpha=0.1)
progress!(p->p.currval, c)
Knet.save(file,"model",model,"chars",chars)
else
isfile(file) || download("http://people.csail.mit.edu/deniz/models/tutorial/$file",file)
model,chars = Knet.load(file,"model","chars")
end
Knet.gc() # To save gpu memory
return model,chars
end
# To generate text from trained models
function generate(model,chars,n)
function sample(y)
p = Array(exp.(y)); r = rand()*sum(p)
for j=1:length(p); (r -= p[j]) < 0 && return j; end
end
x = 1
reset!(model)
for i=1:n
y = model([x])
x = sample(y)
print(chars[x])
end
println()
end
reset!(m::Chain)=(for r in m.layers; r isa RNN && (r.c=r.h=0); end);
```
## The Complete Works of William Shakespeare
```
RNNTYPE = :lstm
BATCHSIZE = 256
SEQLENGTH = 100
VOCABSIZE = 84
INPUTSIZE = 168
HIDDENSIZE = 334
NUMLAYERS = 1;
# Load 'The Complete Works of William Shakespeare'
include(Knet.dir("data","gutenberg.jl"))
trn,tst,shakechars = shakespeare()
map(summary,(trn,tst,shakechars))
# Print a sample
println(string(shakechars[trn[1020:1210]]...))
# Minibatch data
function mb(a)
N = length(a) ÷ BATCHSIZE
x = reshape(a[1:N*BATCHSIZE],N,BATCHSIZE)' # reshape full data to (B,N) with contiguous rows
minibatch(x[:,1:N-1], x[:,2:N], SEQLENGTH) # split into (B,T) blocks
end
dtrn,dtst = mb.((trn,tst))
length.((dtrn,dtst))
summary.(first(dtrn)) # each x and y have dimensions (BATCHSIZE,SEQLENGTH)
# [180, 06:58, 2.32s/i] 3.3026385
shakemaker() = CharLM(VOCABSIZE, INPUTSIZE, HIDDENSIZE; rnnType=RNNTYPE, numLayers=NUMLAYERS)
shakemodel,shakechars = trainresults("shakespeare132.jld2", shakemaker, shakechars);
#exp(shakemodel(dtst)) # Perplexity = 3.3150165f0
generate(shakemodel,shakechars,1000)
```
## Julia programmer
```
RNNTYPE = :lstm
BATCHSIZE = 64
SEQLENGTH = 64
INPUTSIZE = 512
VOCABSIZE = 128
HIDDENSIZE = 512
NUMLAYERS = 2;
# Read julia base library source code
base = joinpath(Sys.BINDIR, Base.DATAROOTDIR, "julia", "base")
text = ""
for (root,dirs,files) in walkdir(base)
for f in files
global text
f[end-2:end] == ".jl" || continue
text *= read(joinpath(root,f), String)
end
# println((root,length(files),all(f->contains(f,".jl"),files)))
end
length(text)
# Find unique chars, sort by frequency, assign integer ids.
charcnt = Dict{Char,Int}()
for c in text; charcnt[c]=1+get(charcnt,c,0); end
juliachars = sort(collect(keys(charcnt)), by=(x->charcnt[x]), rev=true)
charid = Dict{Char,Int}()
for i=1:length(juliachars); charid[juliachars[i]]=i; end
hcat(juliachars, map(c->charcnt[c],juliachars))
# Keep only VOCABSIZE most frequent chars, split into train and test
data = map(c->charid[c], collect(text))
data[data .> VOCABSIZE] .= VOCABSIZE
ntst = 1<<19
tst = data[1:ntst]
trn = data[1+ntst:end]
length.((data,trn,tst))
# Print a sample
r = rand(1:(length(trn)-1000))
println(string(juliachars[trn[r:r+1000]]...))
# Minibatch data
function mb(a)
N = length(a) ÷ BATCHSIZE
x = reshape(a[1:N*BATCHSIZE],N,BATCHSIZE)' # reshape full data to (B,N) with contiguous rows
minibatch(x[:,1:N-1], x[:,2:N], SEQLENGTH) # split into (B,T) blocks
end
dtrn,dtst = mb.((trn,tst))
length.((dtrn,dtst))
summary.(first(dtrn)) # each x and y have dimensions (BATCHSIZE,SEQLENGTH)
# [150, 05:04, 2.03s/i] 3.2988634
juliamaker() = CharLM(VOCABSIZE, INPUTSIZE, HIDDENSIZE; rnnType=RNNTYPE, numLayers=NUMLAYERS)
juliamodel,juliachars = trainresults("juliacharlm132.jld2", juliamaker, juliachars);
#exp(juliamodel(dtst)) # Perplexity = 3.8615866f0
generate(juliamodel,juliachars,1000)
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sb
%matplotlib inline
filepath = "C:\\Users\\abhijit.a.pande\\Machine Learning\\datasets\\lending_club_loan_two.csv"
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.callbacks import EarlyStopping
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.metrics import classification_report
from sklearn.preprocessing import MinMaxScaler
df = pd.read_csv(filepath)
df.info()
len(df['emp_title'].unique())
df = df.drop('emp_title', axis = 1)
df[df['revol_util'].isnull()].head()
def feat_info(col_name):
print (df[col_name].describe())
feat_info("mort_acc")
sb.countplot(df['loan_status'])
df['loan_amnt'].hist(bins = 30)
sb.distplot(df['loan_amnt'], bins = 40, kde = False)
df.corr()
plt.figure(figsize = (20,20))
plt.ylim(10,0)
sb.heatmap(df.corr(), annot = True, cmap = "viridis")
feat_info('installment')
feat_info('loan_amnt')
sb.scatterplot(x = 'installment', y = 'loan_amnt', data = df)
sb.boxplot(x = 'loan_status', y = 'loan_amnt', data = df)
df.groupby('loan_status').describe()
df.groupby('loan_status')['loan_amnt'].describe()
df['grade'].unique()
df['sub_grade'].unique()
sb.countplot('grade', data = df, hue = 'loan_status')
plt.figure(figsize=(10,6))
sb.countplot(df['sub_grade'])
plt.tight_layout()
sub_grade_order = sorted(df['sub_grade'].unique())
sb.countplot(x = 'sub_grade', order = sub_grade_order, hue = 'loan_status', data = df)
fg = df[(df['sub_grade']>'F') | (df['sub_grade']>'G')]
sb.countplot(x = fg['sub_grade'], hue = 'loan_status', data = fg)
def updloanstatus(loan_status):
if loan_status == "Fully Paid":
return 1
else:
return 0
df['loan_repaid']= df ['loan_status'].apply(updloanstatus)
df.corr()['loan_repaid'].sort_values().drop('loan_repaid').plot(kind = "bar")
df.describe()
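# --- Feature engineering: clean emp_length, impute mort_acc, and encode categorical columns ---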
df['emp_length'].unique()
def updemplen(emplen):
emplen = str(emplen).replace("nan", "0")
emplen = str(emplen).replace(" years", "")
emplen = str(emplen).replace(" year", "")
emplen = str(emplen).replace("< ", "")
emplen = str(emplen).replace("+", "")
return int(emplen)
df['emp_length'] = df['emp_length'].apply(updemplen)
df['emp_length'].count()
df.describe()
sb.countplot(x = 'emp_length', data = df, hue = 'loan_status')
df_loanpaid = df.pivot_table(index= "emp_length", columns = 'loan_status', aggfunc = 'count')['address']
df_loanpaid['Default Ratio'] = df_loanpaid['Charged Off']/df_loanpaid['Fully Paid']
df_loanpaid
df = df.drop('emp_length', axis = 1)
df = df.drop('title',axis = 1)
df['mort_acc'].value_counts()
df.corr()['mort_acc']
total_mort_acc_avg = df.groupby("total_acc").mean()['mort_acc']
total_mort_acc_avg
def fillinmortacc(total_acc, mort_acc):
if np.isnan(mort_acc):
return total_mort_acc_avg[total_acc]
else:
return mort_acc
df['mort_acc'] = df.apply(lambda x: fillinmortacc(x['total_acc'], x['mort_acc']),axis=1)
#df.pivot_table(index = 'total_acc', values = 'mort_acc', aggfunc = 'mean')
df['mort_acc'].head()
df.info()
df = df.dropna()
df.info()
df.select_dtypes(['object']).columns
df['term'] = df['term'].apply(lambda term:int(term[:3]))
df['term'].value_counts()
df = df.drop('grade', axis = 1)
dummies = pd.get_dummies(df['sub_grade'], drop_first= True)
df = df.drop('sub_grade', axis = 1)
df = pd.concat([df,dummies],axis = 1)
dummies = pd.get_dummies(df[['verification_status', 'application_type', 'initial_list_status', 'purpose']], drop_first= True)
df = df.drop(['verification_status', 'application_type', 'initial_list_status', 'purpose'], axis = 1)
df = pd.concat([df,dummies],axis = 1)
df.info()
df['home_ownership'].value_counts()
def updhomeown(homestat):
if homestat == "NONE" or homestat == "ANY":
return "OTHER"
else:
return homestat
df['home_ownership'] = df['home_ownership'].apply(updhomeown)
df['home_ownership'].value_counts()
df = pd.concat(
[df.drop('home_ownership', axis = 1), pd.get_dummies(df['home_ownership'], drop_first = True)], axis = 1)
df['zips'] = df['address'].apply(lambda address: address[-5:])
df = pd.concat([df.drop('zips', axis = 1), pd.get_dummies(df['zips'],drop_first=True)], axis = 1)
df.drop('issue_d', axis = 1)  # note: not assigned here; 'issue_d' is dropped for good two lines below
df['earliest_cr_line'] = df['earliest_cr_line'].apply(lambda credit: int(credit[-4:]))
df = df.drop(['issue_d', 'loan_status', 'address'], axis = 1)
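# --- Build the feature matrix and target, split/scale the data, then define and train the network ---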
y = df['loan_repaid'].values
x = df.drop('loan_repaid', axis = 1).values
print(len(df))
x_train, x_test, y_train, y_test= train_test_split(x, y, test_size = 0.2, random_state = 100)
scaler = MinMaxScaler()
x_train = scaler.fit_transform(x_train)
x_test = scaler.transform(x_test)
model = Sequential()
es = EarlyStopping(monitor="val_loss", patience = 5, mode = "min", verbose = 1)
model.add(Dense(156, activation = "relu"))
model.add(Dropout(0.2))
model.add(Dense(78, activation = "relu"))
model.add(Dropout(0.2))
model.add(Dense(39, activation = "relu"))
model.add(Dropout(0.2))
model.add(Dense(19, activation = "relu"))
model.add(Dropout(0.2))
model.add(Dense(1, activation = "sigmoid"))
model.compile(loss = "binary_crossentropy", optimizer = 'rmsprop')
model.fit(x = x_train, y = y_train, epochs = 25, batch_size=256, validation_data=(x_test,y_test),verbose=2, callbacks=[es])
from tensorflow.keras.models import load_model
losses = pd.DataFrame(model.history.history)
losses.plot()
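# --- Evaluate the trained classifier on the held-out test set and sanity-check a single customer ---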
predictions = model.predict(x_test)
predictions[predictions[:] < 0.5] = 0
predictions[predictions[:] >= 0.5] = 1
print(classification_report(y_test, predictions))
y_test
model.save("bankmodel.mdl")
load_model("bankmodel.mdl")
import random
random.seed(101)
random_ind = random.randint(0, len(df) - 1)  # pick a random row position
new_customer = df.drop('loan_repaid', axis = 1).iloc[random_ind]
new_customer
new_customer.values
new_customer.values.reshape(1,78)
new_customer = scaler.transform(new_customer.values.reshape(1,78))
model.predict(new_customer)
df.iloc[random_ind]['loan_repaid']  # positional lookup; row labels are no longer contiguous after dropna
```
<a href="https://colab.research.google.com/github/Mirailite/2048-python/blob/master/Covid19_analysis.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Analytic results from iBOTNOI www.botnoigroup.com
# Set of experiments aiming to guide some data analytics for Covid-19 cases
- Hopefully, with a richer dataset, we could extract more meaningful insights.
- Insights from the data show that
  1. the fatality rate correlates with age (less than 2% for age under 40, higher than 20% for age over 70)
  2. the fatality rate for males is higher (7% for males, 3% for females)
  3. the fatality rates for different countries are very different and require further investigation
# Example analysis of the Covid-19 data
With more data, additional insights could be extracted. Data analysis usually starts from questions, such as whether a given factor has an effect, and the data is then used to support or reject the hypothesis.
The findings of the analysis are as follows:
1. The fatality rate depends on age; older patients are at much higher risk (below 2% under age 40, above 20% over age 70)
2. The fatality rate for males is higher than for females (7% for males vs. 3% for females)
3. Fatality rates differ between countries; the reasons deserve further analysis
```
!pip install --upgrade -q pygsheets
```
```
import google.auth
from google.colab import auth
auth.authenticate_user()
import pygsheets
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as py  # the plotting helpers below use the alias `py`
credentials, _ = google.auth.default()
gc = pygsheets.client.Client(credentials)
def plotcandle(datList,labelList):
fig, ax = py.subplots()
ax.figure.figsize=(32, 12)
for i in range(len(labelList)):
ax.boxplot(datList[i], positions=[i/10], widths=0.05)
posList = np.array(list(range(len(labelList))))/10
py.xticks(posList, labelList,rotation='vertical')
py.grid()
py.show()
def plotbar(datList,labelList,title):
posList = range(len(datList))
py.figure(figsize=(16, 6))
py.bar(posList,datList)
py.title(title)
py.xticks(posList, labelList,rotation='vertical')
py.grid()
py.show()
```
# Dataset from
https://docs.google.com/spreadsheets/d/1jS24DjSPVWa4iuxuD4OAXrE3QeI8c9BC1hSlqr-NMiU/edit#gid=1187587451
has detailed information such as summary, age, gender, and symptoms for some cases, and
https://github.com/CSSEGISandData/COVID-19/blob/master/csse_covid_19_data/csse_covid_19_daily_reports/03-14-2020.csv
which provides daily updated stats per country
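As a quick sanity check, the daily report can also be read directly with pandas. The sketch below assumes the usual `raw.githubusercontent.com` mirror of the file linked above (the exact path is an assumption and may change as the repository evolves):
```
import pandas as pd

# assumed raw-file mirror of the daily report linked above
raw_url = ('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/'
           'csse_covid_19_data/csse_covid_19_daily_reports/03-14-2020.csv')
daily = pd.read_csv(raw_url)                                  # one row per province/region
daily = daily[['Country/Region', 'Confirmed', 'Deaths', 'Recovered']]
print(daily.groupby('Country/Region').sum().head())           # aggregate to country level
```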
```
sheet = gc.open_by_url('https://docs.google.com/spreadsheets/d/1Ojyifha6IqBKb1SIvst-4pEyJbAXFUJee6iHmxS9B1I/edit?usp=sharing')
dat = sheet.sheet1.get_as_df()
dat = dat[['summary','country','death','symptom']]
odat = dat.copy()
odat['age'] = dat['summary'].str.extract(r'( \d+,)')
odat['age'] = odat['age'].str.extract(r'(\d+)').astype('float')
odat['gender'] = dat['summary'].str.extract(r'(male|female)')
odat['death'] = odat['death'] == 1
odat = odat.dropna()
```
Create age range data
```
odat['agegroup'] = 'agegroup'
ageDic = {}
ageDic['1-10'] = (1,10)
ageDic['10-20'] = (10,20)
ageDic['20-30'] = (20,30)
ageDic['30-40'] = (30,40)
ageDic['40-50'] = (40,50)
ageDic['50-60'] = (50,60)
ageDic['60-70'] = (60,70)
ageDic['70+'] = (70,120)
for k in ageDic.keys():
lb,ub = ageDic[k]
    odat.loc[(odat['age']>=lb) & (odat['age']<ub), 'agegroup'] = k
odat
```
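The same bins can also be produced with `pd.cut`; the following is a small sketch (not used by the rest of the notebook) that checks it agrees with the loop above:
```
# same bin edges and labels as ageDic above
bins = [1, 10, 20, 30, 40, 50, 60, 70, 120]
labels = ['1-10', '10-20', '20-30', '30-40', '40-50', '50-60', '60-70', '70+']
agegroup_cut = pd.cut(odat['age'], bins=bins, labels=labels, right=False)
# fraction of rows where the pd.cut binning matches the loop-based binning
(agegroup_cut.astype(str) == odat['agegroup']).mean()
```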
# 1. Age analysis
1.1 Does age information correlate with confirmed cases?
```
agestat = odat.groupby('agegroup').count()[['age']]
agestat['ageratio'] = agestat['age']/sum(agestat['age'])*100
agestat
plotbar(agestat['ageratio'].values,agestat.index.values,'age confirmed case (%)')
```
1.2 Are the confirmed age distributions similar across different countries?
```
def getattributedata_bycountry(dat,attr):
    dat = dat[dat['country']!=''].copy()  # work on a copy to avoid SettingWithCopy warnings
    dat['age'] = dat[attr].astype('float')
medianage = dat.groupby(by='country').mean()
cList = list(medianage.sort_values(by='age').index)
cFrame = []
nclist = []
cdat = dat['age'].astype('float').dropna().values
cFrame.append(cdat)
nclist.append('all')
for c in cList:
cdat = dat[dat['country']==c]['age'].astype('float').dropna().values
if len(cdat)>5:
cFrame.append(cdat)
nclist.append(c)
return cFrame,nclist
cFrame,nclist = getattributedata_bycountry(odat,'age')
mList = []
for c in cFrame:
mList.append([c.mean(),c.std()])
agedat = pd.DataFrame(data=mList,columns=['mean','std'])
agedat.index = nclist
agedat = agedat.sort_values(by='mean')
plotbar(agedat['mean'],agedat.index.values,'Mean confirmed age by country')
```
What do the age distributions look like in different countries?
```
plotcandle(cFrame,nclist)
```
1.3 Does age information correlate with death cases?
```
drDic = {}
for k in ageDic.keys():
adat = odat[odat['agegroup']==k]
drDic[k] = (sum(adat['death']),len(adat['death']),sum(adat['death'])/len(adat['death'])*100)
#drDic.values()
drFrame = pd.DataFrame(data=drDic.values(),columns=['death-count','confirmed-count','death-ratio'])
drFrame.index = drDic.keys()
drFrame
plotbar(drFrame['death-ratio'],list(drFrame.index),'Death/Confirmed ratio (%)')
```
# 2. Gender analysis
2.1 What is the ratio between male and female confirmed cases across different countries?
```
odat['male'] = (odat['gender']=='male').astype('int')
mstat = odat[['country','male']].groupby('country').sum()/odat[['country','male']].groupby('country').count()
mstat = mstat.sort_values(by='male')
clist = np.array(mstat.index)
mstatvalues = mstat.values[:,0]
plotbar(mstatvalues,clist,'male ratio')
```
2.2 What is the death ratio between male and female?
```
gdat = odat[['death','gender']]
gcount = gdat.groupby(by='gender').count()  # renamed from `gc`, which would shadow the pygsheets client above
gd = gdat.groupby(by='gender').sum()
gender_dratio = gd['death']/gcount['death']
gender_dratio
plotbar(gender_dratio,gender_dratio.index,'Death ratio per Gender (%)')
```
# 3. Country analysis
3.1 Are there differences of death/confirmed ratio by country?
```
import requests
url= 'https://github.com/CSSEGISandData/COVID-19/blob/master/csse_covid_19_data/csse_covid_19_daily_reports/03-14-2020.csv'
s=requests.get(url)
s = s.content
dat = pd.read_html(s)[0]
dat = dat[['Country/Region','Confirmed','Deaths','Recovered']]
dat = dat.groupby(by = 'Country/Region').sum()
dat['DeathRatio'] = dat['Deaths']/dat['Confirmed']*100
dat = dat.sort_values(by='DeathRatio',ascending=False).iloc[0:50]
plotbar(dat['DeathRatio'].values[3:50],dat.index.values[3:50],'Death/Confirmed ratio')
```
# 4. Symptom
4.1 What are the common symptoms?
```
symptomList = odat[odat['symptom'].str.len()>2]
symptomList = symptomList['symptom'].values
symDic = {}
for s in symptomList:
sList = s.split(',')
for st in sList:
st = st.strip()
try:
symDic[st] = symDic[st] + 1
except:
symDic[st] = 1
symFrame = pd.DataFrame()
symFrame['symptom'] = symDic.keys()
symFrame['count'] = symDic.values()
symFrame = symFrame.sort_values(by='count',ascending=False)
symFrame = symFrame.iloc[0:20]
symFrame['ncount'] = (symFrame['count']/len(symptomList))*100
plotbar(symFrame['ncount'],symFrame['symptom'],'Percentage of symptom confirmed cases (%)')
```
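The same tally can be written with `collections.Counter`; a brief sketch that should reproduce the counts above up to ordering:
```
from collections import Counter

# split each comma-separated symptom string and count the individual symptoms
sym_counter = Counter(s.strip() for entry in symptomList for s in entry.split(','))
top20 = pd.DataFrame(sym_counter.most_common(20), columns=['symptom', 'count'])
top20['ncount'] = top20['count'] / len(symptomList) * 100
top20.head()
```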
# 5. Is the data sufficient to do death prediction?
```
from sklearn.model_selection import train_test_split
ddat = odat[['age', 'male', 'death']].dropna()  # assumed modeling frame: age and gender features plus the death label
X_train, X_test, y_train, y_test = train_test_split(ddat[['age','male']], ddat['death'].astype('float'), test_size=0.1, random_state=0)
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_recall_curve
clf = GradientBoostingClassifier()
clf.fit(X_train,y_train)
pdeath = clf.predict_proba(X_test)
pdeath = pdeath[:,1]  # probability of the positive (death) class
pr, re, tr = precision_recall_curve(y_test, pdeath)
py.plot(re,pr)
py.show()
```
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.learner import *
import torchtext
from torchtext import vocab, data
from torchtext.datasets import language_modeling
from fastai.rnn_reg import *
from fastai.rnn_train import *
from fastai.nlp import *
from fastai.lm_rnn import *
import dill as pickle
import spacy
```
## Language modeling
### Data
The [large movie review dataset](http://ai.stanford.edu/~amaas/data/sentiment/) contains a collection of 50,000 reviews from IMDB. The dataset contains an even number of positive and negative reviews. The authors considered only highly polarized reviews. A negative review has a score ≤ 4 out of 10, and a positive review has a score ≥ 7 out of 10. Neutral reviews are not included in the dataset. The dataset is divided into training and test sets, each containing 25,000 labeled reviews.
The **sentiment classification task** consists of predicting the polarity (positive or negative) of a given text.
However, before we try to classify *sentiment*, we will simply try to create a *language model*; that is, a model that can predict the next word in a sentence. Why? Because our model first needs to understand the structure of English, before we can expect it to recognize positive vs negative sentiment.
So our plan of attack is the same as we used for Dogs v Cats: pretrain a model to do one thing (predict the next word), and fine tune it to do something else (classify sentiment).
Unfortunately, there are no good pretrained language models available to download, so we need to create our own. To follow along with this notebook, we suggest downloading the dataset from [this location](http://files.fast.ai/data/aclImdb.tgz) on files.fast.ai.
```
PATH='data/aclImdb/'
TRN_PATH = 'train/all/'
VAL_PATH = 'test/all/'
TRN = f'{PATH}{TRN_PATH}'
VAL = f'{PATH}{VAL_PATH}'
%ls {PATH}
```
Let's look inside the training folder...
```
trn_files = !ls {TRN}
trn_files[:10]
```
...and at an example review.
```
review = !cat {TRN}{trn_files[6]}
review[0]
```
Sounds like I'd really enjoy *Zombiegeddon*...
Now we'll check how many words are in the dataset.
```
!find {TRN} -name '*.txt' | xargs cat | wc -w
!find {VAL} -name '*.txt' | xargs cat | wc -w
```
Before we can analyze text, we must first *tokenize* it. This refers to the process of splitting a sentence into an array of words (or more generally, into an array of *tokens*).
```
spacy_tok = spacy.load('en')
' '.join([sent.string.strip() for sent in spacy_tok(review[0])])
```
We use Pytorch's [torchtext](https://github.com/pytorch/text) library to preprocess our data, telling it to use the wonderful [spacy](https://spacy.io/) library to handle tokenization.
First, we create a torchtext *field*, which describes how to preprocess a piece of text - in this case, we tell torchtext to make everything lowercase, and tokenize it with spacy.
```
TEXT = data.Field(lower=True, tokenize="spacy")
```
fastai works closely with torchtext. We create a ModelData object for language modeling by taking advantage of `LanguageModelData`, passing it our torchtext field object, and the paths to our training, test, and validation sets. In this case, we don't have a separate test set, so we'll just use `VAL_PATH` for that too.
As well as the usual `bs` (batch size) parameter, we also now have `bptt`; this defines how many words are processed at a time in each row of the mini-batch. More importantly, it defines how many 'layers' we will backprop through. Making this number higher will increase time and memory requirements, but will improve the model's ability to handle long sentences.
```
bs=64; bptt=70
FILES = dict(train=TRN_PATH, validation=VAL_PATH, test=VAL_PATH)
md = LanguageModelData.from_text_files(PATH, TEXT, **FILES, bs=bs, bptt=bptt, min_freq=10)
```
After building our `ModelData` object, it automatically fills the `TEXT` object with a very important attribute: `TEXT.vocab`. This is a *vocabulary*, which stores which words (or *tokens*) have been seen in the text, and how each word will be mapped to a unique integer id. We'll need to use this information again later, so we save it.
*(Technical note: python's standard `Pickle` library can't handle this correctly, so at the top of this notebook we used the `dill` library instead and imported it as `pickle`)*.
```
pickle.dump(TEXT, open(f'{PATH}models/TEXT.pkl','wb'))
```
Here are: the number of batches; the number of unique tokens in the vocab; the number of tokens in the training set; and the number of sentences.
```
len(md.trn_dl), md.nt, len(md.trn_ds), len(md.trn_ds[0].text)
```
This is the start of the mapping from integer IDs to unique tokens.
```
# 'itos': 'int-to-string'
TEXT.vocab.itos[:12]
# 'stoi': 'string to int'
TEXT.vocab.stoi['the']
```
Note that in a `LanguageModelData` object there is only one item in each dataset: all the words of the text joined together.
```
md.trn_ds[0].text[:12]
```
torchtext will handle turning these words into integer IDs for us automatically.
```
TEXT.numericalize([md.trn_ds[0].text[:12]])
```
Our `LanguageModelData` object will create batches with 64 columns (that's our batch size), and varying sequence lengths of around 70 tokens (that's our `bptt` parameter - *backprop through time*).
Each batch also contains the exact same data as labels, but one word later in the text - since we're trying to always predict the next word. The labels are flattened into a 1d array.
```
next(iter(md.trn_dl))
```
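To make the 'labels are the same data, one word later' idea concrete, here is a tiny standalone sketch (plain Python, independent of torchtext):
```
# toy token-id sequence standing in for one column of a language-model batch
tokens = [11, 42, 7, 99, 3, 18]
inputs  = tokens[:-1]   # what the model sees at each step
targets = tokens[1:]    # what it is asked to predict: the next token
for x, y in zip(inputs, targets):
    print(f'input {x:3d} -> target {y:3d}')
```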
### Train
We have a number of parameters to set - we'll learn more about these later, but you should find these values suitable for many problems.
```
em_sz = 200 # size of each embedding vector
nh = 500 # number of hidden activations per layer
nl = 3 # number of layers
```
Researchers have found that large amounts of *momentum* (which we'll learn about later) don't work well with these kinds of *RNN* models, so we create a version of the *Adam* optimizer with less momentum than its default of `0.9`.
```
opt_fn = partial(optim.Adam, betas=(0.7, 0.99))
```
fastai uses a variant of the state of the art [AWD LSTM Language Model](https://arxiv.org/abs/1708.02182) developed by Stephen Merity. A key feature of this model is that it provides excellent regularization through [Dropout](https://en.wikipedia.org/wiki/Convolutional_neural_network#Dropout). There is no simple way known (yet!) to find the best values of the dropout parameters below - you just have to experiment...
However, the other parameters (`alpha`, `beta`, and `clip`) shouldn't generally need tuning.
```
learner = md.get_model(opt_fn, em_sz, nh, nl,
dropouti=0.05, dropout=0.05, wdrop=0.1, dropoute=0.02, dropouth=0.05)
learner.reg_fn = partial(seq2seq_reg, alpha=2, beta=1)
learner.clip=0.3
```
As you can see below, I gradually tuned the language model in a few stages. I possibly could have trained it further (it wasn't yet overfitting), but I didn't have time to experiment more. Maybe you can see if you can train it to a better accuracy! (I used `lr_find` to find a good learning rate, but didn't save the output in this notebook. Feel free to try running it yourself now.)
```
learner.fit(3e-3, 4, wds=1e-6, cycle_len=1, cycle_mult=2)
learner.save_encoder('adam1_enc')
learner.load_encoder('adam1_enc')
learner.load_cycle('adam3_10',2)
learner.fit(3e-3, 1, wds=1e-6, cycle_len=10)
learner.save_encoder('adam3_10_enc')
```
In the sentiment analysis section, we'll just need half of the language model - the *encoder*, so we save that part.
```
learner.save_encoder('adam3_20_enc')
learner.load_encoder('adam3_20_enc')
```
Language modeling accuracy is generally measured using the metric *perplexity*, which is simply `exp()` of the loss function we used.
```
math.exp(4.165)
pickle.dump(TEXT, open(f'{PATH}models/TEXT.pkl','wb'))
```
### Test
We can play around with our language model a bit to check it seems to be working OK. First, let's create a short bit of text to 'prime' a set of predictions. We'll use our torchtext field to numericalize it so we can feed it to our language model.
```
m=learner.model
ss=""". So, it wasn't quite was I was expecting, but I really liked it anyway! The best"""
s = [spacy_tok(ss)]
t=TEXT.numericalize(s)
' '.join(s[0])
```
We haven't yet added methods to make it easy to test a language model, so we'll need to manually go through the steps.
```
# Set batch size to 1
m[0].bs=1
# Turn off dropout
m.eval()
# Reset hidden state
m.reset()
# Get predictions from model
res,*_ = m(t)
# Put the batch size back to what it was
m[0].bs=bs
```
Let's see what the top 10 predictions were for the next word after our short text:
```
nexts = torch.topk(res[-1], 10)[1]
[TEXT.vocab.itos[o] for o in to_np(nexts)]
```
...and let's see if our model can generate a bit more text all by itself!
```
print(ss,"\n")
for i in range(50):
n=res[-1].topk(2)[1]
n = n[1] if n.data[0]==0 else n[0]
print(TEXT.vocab.itos[n.data[0]], end=' ')
res,*_ = m(n[0].unsqueeze(0))
print('...')
```
### Sentiment
We'll need to use the saved vocab from the language model, since we need to ensure the same words map to the same IDs.
```
TEXT = pickle.load(open(f'{PATH}models/TEXT.pkl','rb'))
```
`sequential=False` tells torchtext that a text field should not be tokenized (in this case, we just want to store the 'positive' or 'negative' single label).
`splits` is a torchtext method that creates train, test, and validation sets. The IMDB dataset is built into torchtext, so we can take advantage of that. Take a look at `lang_model-arxiv.ipynb` to see how to define your own fastai/torchtext datasets.
```
IMDB_LABEL = data.Field(sequential=False)
splits = torchtext.datasets.IMDB.splits(TEXT, IMDB_LABEL, 'data/')
t = splits[0].examples[0]
t.label, ' '.join(t.text[:16])
```
fastai can create a ModelData object directly from torchtext splits.
```
md2 = TextData.from_splits(PATH, splits, bs)
m3 = md2.get_model(opt_fn, 1500, bptt, emb_sz=em_sz, n_hid=nh, n_layers=nl,
dropout=0.1, dropouti=0.4, wdrop=0.5, dropoute=0.05, dropouth=0.3)
m3.reg_fn = partial(seq2seq_reg, alpha=2, beta=1)
m3.load_encoder(f'adam3_20_enc')
```
Because we're fine-tuning a pretrained model, we'll use differential learning rates, and also increase the max gradient for clipping, to allow the SGDR to work better.
```
m3.clip=25.
lrs=np.array([1e-4,1e-4,1e-4,1e-3,1e-2])
m3.freeze_to(-1)
m3.fit(lrs/2, 1, metrics=[accuracy])
m3.unfreeze()
m3.fit(lrs, 1, metrics=[accuracy], cycle_len=1)
m3.fit(lrs, 7, metrics=[accuracy], cycle_len=2, cycle_save_name='imdb2')
m3.load_cycle('imdb2', 4)
accuracy_np(*m3.predict_with_targs())
```
A recent paper from Bradbury et al, [Learned in translation: contextualized word vectors](https://einstein.ai/research/learned-in-translation-contextualized-word-vectors), has a handy summary of the latest academic research in solving this IMDB sentiment analysis problem. Many of the latest algorithms shown are tuned for this specific problem.

As you see, we just got a new state of the art result in sentiment analysis, decreasing the error from 5.9% to 5.5%! You should be able to get similarly world-class results on other NLP classification problems using the same basic steps.
There are many opportunities to further improve this, although we won't be able to get to them until part 2 of this course...
### End
# `keras-unet-collection.models` user guide
This user guide requires `keras-unet-collection==0.1.9` or higher.
## Content
* [**U-net**](#U-net)
* [**V-net**](#V-net)
* [**Attention-Unet**](#Attention-Unet)
* [**U-net++**](#U-net++)
* [**UNET 3+**](#UNET-3+)
* [**R2U-net**](#R2U-net)
* [**ResUnet-a**](#ResUnet-a)
* [**U^2-Net**](#U^2-Net)
* [**TransUNET**](#TransUNET)
* [**Swin-UNET**](#Swin-UNET)
```
import tensorflow as tf
from tensorflow import keras
print('TensorFlow {}; Keras {}'.format(tf.__version__, keras.__version__))
```
# Step 1: importing `models` from `keras_unet_collection`
```
from keras_unet_collection import models
```
# Step 2: defining your hyper-parameters
Commonly used hyper-parameter options are listed as follows. Full details are available through the Python `help()` function:
* `input_size`: a tuple or list that defines the shape of input tensors.
    * `models.resunet_a_2d`, `models.transunet_2d`, and `models.swin_unet_2d` support fixed sizes (int) only; the others also support `input_size=(None, None, 3)`.
    * `activation='PReLU'` is not compatible with `input_size=(None, None, 3)`.
* `filter_num`: a list that defines the number of convolutional filters per down- and up-sampling blocks.
* For `unet_2d`, `att_unet_2d`, `unet_plus_2d`, `r2_unet_2d`, depth $\ge$ 2 is expected.
* For `resunet_a_2d` and `u2net_2d`, depth $\ge$ 3 is expected.
* `n_labels`: number of output targets, e.g., `n_labels=2` for binary classification.
* `activation`: the activation function of hidden layers. Available choices are `'ReLU'`, `'LeakyReLU'`, `'PReLU'`, `'ELU'`, `'GELU'`, `'Snake'`.
* `output_activation`: the activation function of the output layer. Recommended choices are `'Sigmoid'`, `'Softmax'`, `None` (linear), `'Snake'`.
* `batch_norm`: if specified as True, all convolutional layers will be configured as stacks of "Conv2D-BN-Activation".
* `stack_num_down`: number of convolutional layers per downsampling level.
* `stack_num_up`: number of convolutional layers (after concatenation) per upsampling level.
* `pool`: the configuration of downsampling (encoding) blocks.
* `pool=False`: downsampling with a convolutional layer (2-by-2 convolution kernels with 2 strides; optional batch normalization and activation).
* `pool=True` or `pool='max'` downsampling with a max-pooling layer.
* `pool='ave'` downsampling with a average-pooling layer.
* `unpool`: the configuration of upsampling (decoding) blocks.
* `unpool=False`: upsampling with a transpose convolutional layer (2-by-2 convolution kernels with 2 strides; optional batch normalization and activation).
* `unpool=True` or `unpool='bilinear'` upsampling with bilinear interpolation.
    * `unpool='nearest'` upsampling with nearest-neighbor interpolation.
* `name`: user-specified prefix of the configured layer and model. Use `keras.models.Model.summary` to identify the exact name of each layer.
# Step 3: Configuring your model
**Note**
Configured models can be saved through `model.save(filepath, save_traces=True)`, but they may contain python objects that are not part of `tensorflow.keras`. Thus, when loading the model, it is preferred to load the weights only and to set/freeze them within a new configuration.
```python
# e.g.
weights = dummy_loader(model_old_path)   # load the stored weights from the old model file
model_new = swin_transformer_model(...)  # re-create the same configuration (pseudocode)
model_new.set_weights(weights)           # transfer the weights into the new model
```
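For instance, here is one possible weights-only save/reload sketch built from the Swin-UNET configuration shown later in this guide; the file name and hyper-parameters are placeholders, and only standard `tf.keras` weight methods are used:
```
from keras_unet_collection import models

def build_swin():
    # illustrative hyper-parameters; reuse whatever configuration the original model had
    return models.swin_unet_2d((128, 128, 3), filter_num_begin=64, n_labels=3, depth=4,
                               stack_num_down=2, stack_num_up=2, patch_size=(2, 2),
                               num_heads=[4, 8, 8, 8], window_size=[4, 2, 2, 2], num_mlp=512,
                               output_activation='Softmax', shift_window=True, name='swin_unet')

model_old = build_swin()
# ... train model_old ...
model_old.save_weights('swin_unet_weights.h5')    # store the weights only

model_new = build_swin()                          # re-create the exact same configuration
model_new.load_weights('swin_unet_weights.h5')    # transfer the trained weights
model_new.trainable = False                       # optionally freeze them
```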
## U-net
**Example 1**: U-net for binary classification with:
1. Five down- and upsampling levels (or four downsampling levels and one bottom level).
2. Two convolutional layers per downsampling level.
3. One convolutional layer (after concatenation) per upsampling level.
4. Gaussian Error Linear Unit (GELU) activation, Softmax output activation, batch normalization.
5. Downsampling through Maxpooling.
6. Upsampling through nearest-neighbor interpolation.
```
model = models.unet_2d((None, None, 3), [64, 128, 256, 512, 1024], n_labels=2,
stack_num_down=2, stack_num_up=1,
activation='GELU', output_activation='Softmax',
batch_norm=True, pool='max', unpool='nearest', name='unet')
```
## V-net
**Example 2**: Vnet (originally proposed for 3-d inputs, here modified for 2-d inputs) for binary classification with:
1. Input size of (256, 256, 1); PReLU does not support input tensors with shape entries of None.
2. Five down- and upsampling levels (or four downsampling levels and one bottom level).
3. The number of stacked convolutional layers in the residual path increases with the downsampling level from one to three (and decreases symmetrically along the upsampling levels).
    * `res_num_ini=1`
    * `res_num_max=3`
4. PReLU activation, Softmax output activation, batch normalization.
5. Downsampling through stride convolutional layers.
6. Upsampling through transpose convolutional layers.
```
model = models.vnet_2d((256, 256, 1), filter_num=[16, 32, 64, 128, 256], n_labels=2,
res_num_ini=1, res_num_max=3,
activation='PReLU', output_activation='Softmax',
batch_norm=True, pool=False, unpool=False, name='vnet')
```
## Attention-Unet
**Example 3**: attention-Unet for single target regression with:
1. Four down- and upsampling levels.
2. Two convolutional layers per downsampling level.
3. Two convolutional layers (after concatenation) per upsampling level.
2. ReLU activation, linear output activation (None), batch normalization.
3. Additive attention, ReLU attention activation.
4. Downsampling through stride convolutional layers.
5. Upsampling through bilinear interpolation.
```
model = models.att_unet_2d((None, None, 3), [64, 128, 256, 512], n_labels=1,
stack_num_down=2, stack_num_up=2,
activation='ReLU', atten_activation='ReLU', attention='add', output_activation=None,
batch_norm=True, pool=False, unpool='bilinear', name='attunet')
```
## U-net++
**Example 4**: U-net++ for three-label classification with:
1. Four down- and upsampling levels.
2. Two convolutional layers per downsampling level.
3. Two convolutional layers (after concatenation) per upsampling level.
2. LeakyReLU activation, Softmax output activation, no batch normalization.
3. Downsampling through Maxpooling.
4. Upsampling through transpose convolutional layers.
5. Deep supervision.
```
model = models.unet_plus_2d((None, None, 3), [64, 128, 256, 512], n_labels=3,
stack_num_down=2, stack_num_up=2,
activation='LeakyReLU', output_activation='Softmax',
batch_norm=False, pool='max', unpool=False, deep_supervision=True, name='xnet')
```
## UNET 3+
**Example 5**: UNet 3+ for binary classification with:
1. Four down- and upsampling levels.
2. Two convolutional layers per downsampling level.
3. One convolutional layer (after concatenation) per upsampling level.
2. ReLU activation, Sigmoid output activation, batch normalization.
3. Downsampling through Maxpooling.
4. Upsampling through transpose convolutional layers.
5. Deep supervision.
```
model = models.unet_3plus_2d((128, 128, 3), n_labels=2, filter_num_down=[64, 128, 256, 512],
filter_num_skip='auto', filter_num_aggregate='auto',
stack_num_down=2, stack_num_up=1, activation='ReLU', output_activation='Sigmoid',
batch_norm=True, pool='max', unpool=False, deep_supervision=True, name='unet3plus')
```
* `filter_num_skip` and `filter_num_aggregate` can be specified explicitly:
```
model = models.unet_3plus_2d((128, 128, 3), n_labels=2, filter_num_down=[64, 128, 256, 512],
filter_num_skip=[64, 64, 64], filter_num_aggregate=256,
stack_num_down=2, stack_num_up=1, activation='ReLU', output_activation='Sigmoid',
batch_norm=True, pool='max', unpool=False, deep_supervision=True, name='unet3plus')
```
## R2U-net
**Example 6**: R2U-net for binary classification with:
1. Four down- and upsampling levels.
2. Two recurrent convolutional layers with two iterations per down- and upsampling level.
2. ReLU activation, Softmax output activation, no batch normalization.
3. Downsampling through Maxpooling.
4. Upsampling through nearest-neighbor interpolation.
```
model = models.r2_unet_2d((None, None, 3), [64, 128, 256, 512], n_labels=2,
stack_num_down=2, stack_num_up=1, recur_num=2,
activation='ReLU', output_activation='Softmax',
batch_norm=True, pool='max', unpool='nearest', name='r2unet')
```
## ResUnet-a
**Example 7**: ResUnet-a for 16-label classification with:
1. input size of (128, 128, 3)
1. Six downsampling levels followed by an Atrous Spatial Pyramid Pooling (ASPP) layer with 256 filters.
1. Six upsampling levels followed by an ASPP layer with 128 filters.
2. dilation rates of {1, 3, 15, 31} for shallow layers, {1,3,15} for intermediate layers, and {1,} for deep layers.
3. ReLU activation, Sigmoid output activation, batch normalization.
4. Downsampling through stride convolutional layers.
4. Upsampling through nearest-neighbor interpolation.
```
model = models.resunet_a_2d((128, 128, 3), [32, 64, 128, 256, 512, 1024],
dilation_num=[1, 3, 15, 31],
n_labels=16, aspp_num_down=256, aspp_num_up=128,
activation='ReLU', output_activation='Sigmoid',
batch_norm=True, pool=False, unpool='nearest', name='resunet')
```
* `dilation_num` can be specified per down- and upsampling level:
```
model = models.resunet_a_2d((128, 128, 3), [32, 64, 128, 256, 512, 1024],
dilation_num=[[1, 3, 15, 31], [1, 3, 15, 31], [1, 3, 15], [1, 3, 15], [1,], [1,],],
n_labels=16, aspp_num_down=256, aspp_num_up=128,
activation='ReLU', output_activation='Sigmoid',
batch_norm=True, pool=False, unpool='nearest', name='resunet')
```
## U^2-Net
**Example 8**: U^2-Net for binary classification with:
1. Six downsampling levels with the first four layers built with RSU, and the last two (one downsampling layer, one bottom layer) built with RSU-F4.
* `filter_num_down=[64, 128, 256, 512]`
* `filter_mid_num_down=[32, 32, 64, 128]`
* `filter_4f_num=[512, 512]`
* `filter_4f_mid_num=[256, 256]`
1. Six upsampling levels with the deepest layer built with RSU-F4, and the other four layers built with RSU.
* `filter_num_up=[64, 64, 128, 256]`
* `filter_mid_num_up=[16, 32, 64, 128]`
3. ReLU activation, Sigmoid output activation, batch normalization.
4. Deep supervision
5. Downsampling through stride convolutional layers.
6. Upsampling through transpose convolutional layers.
*In the original work of U^2-Net, down- and upsampling were achieved through maxpooling (`pool=True` or `pool='max'`) and bilinear interpolation (`unpool=True` or unpool=`'bilinear'`).
```
model = models.u2net_2d((128, 128, 3), n_labels=2,
filter_num_down=[64, 128, 256, 512], filter_num_up=[64, 64, 128, 256],
filter_mid_num_down=[32, 32, 64, 128], filter_mid_num_up=[16, 32, 64, 128],
filter_4f_num=[512, 512], filter_4f_mid_num=[256, 256],
activation='ReLU', output_activation=None,
batch_norm=True, pool=False, unpool=False, deep_supervision=True, name='u2net')
```
* `u2net_2d` supports automated determination of filter numbers per down- and upsampling level. Auto-mode may produce a slightly larger network.
```
model = models.u2net_2d((None, None, 3), n_labels=2,
filter_num_down=[64, 128, 256, 512],
activation='ReLU', output_activation='Sigmoid',
batch_norm=True, pool=False, unpool=False, deep_supervision=True, name='u2net')
```
## TransUNET
**Example 9**: TransUNET for 12-label classification with:
* input size of (512, 512, 3)
* Four down- and upsampling levels.
* Two convolutional layers per downsampling level.
* Two convolutional layers (after concatenation) per upsampling level.
* 12 transformer blocks (`num_transformer=12`).
* 12 attention heads (`num_heads=12`).
* 3072 MLP nodes per vision transformer (`num_mlp=3072`).
* 768 embedding dimensions (`embed_dim=768`).
* Gaussian Error Linear Unit (GELU) activation for transformer MLPs.
* ReLU activation, softmax output activation, batch normalization.
* Downsampling through maxpooling.
* Upsampling through bilinear interpolation.
```
model = models.transunet_2d((512, 512, 3), filter_num=[64, 128, 256, 512], n_labels=12, stack_num_down=2, stack_num_up=2,
embed_dim=768, num_mlp=3072, num_heads=12, num_transformer=12,
activation='ReLU', mlp_activation='GELU', output_activation='Softmax',
batch_norm=True, pool=True, unpool='bilinear', name='transunet')
```
## Swin-UNET
**Example 10**: Swin-UNET for 3-label classification with:
* input size of (128, 128, 3)
* Four down- and upsampling levels (or three downsampling levels and one bottom level) (`depth=4`).
* Two Swin-Transformers per downsampling level.
* Two Swin-Transformers (after concatenation) per upsampling level.
* Extract 2-by-2 patches from the input (`patch_size=(2, 2)`)
* Embed 2-by-2 patches to 64 dimensions (`filter_num_begin=64`, a.k.a. the number of embedding dimensions).
* Number of attention heads for each down- and upsampling level: `num_heads=[4, 8, 8, 8]`.
* Size of attention windows for each down- and upsampling level: `window_size=[4, 2, 2, 2]`.
* 512 nodes per Swin-Transformer (`num_mlp=512`)
* Shift attention windows (i.e., Swin-MSA) (`shift_window=True`).
```
model = models.swin_unet_2d((128, 128, 3), filter_num_begin=64, n_labels=3, depth=4, stack_num_down=2, stack_num_up=2,
patch_size=(2, 2), num_heads=[4, 8, 8, 8], window_size=[4, 2, 2, 2], num_mlp=512,
output_activation='Softmax', shift_window=True, name='swin_unet')
```
# NumPy
NumPy is an extension module for numerical computations with Python. It provides the fundamental data structures, namely matrices and multi-dimensional arrays. NumPy itself is implemented in C and, through its Python interface, makes it possible to carry out computations quickly. The modules SciPy, Matplotlib and Pandas build on these data structures, which makes NumPy the foundation of the scientific Python libraries.
More about NumPy on the official website: http://numpy.org/
### Installing NumPy
pip (the package manager for Python modules on PyPI.org) is installed automatically with Python. The name pip stands for "pip installs packages", which matches the command syntax used to download Python modules.
```
# do not run: NumPy is already installed and the necessary permissions are missing
!pip3 install numpy
```
### Using math
```
from math import *
zahlen = [1, 2, 3, 4, 5, 6]
ergebnis = []
for x in zahlen:
y = sin(x)
ergebnis.append(y)
print(ergebnis)
type(zahlen)
```
### Using NumPy
```
import numpy as np
```
## Arrays / Vectors
$zahlen = \left(
\begin{array}{ccc}
1 \\ 2 \\ 3 \\ 4 \\
\end{array}
\right)$
```
zahlen = np.array([1, 2, 3, 4])
ergebnis = np.sin(zahlen)
print(ergebnis)
type(zahlen)
```
<b><font color="red">Hinweis:</font></b> Die Sinus-Funktion `sin()` aus dem Modul `math` und `numpy` sind nicht dieselben! Python erkennt anhand des Typs von `zahlen` auf welche Sinus-Funktion zugegriffen werden soll.
- `math` -> `list`
- `numpy` -> `numpy.ndarray`
### Types of the NumPy values
The array `zahlen` contains only integers, so the dtype of the vector is `int64`. When the sine values of `zahlen` are computed, the result `ergebnis` has dtype `float64` (floating-point numbers).
```
zahlen.dtype
ergebnis.dtype
```
### Defining the dtype of an array
```
# output as floating-point numbers
x = np.array([2,4,8,16], dtype=float)
x
# output as complex numbers
y = np.array([1,2,5,7], dtype=complex)
y
```
## Matrices
$M_1\ = \left(
\begin{array}{ccc}
1 & 2 & 3 \\
4 & 5 & 6 \\
\end{array}
\right)$
```
M1 = np.array([[1, 2, 3], [4, 5, 6]])
M1
```
### Showing the dimensions of a matrix
```
M1.shape
```
### Special functions
#### 3x3 zero matrix
```
M2 = np.zeros((3, 3))
M2
```
#### 3x4 matrix of ones
(For an identity matrix, use `np.eye()` instead.)
```
M3 = np.ones((3, 4))
M3
```
#### Zero vector
```
x = np.zeros(3)
x
```
#### Vector of ones
```
y = np.ones(3)
y
```
### `arange()` and `linspace()` for sequences of numbers
Syntax: `arange(start, stop, step)`
<b><font color="red">Note:</font></b> As with the `range()` function, the start value is inclusive and the stop value is exclusive.
```
time = np.arange(0, 5, 0.5)
time
```
Syntax: `linspace(start, stop, number of values)`; unlike `arange()`, the stop value is included by default.
```
t = np.linspace(0, 5, 11)
t
```
### Operations
```
x = np.arange(1, 6, 1)
y = np.arange(2, 12, 2)
```
$x=\left(
\begin{array}{ccc}
1 \\ 2 \\ 3 \\ 4 \\ 5 \\
\end{array}
\right)$
```
x
```
$y=\left(
\begin{array}{ccc}
2 \\ 4 \\ 6 \\ 8 \\ 10 \\
\end{array}
\right)$
```
y
```
### Addition
$\left(
\begin{array}{ccc}
1 \\ 2 \\ 3 \\ 4 \\ 5 \\
\end{array}
\right)
+
\left(
\begin{array}{ccc}
2 \\ 4 \\ 6 \\ 8 \\ 10 \\
\end{array}
\right)
=
\left(
\begin{array}{ccc}
3 \\ 6 \\ 9 \\ 12 \\ 15 \\
\end{array}
\right)$
```
x + y
```
### Subtraction
$\left(
\begin{array}{ccc}
1 \\ 2 \\ 3 \\ 4 \\ 5 \\
\end{array}
\right)
-
\left(
\begin{array}{ccc}
2 \\ 4 \\ 6 \\ 8 \\ 10 \\
\end{array}
\right)
=
\left(
\begin{array}{ccc}
-1 \\ -2 \\ -3 \\ -4 \\ -5 \\
\end{array}
\right)
$
```
x - y
```
### Scaling (multiplication by a scalar)
$\left(
\begin{array}{ccc}
1 \\ 2 \\ 3 \\ 4 \\ 5 \\
\end{array}
\right)
\cdot 4
=
\left(
\begin{array}{ccc}
4 \\ 8 \\ 12 \\ 16 \\ 20 \\
\end{array}
\right)
$
```
x*4
```
### Caution!
-> It takes some getting used to: multiplication and division, as well as powers and roots, of arrays and matrices are also possible, and they are all applied element-wise.
#### Multiplication
<b><font color="red">Note:</font></b> Not to be confused with the dot product: `*` multiplies element-wise!
$\left(
\begin{array}{ccc}
1 \\ 2 \\ 3 \\ 4 \\ 5 \\
\end{array}
\right)
\cdot
\left(
\begin{array}{ccc}
2 \\ 4 \\ 6 \\ 8 \\ 10 \\
\end{array}
\right)
=
\left(
\begin{array}{ccc}
2 \\ 8 \\ 18 \\ 32 \\ 50 \\
\end{array}
\right)
$
```
x * y
```
#### Division
$\left(
\begin{array}{ccc}
1 \\ 2 \\ 3 \\ 4 \\ 5 \\
\end{array}
\right)
/
\left(
\begin{array}{ccc}
2 \\ 4 \\ 6 \\ 8 \\ 10 \\
\end{array}
\right)
=
\left(
\begin{array}{ccc}
0.5 \\ 0.5 \\ 0.5 \\ 0.5 \\ 0.5 \\
\end{array}
\right)
$
```
x / y
```
#### Power
$\left(
\begin{array}{ccc}
1 \\ 2 \\ 3 \\ 4 \\ 5 \\
\end{array}
\right) ^2\
=
\left(
\begin{array}{ccc}
1 \\ 4 \\ 9 \\ 16 \\ 25 \\
\end{array}
\right)$
```
x**2
```
<b><font color="red">Note:</font></b> Using the `pow()` function from the `math` module on an array raises an error; use `**` or `np.power()` instead.
#### Square root
$\sqrt{
\left(
\begin{array}{ccc}
1 \\ 2 \\ 3 \\ 4 \\ 5 \\
\end{array}
\right)}
=
\left(
\begin{array}{ccc}
1.000 \\ 1.414 \\ 1.732 \\ 2.000 \\ 2.236 \\
\end{array}
\right)$
```
x**0.5
```
<b><font color="red">Note:</font></b> Using the `sqrt()` function from the `math` module on an array raises an error as well; `np.sqrt()` works element-wise (see the short example below).
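A short sketch of the NumPy alternatives, reusing the array `x` defined above:
```
# element-wise ufuncs that replace math.pow and math.sqrt
print(np.power(x, 2))   # same result as x**2
print(np.sqrt(x))       # same result as x**0.5

# math.sqrt (imported via "from math import *") only accepts a single scalar
try:
    sqrt(x)
except TypeError as err:
    print("math.sqrt on an array fails:", err)
```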
## Vector and matrix computations
### Dot product
(also called the inner product)
$a\cdot b = \left(
\begin{array}{ccc}
1 \\ 2 \\ 3 \\
\end{array}
\right)
\cdot
\left(
\begin{array}{ccc}
0 \\ 1 \\ 0 \\
\end{array}
\right)
= 2
$
```
a = np.array([1,2,3])
b = np.array([0,1,0])
print(np.inner(a, b))  # inner product of 1-D arrays
print(np.dot(a, b))    # dot() handles N-D arrays as well
print(a @ b)           # the @ operator is the usual shorthand
```
### Matrix product
```
a = np.array([[1,2],[3,4]])
b = np.array([[11,12],[13,14]])
print(np.inner(a, b))  # note: for 2-D arrays inner() contracts the last axes of both, not the matrix product
print(np.dot(a, b))
print(a @ b)
A = np.array([[11, 12, 13, 14], [21, 22, 23, 24], [31, 32, 33, 34]])
B = np.array([[5, 4, 2], [1, 0, 2], [3, 8, 2], [24, 12, 57]])
# print(np.inner(A, B))  # raises an error: the last axes of A and B do not match
print(np.dot(A, B))
print(A @ B)
```
### Cross product
$x\times y = \left(
\begin{array}{ccc}
1 \\ 2 \\ 3 \\
\end{array}
\right)
\times
\left(
\begin{array}{ccc}
4 \\ 5 \\ 6 \\
\end{array}
\right)
=
\left(
\begin{array}{ccc}
-3 \\ 6 \\ -3 \\
\end{array}
\right)
$
```
x = np.array([1, 2, 3])
y = np.array([4, 5, 6])
np.cross(x, y)
```
# Melodic Expectation with Markov Models
In this notebook we will look at Markov Chains for modelling musical expectation.
We have already seen a Markov Model in the class on key estimation with HMMs (Hidden Markov Models).
```
import os
import numpy as np
import partitura
from rnn import load_data
# To filter out short melodies: the minimum number of notes a sequence should have
min_seq_len = 10
sequences = load_data(min_seq_len)
```
## Tasks 1; data loading & preparation:
1. check out the content of the variable "sequences", if unclear have a look at the loading function.
2. which musical texture do these sequences exhibit? (https://en.wikipedia.org/wiki/Texture_(music))
3. write a function to derive sequences of pitches from this data.
4. write a function to derive sequences of durations from this data. Modify this to compute inter onset intervals (IOIs; the time between two consecutive onsets). Can you encode rests as well by comparing duration with IOI? (A sketch for the pitch and IOI extraction follows after this list.)
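A possible sketch for items 3 and 4. It assumes that each entry of `sequences` is a partitura-style structured note array with `pitch`, `onset_sec`, and `duration_sec` fields; if `load_data` returns a different layout, adapt the field access accordingly.
```
# assumption: every sequence exposes "pitch", "onset_sec" and "duration_sec"
def pitch_sequence(seq):
    return np.asarray(seq["pitch"], dtype=int)

def ioi_sequence(seq):
    onsets = np.asarray(seq["onset_sec"], dtype=float)
    return np.diff(onsets)          # IOI = time between consecutive onsets

def duration_sequence(seq):
    return np.asarray(seq["duration_sec"], dtype=float)

def rest_after_note(seq):
    # a rest follows a note whenever the IOI is larger than its duration
    return ioi_sequence(seq) > duration_sequence(seq)[:-1]

pitches = [pitch_sequence(s) for s in sequences]
iois = [ioi_sequence(s) for s in sequences]
```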
## Tasks 2; data exploration:
1. compute and draw a histogram of pitches. Modify this to show pitch classes!
2. compute and draw a histogram of IOIs. The input MIDI files are deadpan, i.e. the IOIs in seconds correspond exactly to the notated durations. Look through the IOIs and make an educated guess for the smallest float time unit that could serve as an integer time division. Encode the IOIs as integer multiples of this smallest unit. Which multiples make musical sense? (A sketch for both histograms follows after this list.)
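A quick sketch for the histograms, reusing `pitches` and `iois` from the sketch above (and therefore inheriting its assumptions about the data layout); the 0.25 s unit at the end is only an illustrative guess, so read the real value off your IOI histogram.
```
import matplotlib.pyplot as plt

all_pitches = np.concatenate(pitches)
all_iois = np.concatenate(iois)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 3))
ax1.hist(all_pitches, bins=np.arange(all_pitches.min(), all_pitches.max() + 2) - 0.5)
ax1.set_title("MIDI pitch")
ax2.hist(all_iois, bins=50)
ax2.set_title("IOI (seconds)")
plt.show()

# pitch classes: fold MIDI pitches into 0-11
pitch_classes = all_pitches % 12

# encode IOIs as integer multiples of a guessed smallest time unit
unit = 0.25  # assumption: adjust after inspecting the histogram
ioi_multiples = np.rint(all_iois / unit).astype(int)
```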
## Tasks 3; A Markov Chain:
1. choose a data type to model: pitch, pitch class, IOIs, or durations (with or without an encoding for rests). Concatenate all the sequences into one long data sequence.
2. You now have a sequence **X** of symbols from an alphabet **A** (the set of possible symbols of your chosen data type):
$$ \mathbf{X} = \{\mathbf{x}_0, \dots, \mathbf{x}_n \mid \mathbf{x}_i \in \mathbf{A} \;\; \forall i \in \{0, \dots, n\} \}$$
Compute the empirical conditional probability of seeing any symbol after just having seen any other:
$$ \mathbb{P}(\mathbf{x_i}\mid \mathbf{x_{i-1}}) $$
What is the dimensionality of this probability given $\lvert A \rvert = d $? Do you recall what this probability was called in the context of HMMs?
3. compute the entropy of the data (only your chosen type). Recall https://en.wikipedia.org/wiki/Entropy_(information_theory) (A sketch for items 2 and 3 follows after this list.)
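A minimal sketch for items 2 and 3. It assumes the concatenated data from item 1 is a 1-D array of symbols; a small toy sequence `X_demo` stands in for it here. The conditional probabilities form a $d \times d$ matrix, the analogue of the transition matrix of an HMM.
```
from collections import Counter

def transition_matrix(X):
    """Empirical first-order transition probabilities P(x_i | x_{i-1})."""
    alphabet = sorted(set(X))
    index = {s: i for i, s in enumerate(alphabet)}
    counts = np.zeros((len(alphabet), len(alphabet)))
    for prev, curr in zip(X[:-1], X[1:]):
        counts[index[prev], index[curr]] += 1
    # normalise each row to a conditional distribution; unseen rows stay zero
    row_sums = counts.sum(axis=1, keepdims=True)
    probs = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
    return alphabet, probs

def empirical_entropy(X):
    """Entropy (in bits) of the marginal symbol distribution."""
    counts = np.array(list(Counter(X).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# toy stand-in; replace X_demo with your concatenated pitch or IOI sequence
X_demo = np.array([60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60])
alphabet, probs = transition_matrix(X_demo)
print(probs.shape)                  # (d, d) with d = |A|
print(empirical_entropy(X_demo))
```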
## Tasks 4; Markov Chain Generation:
1. By computing the probability $ \mathbb{P}(\mathbf{x_i}\mid \mathbf{x_{i-1}}) $ in task 3 you have fully specified a discrete-time finite state space Markov Chain model (https://en.wikipedia.org/wiki/Discrete-time_Markov_chain)! Given an initial symbol "s_0", you can generate the subsequent symbols by sampling from the conditional probability distribution
$$ \mathbb{P}(\mathbf{x_i}\mid \mathbf{x_{i-1}} = \mathbf{s_{0}}) $$
Write a function that samples from a finite state space given an input probability distribution.
2. Use the previously defined function and the Markov Chain to write a sequence generator based on an initial symbol.
3. Start several "walkers", i.e. sampled/generated sequences. Compute the entropy of this generated data and compare it to the entropy in task 3. (A sketch follows after this list.)
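One way to sketch these three items, reusing `alphabet`, `probs`, and `empirical_entropy` from the sketch above; NumPy's random generator does the categorical sampling:
```
rng = np.random.default_rng(0)

def sample_next(alphabet, probs, current, rng=rng):
    # assumes "current" has at least one observed successor,
    # so that probs[i] is a proper probability distribution
    i = alphabet.index(current)
    return rng.choice(alphabet, p=probs[i])

def generate(alphabet, probs, start, length, rng=rng):
    seq = [start]
    for _ in range(length - 1):
        seq.append(sample_next(alphabet, probs, seq[-1], rng))
    return seq

# several independent "walkers" started from the same initial symbol
walkers = [generate(alphabet, probs, alphabet[0], 100) for _ in range(10)]
all_generated = np.concatenate([np.asarray(w) for w in walkers])
print(empirical_entropy(all_generated))   # compare with the entropy of the data
```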
## Tasks 5; n-gram Context Model:
1. The Markov Chains used until now have only very limited memory. In fact, they only ever know the last played pitch or duration. Longer memory models can be created by using the conditional probability of any new symbol based on an n-gram context of the symbol (https://en.wikipedia.org/wiki/N-gram):
$$ \mathbb{P}(\mathbf{x_i}\mid \mathbf{x_{i-1}}, \dots, \mathbf{x_{i-n}}) $$
This probability will generally not look like a matrix anymore, but we can easily encode it as a dictionary. Write a function that creates a 3-gram context model from the data sequence **X**! (A sketch follows after this list.)
2. The longer the context, the more data we need to get meaningful or even existing samples for all contexts (note that the number of different contexts grows exponentially with context length). What could we do to approximate the distribution for unseen contexts?
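A possible dictionary-based sketch for item 1, again demonstrated on the toy sequence `X_demo`. The unseen-context problem from item 2 shows up immediately: most 3-symbol contexts never occur, and smoothing or backing off to shorter contexts are common remedies.
```
from collections import defaultdict, Counter

def ngram_context_model(X, n=3):
    """Map each length-n context tuple to P(next symbol | context)."""
    counts = defaultdict(Counter)
    for i in range(n, len(X)):
        context = tuple(X[i - n:i])
        counts[context][X[i]] += 1
    # normalise the counts of every context to conditional probabilities
    return {ctx: {sym: c / sum(cnt.values()) for sym, c in cnt.items()}
            for ctx, cnt in counts.items()}

model_3gram = ngram_context_model(X_demo, n=3)
print(list(model_3gram.items())[:3])
```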
## Tasks 6; multi-type Markov Chains and back to music:
1. To generate a somewhat interesting melody, we want a sequence of both pitches and durations. If we encode rests too, we can generate any melody like this. So far our Markov Chains dealt with either pitch or duration/IOI. What could we do to combine them? Describe two approaches and explain when you would choose which one.
2. Implement a simple melody generator with pitch and IOI/duration (simplest: reuse the task 4.2 generator for the second data type and let the two run as independent sequences; a sketch follows below). Write some generated melodies to MIDI files!
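A minimal sketch of the independent-sequences variant. The names `pitch_alphabet`/`pitch_probs` and `ioi_alphabet`/`ioi_probs` are placeholders for two chains built with `transition_matrix` above (one on pitches, one on IOIs); writing the resulting events to MIDI is left to partitura or whichever MIDI library you prefer.
```
def generate_melody(pitch_alphabet, pitch_probs, ioi_alphabet, ioi_probs, length=32):
    """Combine two independent Markov chains into (pitch, onset) events."""
    gen_pitches = generate(pitch_alphabet, pitch_probs, pitch_alphabet[0], length)
    gen_iois = generate(ioi_alphabet, ioi_probs, ioi_alphabet[0], length)
    onset = 0.0
    melody = []
    for pitch, ioi in zip(gen_pitches, gen_iois):
        melody.append({"pitch": int(pitch), "onset_sec": onset})
        onset += float(ioi)
    return melody
```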
## (Tasks 7); more stuff for music:
1. Keys are perceptual centers of gravity in the pitch space, so if we transpose all the input sequences to the same key we can compute empirical pitch distributions within a key!
2. One solution to task 5.2 is to use Prediction by Partial Matching (PPM). This is the basis of the most elaborate probabilistic model of symbolic music, the Information Dynamics of Music (IDyOM). See the references here:
https://researchcommons.waikato.ac.nz/bitstream/handle/10289/9913/uow-cs-wp-1993-12.pdf
https://mtpearce.github.io/idyom/